CICS® Transaction Server for OS/390® 

CICS Performance Guide


Release 3

SC33-1699-03
Note!
Before using this information and the product it supports, be sure to read the general information under “Notices” on
page xiii.

Fourth edition (July 1999)


This edition applies to Release 3 of CICS Transaction Server for OS/390, program number 5655-147, and to all
subsequent versions, releases, and modifications until otherwise indicated in new editions. Make sure you are using
the correct edition for the level of the product.
This edition replaces and makes obsolete the previous editions. The technical changes for this edition are
summarized under "Summary of changes" and are indicated by a vertical bar to the left of a change.
Order publications through your IBM representative or the IBM branch office serving your locality. Publications are
not stocked at the address given below.
At the back of this publication is a page entitled “Sending your comments to IBM”. If you want to make comments,
but the methods described are not available to you, please address them to:
IBM United Kingdom Laboratories, Information Development,
Mail Point 095, Hursley Park, Winchester, Hampshire, England, SO21 2JN.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1983, 1999. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Notices . . . xiii
   Programming Interface Information . . . xiv
   Trademarks . . . xv

Preface . . . xvii
   What this book is about . . . xvii
   Who this book is for . . . xvii
   What you need to know to understand this book . . . xvii
   How to use this book . . . xvii
   Notes on terminology . . . xvii

Bibliography . . . xix
   CICS Transaction Server for OS/390 . . . xix
      CICS books for CICS Transaction Server for OS/390 . . . xix
      CICSPlex SM books for CICS Transaction Server for OS/390 . . . xx
      Other CICS books . . . xx
   Books from related libraries . . . xx
      ACF/VTAM . . . xx
      CICSPlex System Manager for MVS/ESA . . . xx
      DATABASE 2 . . . xx
      DATABASE 2 Performance Monitor (DB2PM) . . . xx
      DFSMS/MVS . . . xxi
      IMS/ESA . . . xxi
      MVS . . . xxi
      OS/390 RMF . . . xxi
      Tivoli Performance Reporter for OS/390 . . . xxi
      NetView Performance Monitor (NPM) . . . xxi
      Tuning tools . . . xxi
      Others . . . xxi
   Determining if a publication is current . . . xxii

Summary of changes . . . xxiii
|  Changes for CICS Transaction Server for OS/390 Release 3 . . . xxiii
   Changes for CICS Transaction Server for OS/390 Release 2 . . . xxiv
   Changes for the CICS Transaction Server Release 1 edition . . . xxiv
   Changes for the CICS/ESA 4.1 edition . . . xxv

Part 1. Setting performance objectives . . . 1

Chapter 1. Establishing performance objectives . . . 3
   Defining some terms . . . 3
   Defining performance objectives and priorities . . . 4
   Analyzing the current workload . . . 5
   Translating resource requirements into system objectives . . . 5

Chapter 2. Gathering data for performance objectives . . . 7
   Requirements definition phase . . . 7
   External design phase . . . 7
   Internal design phase . . . 7
   Coding and testing phase . . . 8
   Post-development review . . . 8
   Information supplied by end users . . . 8

Chapter 3. Performance monitoring and review . . . 11
   Deciding on monitoring activities and techniques . . . 11
   Developing monitoring activities and techniques . . . 12
   Planning the review process . . . 13
      When to review? . . . 13
      Dynamic monitoring . . . 13
      Daily monitoring . . . 14
      Weekly monitoring . . . 14
      Monthly monitoring . . . 15
   Monitoring for the future . . . 15
   Reviewing performance data . . . 16
   Confirming that the system-oriented objectives are reasonable . . . 16
   Typical review questions . . . 17
   Anticipating and monitoring system changes and growth . . . 19

Part 2. Tools that measure the performance of CICS . . . 21

Chapter 4. An overview of performance-measurement tools . . . 23
   CICS performance data . . . 24
      CICS statistics . . . 24
      The CICS monitoring facility . . . 24
      The sample statistics program (DFH0STAT) . . . 25
      CICS trace facilities . . . 26
      Other CICS data . . . 26
   Operating system performance data . . . 27
|     System management facility (SMF) . . . 27
      Resource measurement facility (RMF) . . . 27
      Generalized trace facility (GTF) . . . 29
      GTF reports . . . 30
      Tivoli Performance Reporter for OS/390 . . . 31
   Performance data for other products . . . 32
      ACF/VTAM . . . 32
      Virtual telecommunication access method (VTAM) trace . . . 32
      Network performance, analysis, and reporting system (NETPARS) . . . 32
      VTAM performance, analysis, and reporting system II (VTAMPARS II) . . . 33
      Generalized performance analysis reporting (GPAR) . . . 33

© Copyright IBM Corp. 1983, 1999


      VTAM storage management (SMS) trace . . . 33
      VTAM tuning statistics . . . 33
      NetView for MVS . . . 33
      NetView performance monitor (NPM) . . . 34
      LISTCAT (VSAM) . . . 34
      DB monitor (IMS) . . . 35
      Program isolation (PI) trace . . . 35
      IMS System Utilities/Database Tools (DBT) . . . 35
      IMS monitor summary and system analysis II (IMSASAP II) . . . 36
      DATABASE 2 Performance Monitor (DB2PM) . . . 36
      Teleprocessing network simulator (TPNS) . . . 37

Chapter 5. Using CICS statistics . . . 39
   Introduction to CICS statistics . . . 39
      Types of statistics data . . . 39
      Resetting statistics counters . . . 43
   Processing CICS statistics . . . 45
   Interpreting CICS statistics . . . 45
   Statistics domain statistics . . . 46
   Transaction manager statistics . . . 46
   Transaction class (TRANCLASS) statistics . . . 47
   CICS DB2 statistics . . . 47
   Dispatcher statistics . . . 47
      TCB statistics . . . 47
   Storage manager statistics . . . 48
   Loader statistics . . . 49
   Temporary storage statistics . . . 49
   Transient data statistics . . . 50
|  User domain statistics . . . 50
   VTAM statistics . . . 51
   Dump statistics . . . 53
   Enqueue statistics . . . 53
   Transaction statistics . . . 53
   Program statistics . . . 53
   Front end programming interface (FEPI) statistics . . . 54
   File statistics . . . 54
   Journalname and log stream statistics . . . 55
   LSRPOOL statistics . . . 56
   Recovery manager statistics . . . 56
   Terminal statistics . . . 57
   ISC/IRC system and mode entry statistics . . . 57
      Summary connection type for statistics fields . . . 57
      General guidance for interpreting ISC/IRC statistics . . . 58
      Are enough sessions defined? . . . 59
      Is the balance of contention winners to contention losers correct? . . . 60
      Is there conflicting usage of APPC modegroups? . . . 61
      What if there are unusually high numbers in the statistics report? . . . 62
   ISC/IRC attach time entries . . . 63
   Shared temporary storage queue server statistics . . . 64
|  Coupling facility data tables server statistics . . . 64
|  Named counter sequence number server statistics . . . 64

Chapter 6. The CICS monitoring facility . . . 65
   Introduction to CICS monitoring . . . 65
   The classes of monitoring data . . . 65
      Performance class data . . . 65
      Exception class data . . . 66
      The SYSEVENT class of monitoring data . . . 67
   CICS Monitoring Facility (CMF) and the MVS workload manager . . . 67
   Using CICS monitoring SYSEVENT information with RMF . . . 67
      CICS usage of RMF transaction reporting . . . 67
      CICS monitoring facility use of SYSEVENT . . . 67
      MVS IEAICS member . . . 68
      ERBRMF member for Monitor I session . . . 69
      ERBRMF member for Monitor II session . . . 69
      RMF operations . . . 69
|  Using the CICS monitoring facility with Tivoli Performance Reporter for OS/390 . . . 69
   Event monitoring points . . . 69
   The monitoring control table (MCT) . . . 71
      DFHMCT TYPE=EMP . . . 71
      DFHMCT TYPE=RECORD . . . 71
   Controlling CICS monitoring . . . 72
   Processing of CICS monitoring facility output . . . 72
   Performance implications . . . 73
   Interpreting CICS monitoring . . . 73
      Clocks and time stamps . . . 73
      Performance class data . . . 74
|     Performance data in group DFHCBTS . . . 83
      Performance data in group DFHCICS . . . 84
|     Performance data in group DFHDATA . . . 86
      Performance data in group DFHDEST . . . 87
|     Performance data in group DFHDOCH . . . 87
      Performance data in group DFHFEPI . . . 87
      Performance data in group DFHFILE . . . 88
      Performance data in group DFHJOUR . . . 90
      Performance data in group DFHMAPP . . . 90
      Performance data in group DFHPROG . . . 90
|     Performance data in group DFHSOCK . . . 92
      Performance data in group DFHSTOR . . . 92
      Performance data in group DFHSYNC . . . 95
      Performance data in group DFHTASK . . . 95
      Performance data in group DFHTEMP . . . 103
      Performance data in group DFHTERM . . . 104
|     Performance data in group DFHWEBB . . . 106
   Exception class data . . . 107
      Exception data field descriptions . . . 108

Chapter 7. Tivoli Performance Reporter for OS/390 . . . 113
   Overview . . . 113
   Using Tivoli Performance Reporter for OS/390 to report on CICS performance . . . 115
      Monitoring response time . . . 115
      Monitoring processor and storage use . . . 116
      Monitoring volumes and throughput . . . 116
      Combining CICS and DB2 performance data . . . 117
      Monitoring exception and incident data . . . 118
      Unit-of-work reporting . . . 119
      Monitoring availability . . . 119
      Monitoring SYSEVENT data . . . 119

Chapter 8. Managing Workloads . . . 123
   MVS workload manager . . . 123
      Benefits of using MVS Workload Manager . . . 123


      MVS workload management terms . . . 124
      Requirements for MVS workload management . . . 125
      Resource usage . . . 125
      Span of workload manager operation . . . 125
      Defining performance goals . . . 126
      Setting up service definitions . . . 127
      Guidelines for classifying CICS transactions . . . 131
      Using a service definition base . . . 131
      Using MVS workload manager . . . 131
|  CICSPlex SM workload management . . . 133
|     Benefits of using CICSPlex SM workload management . . . 133
|     Using CICSPlex SM workload management . . . 134

Chapter 9. Understanding RMF workload manager data . . . 135
   Explanation of terms used in RMF reports . . . 135
      The response time breakdown in percentage section . . . 135
      The state section . . . 137
   Interpreting the RMF workload activity data . . . 137
      RMF reporting intervals . . . 137
   Example: very large percentages in the response time breakdown . . . 140
      Possible explanations . . . 141
      Possible actions . . . 142
   Example: response time breakdown data is all zero . . . 142
      Possible explanations . . . 143
      Possible actions . . . 143
   Example: execution time greater than response time . . . 144
      Possible explanation . . . 144
      Possible actions . . . 144
   Example: large SWITCH LOCAL Time in CICS execution phase . . . 144
      Possible explanations . . . 145
      Possible actions . . . 145
   Example: fewer ended transactions with increased response times . . . 145
      Possible explanation . . . 145
      Possible action . . . 145

Part 3. Analyzing the performance of a CICS system . . . 147

Chapter 10. Overview of performance analysis . . . 149
   Establishing a measurement and evaluation plan . . . 150
   Investigating the overall system . . . 152
   Other ways to analyze performance . . . 153

Chapter 11. Identifying CICS constraints . . . 155
   Major CICS constraints . . . 155
   Response times . . . 156
   Storage stress . . . 157
      Controlling storage stress . . . 158
      Short-on-storage condition . . . 158
      Purging of tasks . . . 159
      CICS hang . . . 159
   Effect of program loading on CICS . . . 159
   What is paging? . . . 159
      Paging problems . . . 160
   Recovery from storage violation . . . 161
   Dealing with limit conditions . . . 161
   Identifying performance constraints . . . 162
      Hardware constraints . . . 162
      Software constraints . . . 163
   Resource contention . . . 164
   Solutions for poor response time . . . 165
   Symptoms and solutions for resource contention problems . . . 166
      DASD constraint . . . 167
      Communications network constraint . . . 167
      Remote systems constraints . . . 167
      Virtual storage constraint . . . 167
      Real storage constraint . . . 168
      Processor cycles constraint . . . 168

Chapter 12. CICS performance analysis . . . 169
   Assessing the performance of a DB/DC system . . . 169
      System conditions . . . 170
      Application conditions . . . 170
   Methods of performance analysis . . . 170
   Full-load measurement . . . 171
      CICS auxiliary trace . . . 171
      RMF . . . 172
      Comparison charts . . . 173
   Single-transaction measurement . . . 174
      CICS auxiliary trace . . . 175

Chapter 13. Tuning the system . . . 177
   Determining acceptable tuning trade-offs . . . 177
   Making the change to the system . . . 177
   Reviewing the results of tuning . . . 178

Part 4. Improving the performance of a CICS system . . . 179

Chapter 14. Performance checklists . . . 181
   Input/output contention checklist . . . 181
   Virtual storage above and below 16MB line checklist . . . 182
   Real storage checklist . . . 183
   Processor cycles checklist . . . 184

Chapter 15. MVS and DASD . . . 187
   Tuning CICS and MVS . . . 187
      Reducing MVS common system area requirements . . . 189
   Splitting online systems: availability . . . 189
      Limitations . . . 190
      Recommendations . . . 190
   Making CICS nonswappable . . . 190
      How implemented . . . 190
      Limitations . . . 190
      How monitored . . . 190
   Isolating (fencing) real storage for CICS (PWSS and PPGRTR) . . . 190
      Recommendations . . . 191
      How implemented . . . 191
      How monitored . . . 191
   Increasing the CICS region size . . . 192
      How implemented . . . 192
      How monitored . . . 192
   Giving CICS a high dispatching priority or performance group . . . 192
      How implemented . . . 193
      How monitored . . . 193
   Using job initiators . . . 193
      Effects . . . 194
      Limitations . . . 194
      How implemented . . . 194
      How monitored . . . 194
   Region exit interval (ICV) . . . 194
      Main effect . . . 195
      Secondary effects . . . 195
      Where useful . . . 196
      Limitations . . . 196
      Recommendations . . . 196
      How implemented . . . 197
      How monitored . . . 197
   Use of LLA (MVS library lookaside) . . . 197
      Effects of LLACOPY . . . 198
      The SIT Parameter LLACOPY . . . 198
   DASD tuning . . . 199
      Reducing the number of I/O operations . . . 199
      Tuning the I/O operations . . . 199
      Balancing I/O operations . . . 200

Chapter 16. Networking and VTAM . . . 201
   Terminal input/output area (TYPETERM IOAREALEN or TCT TIOAL) . . . 201
      Effects . . . 201
      Limitations . . . 202
      Recommendations . . . 202
      How implemented . . . 203
      How monitored . . . 203
   Receive-any input areas (RAMAX) . . . 203
      Effects . . . 203
      Where useful . . . 204
      Limitations . . . 204
      Recommendations . . . 204
      How implemented . . . 204
      How monitored . . . 204
   Receive-any pool (RAPOOL) . . . 204
      Effects . . . 205
      Where useful . . . 205
      Limitations . . . 205
      Recommendations . . . 206
      How implemented . . . 206
      How monitored . . . 206
   High performance option (HPO) with VTAM . . . 207
      Effects . . . 207
      Limitations . . . 207
      Recommendations . . . 207
      How implemented . . . 207
      How monitored . . . 207
   SNA transaction flows (MSGINTEG, and ONEWTE) . . . 208
      Effects . . . 208
      Where useful . . . 208
      Limitations . . . 208
      How implemented . . . 209
      How monitored . . . 209
   SNA chaining (TYPETERM RECEIVESIZE, BUILDCHAIN, and SENDSIZE) . . . 209
      Effects . . . 209
      Where useful . . . 210
      Limitations . . . 210
      Recommendations . . . 210
      How implemented . . . 210
      How monitored . . . 210
   Number of concurrent logon/logoff requests (OPNDLIM) . . . 210
      Effects . . . 211
      Where useful . . . 211
      Limitations . . . 211
      Recommendations . . . 211
      How implemented . . . 211
      How monitored . . . 211
   Terminal scan delay (ICVTSD) . . . 211
      Effects . . . 212
      Where useful . . . 213
      Limitations . . . 213
      Recommendations . . . 213
      How implemented . . . 214
      How monitored . . . 214
   Negative poll delay (NPDELAY) . . . 214
      NPDELAY and unsolicited-input messages in TCAM . . . 214
      Effects . . . 214
      Where useful . . . 215
   Compression of output terminal data streams . . . 215
      Limitations . . . 215
      Recommendations . . . 215
      How implemented . . . 216
      How monitored . . . 216
   Automatic installation of terminals . . . 216
      Maximum concurrent autoinstalls (AIQMAX) . . . 216
      The restart delay parameter (AIRDELAY) . . . 216
      The delete delay parameter (AILDELAY) . . . 217
      Effects . . . 218
      Recommendations . . . 218
      How monitored . . . 219

|  Chapter 17. CICS Web support . . . 221
|  CICS Web performance in a sysplex . . . 221
|  CICS Web support performance in a single address space . . . 222
|  CICS Web use of DOCTEMPLATE resources . . . 222
|  CICS Web support use of temporary storage . . . 223
|  CICS Web support of HTTP 1.0 persistent connections . . . 223
|  CICS Web security . . . 223
|  CICS Web 3270 support . . . 223
|  Secure sockets layer support . . . 224

Chapter 18. VSAM and file control . . . 225
   VSAM considerations: general objectives . . . 225
      Local shared resources (LSR) or Nonshared resources (NSR) . . . 225
      Number of strings . . . 227
      Size of control intervals . . . 229
      Number of buffers (NSR) . . . 230
      Number of buffers (LSR) . . . 230
      CICS calculation of LSR pool parameters . . . 231
      Data set name sharing . . . 232
      AIX considerations . . . 233
      Situations that cause extra physical I/O . . . 233
      Other VSAM definition parameters . . . 234
   VSAM resource usage (LSRPOOL) . . . 234
      Effects . . . 234
      Where useful . . . 234
      Limitations . . . 234
      Recommendations . . . 234
      How implemented . . . 234
   VSAM buffer allocations for NSR (INDEXBUFFERS and DATABUFFERS) . . . 235
      Effects . . . 235
      Where useful . . . 235
      Limitations . . . 235
      Recommendations . . . 235
      How implemented . . . 235
      How monitored . . . 236
   VSAM buffer allocations for LSR . . . 236
      Effects . . . 236
      Where useful . . . 236
      Recommendations . . . 236
      How implemented . . . 236
      How monitored . . . 236
   VSAM string settings for NSR (STRINGS) . . . 237
      Effects . . . 237
      Where useful . . . 237
      Limitations . . . 237
      Recommendations . . . 237
      How implemented . . . 237
      How monitored . . . 237
   VSAM string settings for LSR (STRINGS) . . . 238
      Effects . . . 238
      Where useful . . . 238
      Limitations . . . 238
      Recommendations . . . 238
      How implemented . . . 238
      How monitored . . . 238
   Maximum keylength for LSR (KEYLENGTH and MAXKEYLENGTH) . . . 239
      Effects . . . 239
      Where useful . . . 239
      Recommendations . . . 239
      How implemented . . . 239
   Resource percentile for LSR (SHARELIMIT) . . . 239
      Effects . . . 239
      Where useful . . . 240
      Recommendations . . . 240
      How implemented . . . 240
   VSAM local shared resources (LSR) . . . 240
      Effects . . . 240
      Where useful . . . 240
      Recommendations . . . 240
      How implemented . . . 240
      How monitored . . . 240
   Hiperspace buffers . . . 240
      Effects . . . 241
      Limitations . . . 241
      Recommendations . . . 241
      How implemented . . . 241
   Subtasking: VSAM (SUBTSKS=1) . . . 241
      Effects . . . 242
      Where useful . . . 243
      Limitations . . . 243
      Recommendations . . . 243
      How implemented . . . 244
|     How monitored . . . 244
   Data tables . . . 244
      Effects . . . 244
      Recommendations . . . 244
      How implemented . . . 245
      How monitored . . . 245
|  Coupling facility data tables . . . 245
|     Locking model . . . 247
|     Contention model . . . 247
|     Effects . . . 248
|     Recommendations . . . 248
|     How implemented . . . 249
|     How monitored . . . 249
|     CFDT statistics . . . 250
|     RMF reports . . . 251
|  VSAM record-level sharing (RLS) . . . 251
|     Effects . . . 252
|     How implemented . . . 253
|     How monitored . . . 254

|  Chapter 19. Java program objects . . . 255
|  Overview . . . 255
|  Performance considerations . . . 255
|     DLL initialization . . . 255
|     LE runtime options . . . 256
|     API costs . . . 257
|     CICS system storage . . . 257
|  Workload balancing of IIOP method call requests . . . 258
|     CICS dynamic program routing . . . 258
|     TCP/IP port sharing . . . 258
|     Dynamic domain name server registration for TCP/IP . . . 258

|  Chapter 20. Java virtual machine (JVM) programs . . . 259
|  Overview . . . 259
|  Performance considerations . . . 259
|     Storage usage . . . 260
|  How monitored . . . 261

Chapter 21. Database management . . . 263
   DBCTL minimum threads (MINTHRD) . . . 263
      Effects . . . 263
      Where useful . . . 263
      Limitations . . . 263
      Implementation . . . 263
      How monitored . . . 264
   DBCTL maximum threads (MAXTHRD) . . . 264
      Effects . . . 264
      Where useful . . . 264
      Limitations . . . 264
      Implementation . . . 264
      How monitored . . . 264
   DBCTL DEDB parameters (CNBA, FPBUF, FPBOF) . . . 264
      Where useful . . . 265
      Recommendations . . . 265
      How implemented . . . 266
      How monitored . . . 266
   CICS DB2 attachment facility . . . 266
      Effects . . . 267
      Where useful . . . 267
      How implemented . . . 267
      How monitored . . . 267
   CICS DB2 attachment facility (TCBLIMIT, and THREADLIMIT) . . . 268
      Effect . . . 268
      Limitations . . . 268
      Recommendations . . . 268
      How monitored . . . 269
   CICS DB2 attachment facility (PRIORITY) . . . 269
      Effects . . . 269
      Where useful . . . 269
      Limitations . . . 269
      Recommendations . . . 269
      How implemented . . . 269
      How monitored . . . 269

Chapter 22. Logging and journaling . . . 271
   Coupling facility or DASD-only logging? . . . 271
      Integrated coupling migration facility . . . 271
   Monitoring the logger environment . . . 271
   Average blocksize . . . 273
   Number of log streams in the CF structure . . . 274
      AVGBUFSIZE and MAXBUFSIZE parameters . . . 274
      Recommendations . . . 275
      Limitations . . . 275
      How implemented . . . 276
      How monitored . . . 276
   LOWOFFLOAD and HIGHOFFLOAD parameters on log stream definition . . . 276
      Recommendations . . . 277
      How implemented . . . 278
      How monitored . . . 278
   Staging data sets . . . 278
      Recommendations . . . 279
   Activity keypoint frequency (AKPFREQ) . . . 279
      Limitations . . . 280
      Recommendations . . . 281
      How implemented . . . 281
      How monitored . . . 281
   DASD-only logging . . . 281

Chapter 23. Virtual and real storage . . . 283
   Tuning CICS virtual storage . . . 283
   Splitting online systems: virtual storage . . . 284
      Where useful . . . 285
      Limitations . . . 285
      Recommendations . . . 286
      How implemented . . . 286
   Maximum task specification (MXT) . . . 287
      Effects . . . 287
      Limitations . . . 287
      Recommendations . . . 287
      How implemented . . . 288
      How monitored . . . 288
   Transaction class (MAXACTIVE) . . . 288
      Effects . . . 288
      Limitations . . . 288
      Recommendations . . . 288
      How implemented . . . 289
      How monitored . . . 289
   Transaction class purge threshold (PURGETHRESH) . . . 289
      Effects . . . 290
      Where useful . . . 290
      Recommendations . . . 290
      How implemented . . . 290
      How monitored . . . 290
   Task prioritization . . . 291
      Effects . . . 291
      Where useful . . . 292
      Limitations . . . 292
      Recommendations . . . 292
      How implemented . . . 293
      How monitored . . . 293
   Simplifying the definition of CICS dynamic storage areas . . . 293
      Extended dynamic storage areas . . . 294
      Dynamic storage areas (below the line) . . . 295
   Using modules in the link pack area (LPA/ELPA) . . . 297
      Effects . . . 297
      Limitations . . . 297
      Recommendations . . . 297
      How implemented . . . 298
   Map alignment . . . 298
      Effects . . . 298
      Limitations . . . 298
      How implemented . . . 299
      How monitored . . . 299
   Resident, nonresident, and transient programs . . . 299
      Effects . . . 299
      Recommendations . . . 300
      How monitored . . . 300
   Putting application programs above the 16MB line . . . 300
      Effects . . . 300
      Where useful . . . 301
      Limitations . . . 301
      How implemented . . . 301
   Transaction isolation and real storage requirements . . . 301
   Limiting the expansion of subpool 229 using VTAM pacing . . . 302
      Recommendations . . . 302
      How implemented . . . 303

Chapter 24. MRO and ISC . . . 305
   CICS intercommunication facilities . . . 305
      Limitations . . . 306
      How implemented . . . 306
      How monitored . . . 307


Intersystems session queue management . . . 307
Relevant statistics . . . . . . . . . . 307
Ways of approaching the problem and recommendations . . 308
Monitoring the settings . . . . . . . . 309
Using transaction classes DFHTCLSX and DFHTCLQ2 . . 309
Effects . . . . . . . . . . . . . . 309
How implemented . . . . . . . . . . 309
Terminal input/output area (SESSIONS IOAREALEN) for MRO sessions . . 310
Effects . . . . . . . . . . . . . . 310
Where useful . . . . . . . . . . . . 310
Limitations . . . . . . . . . . . . . 310
Recommendations . . . . . . . . . . 310
How implemented . . . . . . . . . . 310
Batching requests (MROBTCH) . . . . . . 311
Effects . . . . . . . . . . . . . . 311
Recommendations . . . . . . . . . . 311
Extending the life of mirror transactions (MROLRM) . . 312
Deletion of shipped terminal definitions (DSHIPINT and DSHIPIDL) . . 312
Effects . . . . . . . . . . . . . . 313
Where useful . . . . . . . . . . . . 313
Limitations . . . . . . . . . . . . . 313
Recommendations . . . . . . . . . . 313
How implemented . . . . . . . . . . 314
How monitored . . . . . . . . . . . 314

Chapter 25. Programming considerations . . 315
BMS map suffixing and the device-dependent suffix option . . 315
Effects . . . . . . . . . . . . . . 315
Recommendation . . . . . . . . . . 315
How implemented . . . . . . . . . . 315
How monitored . . . . . . . . . . . 315
COBOL RESIDENT option . . . . . . . . 316
Effects . . . . . . . . . . . . . . 316
Limitations . . . . . . . . . . . . . 317
Recommendations . . . . . . . . . . 317
How implemented . . . . . . . . . . 317
How monitored . . . . . . . . . . . 317
PL/I shared library . . . . . . . . . . 317
How implemented . . . . . . . . . . 318
How monitored . . . . . . . . . . . 318
VS COBOL II . . . . . . . . . . . . 318
How implemented . . . . . . . . . . 318
How monitored . . . . . . . . . . . 318
| Language Environment (LE) . . . . . . . 318
| LE run time options for AMODE (24) programs 319
| Using DLLs in C++ . . . . . . . . . . 319

Chapter 26. CICS facilities . . . . . . 321
CICS temporary storage (TS) . . . . . . . 321
Effects . . . . . . . . . . . . . . 321
Limitations . . . . . . . . . . . . . 322
Recommendations . . . . . . . . . . 322
How implemented . . . . . . . . . . 324
How monitored . . . . . . . . . . . 324
The 75 percent rule . . . . . . . . . . 325
Temporary storage data sharing . . . . . . 325
CICS transient data (TD) . . . . . . . . 326
Recovery options . . . . . . . . . . 326
Intrapartition transient data considerations . . 327
Extrapartition transient data considerations . . 329
Limitations . . . . . . . . . . . . . 330
How implemented . . . . . . . . . . 330
Recommendations . . . . . . . . . . 330
How monitored . . . . . . . . . . . 331
| Global ENQ/DEQ . . . . . . . . . . 331
| How implemented . . . . . . . . . . 331
| Recommendations . . . . . . . . . . 331
CICS monitoring facility . . . . . . . . 331
Limitations . . . . . . . . . . . . . 331
Recommendations . . . . . . . . . . 332
How implemented . . . . . . . . . . 332
How monitored . . . . . . . . . . . 332
CICS trace . . . . . . . . . . . . . 332
Effects . . . . . . . . . . . . . . 333
Limitations . . . . . . . . . . . . . 333
Recommendations . . . . . . . . . . 333
How implemented . . . . . . . . . . 333
How monitored . . . . . . . . . . . 334
CICS recovery . . . . . . . . . . . . 334
Limitations . . . . . . . . . . . . . 334
Recommendation . . . . . . . . . . 334
How implemented . . . . . . . . . . 334
How monitored . . . . . . . . . . . 334
CICS security . . . . . . . . . . . . 334
Effects . . . . . . . . . . . . . . 335
Limitations . . . . . . . . . . . . . 335
Recommendations . . . . . . . . . . 335
How implemented . . . . . . . . . . 335
How monitored . . . . . . . . . . . 335
CICS storage protection facilities . . . . . . 335
Storage protect . . . . . . . . . . . 335
Transaction isolation . . . . . . . . . 336
Command protection . . . . . . . . . 336
Recommendation . . . . . . . . . . 336
Transaction isolation and applications . . . . 336
| CICS business transaction services . . . . . 336
| Effects . . . . . . . . . . . . . . 337
| Recommendations . . . . . . . . . . 337
| How implemented . . . . . . . . . . 337

Chapter 27. Improving CICS startup and normal shutdown time . . 339
Startup procedures to be checked . . . . . 339
Automatic restart management . . . . . . 341
Buffer considerations . . . . . . . . . 342

Part 5. Appendixes . . . . . . . . 343

Appendix A. CICS statistics tables . . 345
Interpreting CICS statistics . . . . . . . 345
Summary report . . . . . . . . . . . 345
Autoinstall global statistics . . . . . . . 347
CICS DB2 . . . . . . . . . . . . . 352

DBCTL session termination . . . . . . . 364
Dispatcher domain . . . . . . . . . . 367
Dump domain . . . . . . . . . . . . 373
System dumps . . . . . . . . . . . 373
Transaction dumps . . . . . . . . . . 376
Enqueue domain . . . . . . . . . . . 378
Front end programming interface (FEPI) . . . 381
File control . . . . . . . . . . . . . 385
ISC/IRC system and mode entries . . . . . 396
System entry . . . . . . . . . . . . 397
Mode entry . . . . . . . . . . . . 405
ISC/IRC attach time entries . . . . . . . 410
Journalname . . . . . . . . . . . . 411
Log stream . . . . . . . . . . . . . 413
LSRpool . . . . . . . . . . . . . . 416
Monitoring domain . . . . . . . . . . 428
Program autoinstall . . . . . . . . . . 430
Loader . . . . . . . . . . . . . . 431
Program . . . . . . . . . . . . . 442
Recovery manager . . . . . . . . . . 445
Statistics domain . . . . . . . . . . . 451
Storage manager . . . . . . . . . . . 452
Table manager . . . . . . . . . . . 464
TCP/IP Services - resource statistics . . . . 465
TCP/IP Services - request statistics . . . . 467
Temporary storage . . . . . . . . . . 468
Terminal control . . . . . . . . . . . 474
Transaction class (TCLASS) . . . . . . . 478
Transaction manager . . . . . . . . . 482
Transient data . . . . . . . . . . . . 491
User domain statistics . . . . . . . . . 499
VTAM statistics . . . . . . . . . . . 500

Appendix B. Shared temporary storage queue server statistics . . 503
Shared TS queue server: coupling facility statistics 503
Shared TS queue server: buffer pool statistics . . 505
Shared TS queue server: storage statistics . . . 506

| Appendix C. Coupling facility data tables server statistics . . 509
| Coupling facility data tables: list structure statistics 509
| Coupling facility data tables: table accesses statistics . . 511
| Coupling facility data tables: request statistics . . 512
| Coupling facility data tables: storage statistics . . 513

| Appendix D. Named counter sequence number server . . 515
| Named counter sequence number server statistics 515
| Named counter server: storage statistics . . . 516

Appendix E. The sample statistics program, DFH0STAT . . 519
Analyzing DFH0STAT Reports . . . . . . 520
| System Status Report . . . . . . . . . 521
Transaction Manager Report . . . . . . . 526
Dispatcher Report . . . . . . . . . . 528
Dispatcher TCBs Report . . . . . . . . 530
Storage Reports . . . . . . . . . . . 533
Loader and Program Storage Report . . . . 543
Storage Subpools Report . . . . . . . . 547
Transaction Classes Report . . . . . . . 549
Transactions Report . . . . . . . . . . 551
Transaction Totals Report . . . . . . . . 552
Programs Report . . . . . . . . . . . 554
Program Totals Report . . . . . . . . . 556
DFHRPL Analysis Report . . . . . . . . 558
Programs by DSA and LPA Report . . . . . 559
Temporary Storage Report . . . . . . . 561
Temporary Storage Queues Report . . . . . 566
Tsqueue Totals Report . . . . . . . . . 567
Temporary Storage Queues by Shared TS Pool . . 567
Transient Data Report . . . . . . . . . 569
Transient Data Queues Report . . . . . . 571
Transient Data Queue Totals Report . . . . 572
Journalnames Report . . . . . . . . . 573
Logstreams Report . . . . . . . . . . 574
Autoinstall and VTAM Report . . . . . . 577
Connections and Modenames Report . . . . 580
| TCP/IP Services Report . . . . . . . . 584
LSR Pools Report . . . . . . . . . . 587
Files Report . . . . . . . . . . . . 592
File Requests Report . . . . . . . . . 593
Data Tables Reports . . . . . . . . . 595
Coupling Facility Data Table Pools Report . . . 597
Exit Programs Report . . . . . . . . . 598
Global User Exits Report . . . . . . . . 599
DB2 Connection Report . . . . . . . . 600
DB2 Entries Report . . . . . . . . . . 606
Enqueue Manager Report . . . . . . . . 609
Recovery Manager Report . . . . . . . . 612
Page Index Report . . . . . . . . . . 614

Appendix F. MVS and CICS virtual storage . . 615
MVS storage . . . . . . . . . . . . 616
The MVS common area . . . . . . . . 616
Private area and extended private area . . . 619
The CICS private area . . . . . . . . . 619
High private area . . . . . . . . . . 621
MVS storage above region . . . . . . . . 623
The CICS region . . . . . . . . . . . 623
CICS virtual storage . . . . . . . . . 623
MVS storage . . . . . . . . . . . . 624
The dynamic storage areas . . . . . . . . 625
CICS subpools . . . . . . . . . . . 626
| Short-on-storage conditions caused by subpool storage fragmentation . . 636
CICS kernel storage . . . . . . . . . . 639

Appendix G. Performance data . . . . 641
Variable costs . . . . . . . . . . . . 641
Logging . . . . . . . . . . . . . 642
Syncpointing . . . . . . . . . . . 643
Additional costs . . . . . . . . . . . 644
Transaction initialization and termination . . . 644
Receive . . . . . . . . . . . . . 644
Attach/terminate . . . . . . . . . . 644



Send . . . . . . . . . . . . . . 644
File control . . . . . . . . . . . . . 644
READ . . . . . . . . . . . . . . 645
READ UPDATE . . . . . . . . . . . 645
Non-recoverable files . . . . . . . . . 645
Recoverable files . . . . . . . . . . 645
REWRITE . . . . . . . . . . . . . 645
Non-recoverable files . . . . . . . . . 645
Recoverable files . . . . . . . . . . 645
WRITE . . . . . . . . . . . . . . 646
Non-Recoverable files . . . . . . . . . 646
Recoverable files . . . . . . . . . . 646
DELETE . . . . . . . . . . . . . 646
Non-Recoverable files . . . . . . . . . 646
Recoverable files . . . . . . . . . . 646
Browsing . . . . . . . . . . . . . 647
UNLOCK . . . . . . . . . . . . . 647
| Coupling facility data tables . . . . . . . 647
Record Level Sharing (RLS) . . . . . . . 647
Temporary Storage . . . . . . . . . . 648
Main Storage . . . . . . . . . . . 648
Auxiliary Storage . . . . . . . . . . 648
Non-Recoverable TS Queue . . . . . . . 648
Recoverable TS Queue . . . . . . . . 648
Shared Temporary Storage . . . . . . . 648
Transient Data . . . . . . . . . . . 649
Intrapartition Queues . . . . . . . . . 649
Non-Recoverable TD Queue . . . . . . . 649
Logically Recoverable TD Queue . . . . . 649
Physically Recoverable TD Queue . . . . . 649
Extrapartition queues . . . . . . . . . 649
Program Control . . . . . . . . . . . 650
Storage control . . . . . . . . . . . 650
Interregion Communication . . . . . . . 650
Transaction routing . . . . . . . . . 650
Function shipping (MROLRM=YES) . . . . 651
Function shipping (MROLRM=NO) . . . . 651

Glossary . . . . . . . . . . . . 653

Index . . . . . . . . . . . . . 675

Sending your comments to IBM . . 685

Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply in the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore this statement may not apply
to you.

This publication could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM United Kingdom
Laboratories, MP151, Hursley Park, Winchester, Hampshire, England, SO21 2JN.
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.

© Copyright IBM Corp. 1983, 1999 xiii


The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Programming License Agreement, or any equivalent agreement
between us.

Programming Interface Information


This book is intended to help you to:
v Establish performance objectives and monitor them
v Identify performance constraints, and make adjustments to the operational CICS
system and its application programs.

This book also documents Product-sensitive Programming Interface and Associated
Guidance Information and Diagnosis, Modification or Tuning Information provided
by CICS.

Product-sensitive programming interfaces allow the customer installation to
perform tasks such as diagnosing, modifying, monitoring, repairing, tailoring, or
tuning of CICS. Use of such interfaces creates dependencies on the detailed design
or implementation of the IBM software product. Product-sensitive programming
interfaces should be used only for these specialized purposes. Because of their
dependencies on detailed design and implementation, it is to be expected that
programs written to such interfaces may need to be changed in order to run with
new product releases or versions, or as a result of service.

Product-sensitive Programming Interface and Associated Guidance Information is
identified where it occurs, either by an introductory statement to a chapter or
section or by the following marking:

Product-sensitive programming interface

End of Product-sensitive programming interface

Diagnosis, Modification or Tuning Information is provided to help you tune your
CICS system.

Attention: Do not use this Diagnosis, Modification or Tuning Information as a
programming interface.

Diagnosis, Modification or Tuning Information is identified where it occurs, either
by an introductory statement to a chapter or section or by the following marking:

Diagnosis, Modification or Tuning Information

End of Diagnosis, Modification or Tuning Information



Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, or other countries, or both:

ACF/VTAM DFSMS/MVS NetView


CICS GDDM OS/2
CICS/ESA Hiperspace OS/390
CICS/MVS IBM RACF
CICSPlex SM IMS/ESA RMF
DATABASE 2 MVS/DFP System/390
DB2 MVS/ESA VTAM

Other company, product, and service names may be trademarks or service marks
of others.

Preface
What this book is about
This book is intended to help you to:
v Establish performance objectives and monitor them
v Identify performance constraints, and make adjustments to the operational CICS
system and its application programs.

This book does not discuss the performance aspects of the CICS Transaction Server
for OS/390 Release 3 Front End Programming Interface. For more information
about the Front End Programming Interface, see the CICS Front End Programming
Interface User’s Guide. This book does not contain Front End Programming Interface
dump statistics.

Who this book is for


This book is for a person who is involved in:
v System design
v Monitoring and tuning CICS® performance.

What you need to know to understand this book


You need to have a good understanding of how CICS works. This assumes
familiarity with many of the books in the CICS Transaction Server for OS/390
Release 3 library, together with adequate practical experience of installing and
maintaining a CICS system.

How to use this book


If you want to establish performance objectives, monitor the performance of a
CICS system, and occasionally make adjustments to the system to keep it within
objectives, you should read through this book in its entirety.

If you have a performance problem and want to correct it, read Parts 3 and 4. You
may need to refer to various sections in Part 2.

Notes on terminology
The following abbreviations are used throughout this book:
v “CICS” refers to the CICS element in the CICS Transaction Server for OS/390®
v “MVS” refers to the operating system, which can be either an element of
OS/390, or MVS/Enterprise System Architecture System Product (MVS/ESA SP).
v “VTAM®” refers to ACF/VTAM.
v “DL/I” refers to the database component of IMS/ESA.



Bibliography
CICS Transaction Server for OS/390
CICS Transaction Server for OS/390: Planning for Installation GC33-1789
CICS Transaction Server for OS/390 Release Guide GC34-5352
CICS Transaction Server for OS/390 Migration Guide GC34-5353
CICS Transaction Server for OS/390 Installation Guide GC33-1681
CICS Transaction Server for OS/390 Program Directory GI10-2506
CICS Transaction Server for OS/390 Licensed Program Specification GC33-1707

CICS books for CICS Transaction Server for OS/390


General
CICS Master Index SC33-1704
CICS User’s Handbook SX33-6104
CICS Transaction Server for OS/390 Glossary (softcopy only) GC33-1705
Administration
CICS System Definition Guide SC33-1682
CICS Customization Guide SC33-1683
CICS Resource Definition Guide SC33-1684
CICS Operations and Utilities Guide SC33-1685
CICS Supplied Transactions SC33-1686
Programming
CICS Application Programming Guide SC33-1687
CICS Application Programming Reference SC33-1688
CICS System Programming Reference SC33-1689
CICS Front End Programming Interface User’s Guide SC33-1692
CICS C++ OO Class Libraries SC34-5455
CICS Distributed Transaction Programming Guide SC33-1691
CICS Business Transaction Services SC34-5268
Diagnosis
CICS Problem Determination Guide GC33-1693
CICS Messages and Codes GC33-1694
CICS Diagnosis Reference LY33-6088
CICS Data Areas LY33-6089
CICS Trace Entries SC34-5446
CICS Supplementary Data Areas LY33-6090
Communication
CICS Intercommunication Guide SC33-1695
CICS Family: Interproduct Communication SC33-0824
CICS Family: Communicating from CICS on System/390 SC33-1697
CICS External Interfaces Guide SC33-1944
CICS Internet Guide SC34-5445
Special topics
CICS Recovery and Restart Guide SC33-1698
CICS Performance Guide SC33-1699
CICS IMS Database Control Guide SC33-1700
CICS RACF Security Guide SC33-1701
CICS Shared Data Tables Guide SC33-1702
CICS Transaction Affinities Utility Guide SC33-1777
CICS DB2 Guide SC33-1939



CICSPlex SM books for CICS Transaction Server for OS/390
General
CICSPlex SM Master Index SC33-1812
CICSPlex SM Concepts and Planning GC33-0786
CICSPlex SM User Interface Guide SC33-0788
CICSPlex SM View Commands Reference Summary SX33-6099
Administration and Management
CICSPlex SM Administration SC34-5401
CICSPlex SM Operations Views Reference SC33-0789
CICSPlex SM Monitor Views Reference SC34-5402
CICSPlex SM Managing Workloads SC33-1807
CICSPlex SM Managing Resource Usage SC33-1808
CICSPlex SM Managing Business Applications SC33-1809
Programming
CICSPlex SM Application Programming Guide SC34-5457
CICSPlex SM Application Programming Reference SC34-5458
Diagnosis
CICSPlex SM Resource Tables Reference SC33-1220
CICSPlex SM Messages and Codes GC33-0790
CICSPlex SM Problem Determination GC33-0791

Other CICS books


CICS Application Programming Primer (VS COBOL II) SC33-0674
CICS Application Migration Aid Guide SC33-0768
CICS Family: API Structure SC33-1007
CICS Family: Client/Server Programming SC33-1435
CICS Family: General Information GC33-0155
CICS 4.1 Sample Applications Guide SC33-1173
CICS/ESA 3.3 XRF Guide SC33-0661

If you have any questions about the CICS Transaction Server for OS/390 library,
see CICS Transaction Server for OS/390: Planning for Installation, which discusses both
hardcopy and softcopy books and the ways that the books can be ordered.

Books from related libraries

ACF/VTAM
ACF/VTAM Installation and Migration Guide, GC31-6547-01
ACF/VTAM Network Implementation Guide, SC31-6548

CICSPlex System Manager for MVS/ESA


IBM CICSPlex System Manager for MVS/ESA Setup and Administration - Volume 1,
SC33-0784-01
IBM CICSPlex System Manager for MVS/ESA Setup and Administration - Volume 2,
SC33-0784-02

DATABASE 2
DB2 for OS/390 Administration Guide, SC26-8957

DATABASE 2 Performance Monitor (DB2PM)


DB2 PM Batch User’s Guide, SH12-6164
DB2 PM Command Reference, SH12-6167



DB2 PM Online Monitor User’s Guide, SH12-6165
DB2 PM Report Reference, SH12-6163
DB2 for OS/390 Capacity Planning, SG24-2244
DB2 PM Usage Guide Update, SG24-2584

DFSMS/MVS
DFSMS/MVS NaviQuest User’s Guide, SC26-7194
DFSMS/MVS DFSMSdfp Storage Administration Reference, SC26-4920

IMS/ESA
IMS/ESA Version 5 Admin Guide: DB, SC26-8012
IMS/ESA Version 5 Admin Guide: System, SC26-8013
IMS/ESA Version 5 Performance Analyzer’s User’s Guide, SC26-9088
IMS/ESA Version 6 Admin Guide: DB, SC26-8725
IMS/ESA Version 6 Admin Guide: System, SC26-8720
IMS Performance Analyzer User’s Guide, SC26-9088

MVS
OS/390 MVS Initialization and Tuning Guide, SC28-1751
OS/390 MVS Initialization and Tuning Reference, SC28-1752
OS/390 MVS JCL Reference, GC28-1757
OS/390 MVS System Management Facilities (SMF), GC28-1783
OS/390 MVS Planning: Global Resource Serialization, GC28-1759
OS/390 MVS Planning: Workload Management, GC28-1761
OS/390 MVS Setting Up a Sysplex, GC28-1779

OS/390 RMF
OS/390 RMF User’s Guide, GC28-1949-01
OS/390 Performance Management Guide, SC28-1951-00
OS/390 RMF Report Analysis, SC28-1950-01
OS/390 RMF Programmers Guide, SC28-1952-01

Tivoli Performance Reporter for OS/390


Tivoli Performance Reporter for OS/390: Administration Guide, SH19-6816
Tivoli Performance Reporter for OS/390: CICS Performance Feature Guide and
Reference, SH19-6820
SLR to Tivoli Performance Reporter for OS/390: Migration Cookbook, SG24-5128

NetView Performance Monitor (NPM)


NPM Reports and Record Formats, SH19-6965-01
NPM User’s Guide, SH19-6962-01

Tuning tools
Generalized Trace Facility Performance Analysis (GTFPARS) Program
Description/Operations Manual, SB21-2143
Network Performance Analysis and Reporting System Program Description/Operations,
SB21-2488
Network Program Products Planning, SC30-3351

Others
CICS Workload Management Using CICSPlex SM and the MVS/ESA Workload
Manager, GG24-4286
System/390 MVS Parallel Sysplex Performance, GG24-4356

System/390 MVS/ESA Version 5 Workload Manager Performance Studies, SG24-4352
IBM 3704 and 3705 Control Program Generation and Utilities Guide, GC30-3008
IMSASAP II Description/Operations, SB21-1793
Screen Definition Facility II Primer for CICS/BMS Programs, SH19-6118
Systems Network Architecture Management Services Reference,SC30-3346
Teleprocessing Network Simulator General Information, GH20-2487

Determining if a publication is current


IBM regularly updates its publications with new and changed information. When
first published, both hardcopy and BookManager softcopy versions of a publication
are usually in step. However, due to the time required to print and distribute
hardcopy books, the BookManager version is more likely to have had last-minute
changes made to it before publication.

Subsequent updates will probably be available in softcopy before they are available
in hardcopy. This means that at any time from the availability of a release, softcopy
versions should be regarded as the most up-to-date.

For CICS Transaction Server books, these softcopy updates appear regularly on the
Transaction Processing and Data Collection Kit CD-ROM, SK2T-0730-xx. Each reissue
of the collection kit is indicated by an updated order number suffix (the -xx part).
For example, collection kit SK2T-0730-06 is more up-to-date than SK2T-0730-05. The
collection kit is also clearly dated on the cover.

Updates to the softcopy are clearly marked by revision codes (usually a “#”
character) to the left of the changes.



Summary of changes
Changes since CICS Transaction Server for OS/390 Release 2 are indicated by
| vertical lines to the left of the text.
|
| Changes for CICS Transaction Server for OS/390 Release 3
| The chapter on Service Level Reporter (SLR) has been removed.

| “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 replaces the
| chapter on Performance Reporter for MVS.

| Performance considerations resulting from enhancements to CICS Web support and
| the introduction of Secure Sockets Layer for Web security are discussed in
| “Chapter 17. CICS Web support” on page 221.

| The performance implications of using coupling facility data tables, including
| information about contention model and locking model, are discussed in
| “Chapter 18. VSAM and file control” on page 225.

| A chapter has been added, “Chapter 19. Java program objects” on page 255, to
| introduce performance considerations when using Java language support.

| “Chapter 20. Java virtual machine (JVM) programs” on page 259 describes
| performance implications for programs run using the MVS Java Virtual Machine
| (JVM).

| “Chapter 8. Managing Workloads” on page 123 has been revised to discuss more
| fully the implications and benefits of using the MVS workload manager, and to
| introduce the CICSPlex SM dynamic routing program used by the WLM.

| Additional or changed statistics for the following have been documented:


| v Dispatcher domain
| v Enqueue domain
| v Files
| v ISC/IRC
| v TCP/IP Services
| Separate appendixes have been created to show the statistics obtained for the
| following:
| v Coupling facility data tables server
| v Named counter sequence number server

| Changes have also been made to several reports in the sample statistics program,
| DFH0STAT.



Changes for CICS Transaction Server for OS/390 Release 2
v The CICS DB2 attachment facility supplied with CICS Transaction Server for
OS/390 Release 2 provides resource definition online (RDO) support for DB2
resources as an alternative to resource definition table (RCT) definitions. CICS
DB2 statistics, collected using standard CICS interfaces, are provided in
Appendix A.
v “Chapter 21. Database management” on page 263 discusses relevant parameters
of the CICS DB2 attachment facility.
v Information about tuning the performance of DASD-only log streams has been
added to “Chapter 22. Logging and journaling” on page 271.
v A full description of User Domain statistics is provided.
v Additions have been made to performance data for groups DFHFILE,
DFHPROG, DFHTASK, and DFHTEMP in “Chapter 6. The CICS monitoring
facility” on page 65.
v “Appendix F. MVS and CICS virtual storage” on page 615 has an additional
section, “Short-on-storage conditions caused by subpool storage fragmentation”
on page 636.

Changes for the CICS Transaction Server Release 1 edition


v As part of the restructure of the temporary storage section, the TSMGSET system
initialization parameter has been deleted.
v The XRF function has not changed for CICS Transaction Server for OS/390
Release 1, but the chapter, Tuning XRF, has been removed from this book. For
information about tuning XRF, see the CICS/ESA 4.1 edition of the Performance Guide.
v For VSAM RLS files, the IMBED cluster attribute has been withdrawn, and the
REPLICATE cluster attribute is no longer recommended. You can achieve the
effects of Imbed and Replication by using caching controllers.
v The enterprise performance data manager introduced in CICS Transaction Server
for OS/390 Release 1 has been renamed Performance Reporter for MVS. See
Chapter 7.
v The role of the system initialization parameters, DSHIPINT and DSHIPIDL is
discussed in “Chapter 24. MRO and ISC” on page 305.
v Information about automatic restart management (ARM), as a sysplex-wide
restart mechanism is given in “Chapter 27. Improving CICS startup and normal
shutdown time” on page 339.
v Journal control statistics have been replaced by Journalname statistics and Log
Stream statistics. They represent the activity on journals within MVS log streams
and SMF data sets. See “Journalname” on page 411, and “Log stream” on
page 413.
v An Appendix has been added to explain the shared temporary storage server
statistics that are produced when determining how much available storage can
be allocated to the server. See Appendix B. Shared temporary storage queue server
statistics.
v A temporary storage domain has been introduced, and a number of TSMAIN
subpools are to be added to the list of CICS subpools in the ECDSA in
“Appendix F. MVS and CICS virtual storage” on page 615.
v A different methodology has been used to produce the latest data presented in
“Appendix G. Performance data” on page 641.



Changes for the CICS/ESA 4.1 edition
Changes for the CICS/ESA Version 4 Release 1 edition include the following:
v Additional or changed statistics in the following areas have been documented:
– Autoinstalled statistics
– DBCTL statistics
– Dispatcher statistics
– DL/I statistics
– FEPI pool statistics
– FEPI connection statistics
– FEPI target statistics
– File control statistics
– ISC/IRC system and mode entry statistics
– Journal control statistics
– Loader statistics
– LSR pool statistics
– Program autoinstalled statistics
– Storage manager statistics
– Suspending mirrors and MROLM
– Terminal control statistics
– Terminal autoinstalled statistics
– Transaction statistics
– Transaction class statistics
– Transaction manager statistics
– Transient data statistics
– VTAM statistics.
v The domain manager statistics have been removed from this release.
v The description of the data produced by the CICS monitoring facility has been
transferred from the Customization Guide and is included in “Interpreting CICS
monitoring” on page 73.
v “Chapter 9. Understanding RMF workload manager data” on page 135 has been
added to explain CICS-related data in an RMF workload activity report.
v “Use of LLA (MVS library lookaside)” on page 197 includes a section on
persistent sessions delay interval (PSINT).
v “Intersystems session queue management” on page 307 has been added to
“Chapter 24. MRO and ISC” on page 305.
v A new appendix has been added giving details of the sample statistics program
(DFH0STAT). See “Appendix E. The sample statistics program, DFH0STAT” on
page 519.
v The storage chapter has been modified, and a new section about kernel storage
has been added in “CICS kernel storage” on page 639.



Part 1. Setting performance objectives
This book describes how CICS performance might be improved. It also provides
reference information to help you achieve such improvement.

Good performance is the achievement of agreed service levels. This means that
system availability and response times meet users’ expectations using resources
available within the budget.

The performance of a CICS system should be considered:


v When you plan to install a new system
v When you want to review an existing system
v When you contemplate major changes to a system.

There are several basic steps in tuning a system, some of which may be iterated
until performance is acceptable. These are:
1. Agree what good performance is.
2. Set up performance objectives (described in Chapter 1. Establishing
performance objectives).
3. Decide on measurement criteria (described in Chapter 3. Performance
monitoring and review).
4. Measure the performance of the production system.
5. Adjust the system as necessary.
6. Continue to monitor the performance of the system and anticipate future
constraints (see “Monitoring for the future” on page 15).

Parts 1 and 2 of this book describe how to monitor and assess performance.

Parts 3 and 4 suggest ways to improve performance.

This part contains the following chapters:


v “Chapter 1. Establishing performance objectives” on page 3
v “Chapter 2. Gathering data for performance objectives” on page 7
v “Chapter 3. Performance monitoring and review” on page 11.

Recommendations given in this book, based on current knowledge of CICS, are general in
nature, and cannot be guaranteed to improve the performance of any particular system.

© Copyright IBM Corp. 1983, 1999 1


Chapter 1. Establishing performance objectives
The process of establishing performance objectives is described in this chapter in
the following sections:
v “Defining some terms”
v “Defining performance objectives and priorities” on page 4
v “Analyzing the current workload” on page 5
v “Translating resource requirements into system objectives” on page 5

Performance objectives often consist of a list of transactions and expected timings for
each. Ideally, through them, good performance can be easily recognized and you
know when to stop further tuning. They must, therefore, be:
v Practically measurable
v Based on a realistic workload
v Within the budget.

Such objectives may be defined in terms such as:


v Desired or acceptable response times, for example, within which 90% of all
responses occur
v Average or peak number of transactions through the system
v System availability, including mean time to failure, and downtime after a failure.

After you have defined the workload and estimated the resources required, you
must reconcile the desired response with what you consider attainable. These
objectives must then be agreed and regularly reviewed with users.

Establishing performance objectives is an iterative process involving the activities
described in the rest of this chapter.

Defining some terms


For performance measurements we need to be very specific about what we are
measuring. Therefore, it is necessary to define a few terms.

The word user here means the terminal operator. A user, so defined, sees CICS
performance as the response time, that is, the time between the last input action (for
example, a keystroke) and the expected response (for example, a message on the
screen). Several such responses might be required to complete a user function, and
the amount of work that a user perceives as a function can vary enormously. So,
the number of functions per period of time is not a good measure of performance,
unless, of course, there exists an agreed set of benchmark functions.

A more specific unit of measure is therefore needed. The words transaction and task
are used to describe units of work within CICS. Even these can lead to ambiguities,
because it would be possible to define transactions and tasks of varying size.
However, within a particular system, a series of transactions can be well defined
and understood so that it becomes possible to talk about relative performance in
terms of transactions per second (or minute, or hour).



In this context there are three modes of CICS operation.
Nonconversational mode is of the nature of one question, one answer; resources
are allocated, used, and released immediately on completion of the task. In this
mode the words transaction and task are more or less synonymous.

Nonconversational
├────────────────── Transaction ──────────────────┤
│ │
├───────────────────── Task ──────────────────────┤
│ ┌────────┐ │
├────── Input ──────┤ Work ├────── Output ──────┤
└────────┘
Conversational mode is potentially wasteful in a system that does not have
abundant resources. There are further questions and answers during which
resources are not released. Resources are, therefore, tied up unnecessarily waiting
for users to respond, and performance may suffer accordingly. Transaction and task
are, once again, more or less synonymous.

Conversational
├────────────────── Transaction ──────────────────┤
│ │
├───────────────────── Task ──────────────────────┤
│ ┌────┐ ┌────┐ │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
└────┘ └────┘
Pseudoconversational mode allows for slow response from the user. Transactions
are broken up into more than one task, yet the user need not know this. The
resources in demand are released at the end of each task, giving a potential for
improved performance.

Pseudoconversational
├────────────────── Transaction ──────────────────┤
│ │
├───────── Task ─────────┼──────── Task ──────────┤
│ ┌────┐ ┌────┐ │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
└────┘ └────┘

The input/output surrounding a task may be known as the dialog.

Defining performance objectives and priorities


Performance objectives and priorities depend on users’ expectations. From the
point of view of CICS, these objectives state the response times to be seen by the
terminal user, and the total throughput per day, hour, or minute.

The first step in defining performance objectives is to specify what is required of
the system. In doing this, you must consider the available hardware and software
resources so that reasonable performance objectives can be agreed. Alternatively
you should ascertain what additional resource is necessary to attain users’
expectations, and what that resource would cost. This cost might be important in
negotiations with users to reach an acceptable compromise between response time
and required resource.

An agreement on acceptable performance criteria between the data processing and
user groups in an organization is often formalized and called a service level
agreement.



Common examples in these agreements are, on a network with remote terminals,
that 90% of all response times sampled are under six seconds in the prime shift, or
that the average response time does not exceed 12 seconds even during peak
periods. (These response times could be substantially lower in a network consisting
only of local terminals.)

You should consider whether to define your criteria in terms of the average, the
90th percentile, or even the worst-case response time. Your choice may depend on
the audit controls of your installation and the nature of the transactions in
question.
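The difference between these choices can be made concrete with a small calculation. The following sketch is not part of the Guide; the sampled times and the six-second target are invented for illustration. It computes the average and the 90th-percentile response time from a set of sampled responses:

```python
# Illustrative only: check sampled response times against a service
# level such as "90% of all responses occur within 6 seconds".
def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample covering pct percent."""
    ordered = sorted(samples)
    idx = max(0, int(len(ordered) * pct / 100.0 + 0.5) - 1)
    return ordered[idx]

def meets_sla(samples, pct=90, target_seconds=6.0):
    """True if the pct-th percentile response time is within the target."""
    return percentile(samples, pct) <= target_seconds

response_times = [1.2, 0.8, 3.5, 2.0, 5.9, 1.1, 7.2, 2.4, 0.9, 4.0]
print(percentile(response_times, 90))              # 5.9
print(sum(response_times) / len(response_times))   # average, 2.9
print(meets_sla(response_times))                   # True
```

Note how the average (2.9 seconds) looks comfortable while a worst-case criterion (7.2 seconds) would fail a six-second target; the 90th percentile sits between the two, which is one reason it is commonly chosen for service level agreements.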

Analyzing the current workload


Break down the work to be done into transactions. Develop a profile for each
transaction that includes:
v The workload, that is, the amount of work done by CICS to complete this
transaction. In an ideal CICS system (with optimum resources), most
transactions perform a single function with an identifiable workload.
v The volume, that is, the number of times this transaction is expected to be
executed during a given period. For an active system, you can get this from the
CICS statistics.

Later, transactions with common profiles can be merged, for convenience, into
transaction categories.
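As a sketch of how such profiles combine (the categories, workloads, and volumes below are invented, not taken from any real system), the load contributed by each category is its workload multiplied by its volume:

```python
# Illustrative only: a transaction profile pairs workload (work done
# per execution) with volume (executions per period); the load for a
# category is the product of the two.
profiles = {
    # category: (processor milliseconds per transaction, volume per hour)
    "order-entry": (25.0, 3600),
    "inquiry": (8.0, 12000),
    "report": (140.0, 60),
}

def load_per_hour(profiles):
    """Processor seconds per hour demanded by each category."""
    return {name: wl * vol / 1000.0 for name, (wl, vol) in profiles.items()}

totals = load_per_hour(profiles)
print(totals["inquiry"])      # 96.0 processor seconds per hour
print(sum(totals.values()))   # total demand across all categories
```

A high-volume transaction with a small workload (“inquiry” here) can demand more of the processor than a much heavier but rarer one, which is why both figures belong in the profile.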

Establish the priority of each transaction category, and note the periods during
which the priorities change.

Determine the resources required to do the work, that is:


v Physical resources managed by the operating system (real storage, DASD I/O,
terminal I/O)
v Logical resources managed by the subsystem, such as control blocks and buffers.

To determine transaction resource demands, you can make sample measurements
on a dedicated machine using the CICS monitoring facility. Use these results to
suggest possible changes that could have the greatest effect if applied before
system-wide contention arises. You can also compare your test results with those in
the production environment.

See “Chapter 2. Gathering data for performance objectives” on page 7 for more
detailed recommendations on this step.

Translating resource requirements into system objectives


You have to translate the information you have gathered into system-oriented
objectives for each transaction category. Such objectives include statements about
the transaction volumes to be supported (including any peak periods) and the
response times to be achieved.

Any assumptions that you make about your installation must be used consistently
in future monitoring. These assumptions include computing-system factors and
business factors.



Computing-system factors include the following:
v System response time: this depends on the design and implementation of the code,
and the power of the processor.
v Network response time: this can amount to seconds, while responses in the
processor are likely to be in fractions of seconds. This means that a system can
never deliver good responses through an overloaded network, however good the
processor.
v DASD response time: this is generally responsible for most of the internal
processing time required for a transaction. You must consider all I/O operations
that affect a transaction.
v Existing workload: this may affect the performance of new transactions, and vice
versa. In planning the capacity of the system, consider the total load on each
major resource, not just the load for the new application.
Response times can vary for a number of reasons, and the targets should,
therefore, specify an acceptable degree of tolerance. Allow for transactions that
are known to make heavy demands on the processor and database I/O.
To reconcile expectations with performance, it may be necessary to change the
expectations or to vary the mix or volume of transactions.
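To illustrate the first three factors with a sketch (the component timings below are invented; real figures come from tools such as RMF and the CICS monitoring facility), end-to-end response time is roughly the sum of its parts, and the network term can dominate however fast the processor is:

```python
# Illustrative only: approximate a transaction's response time as the
# sum of network, processor, and DASD I/O components.
def response_time(network_s, processor_s, dasd_io_count, dasd_io_s):
    """End-to-end response time in seconds for one transaction."""
    return network_s + processor_s + dasd_io_count * dasd_io_s

# Lightly loaded network: the six DASD I/Os dominate internal time.
print(round(response_time(0.3, 0.05, 6, 0.02), 2))   # 0.47
# Same transaction through a congested network.
print(round(response_time(4.0, 0.05, 6, 0.02), 2))   # 4.17
```

The second case shows why a system can never deliver good responses through an overloaded network: the processor and DASD terms are unchanged, yet the response time is nearly nine times longer.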

Business factors are concerned with work fluctuations. Allow for daily peaks (for
example, after receipt of mail), weekly peaks (for example, Monday peak after
weekend mail), and seasonal peaks as appropriate to the business. Also allow for
the peaks of work after planned interruptions, such as preventive maintenance and
public holidays.



Chapter 2. Gathering data for performance objectives
During the design, development, and test of a total system, information is gathered
about the complexity of processing with particular emphasis on I/O activity. This
information is used for establishing performance objectives.

The following phases of installation planning are discussed in this chapter:


v “Requirements definition phase”
v “External design phase”
v “Internal design phase”
v “Coding and testing phase” on page 8
v “Post-development review” on page 8
v “Information supplied by end users” on page 8

Requirements definition phase


In this phase, careful estimates are your only input, as follows:
v Number of transactions for each user function
v Number of I/O operations per user function (DASD and terminals)
v Time required to key in user data (including user “thinking time”)
v Line speeds (number of characters per second) for remote terminals
v Number of terminals and operators required to achieve the required rate of
input
v Maximum rate of transactions per minute/hour/day/week
v Average and maximum workloads (that is, processing per transaction)
v Average and maximum volumes (that is, total number of transactions)
v Likely effects of performance objectives on operations and system programming.

External design phase


During the external design phase, you should:
1. Estimate the network, processor, and DASD loading based on the dialog
between users and tasks (that is, the input to each transaction, and consequent
output).
2. Revise your disk access estimates. After external design, only the logical data
accesses are defined (for example, EXEC CICS READ).
3. Estimate coupling facility resources usage for the MVS system logger and
resource files, or any cross-system coupling facility (XCF) activity.

Remember that, after the system has been brought into service, no amount of
tuning can compensate for poor initial design.

Internal design phase


More detailed information is available to help:



v Refine your estimate of loading against the work required for each transaction
dialog. Include screen control characters for field formatting.
v Refine disk access estimates against database design. After internal design, the
physical data accesses can be defined at least for the application-oriented
accesses.
v Add the accesses for CICS temporary storage (scratchpad) data, program library,
and CICS transient data to the database disk accesses.
v Consider if additional loads could cause a significant constraint.
v Refine estimates on processor use.

Coding and testing phase


During the coding and testing phase, you should:
1. Refine the internal design estimates of disk and processing resources.
2. Refine the network loading estimates.
3. Run the monitoring tools and compare results with estimates. See “Chapter 4.
An overview of performance-measurement tools” on page 23 for information on
the CICS monitoring tools.

Post-development review
Review the performance of the complete system in detail. The main purposes are
to:
v Validate performance against objectives
v Identify resources whose use requires regular monitoring
v Feed the observed figures back into future estimates.
To achieve this, you should:
1. Identify discrepancies from the estimated resource use
2. Identify the categories of transactions that have caused these discrepancies
3. Assign priorities to remedial actions
4. Identify resources that are consistently heavily used
5. Provide utilities for graphic representation of these resources
6. Project the loadings against the planned future system growth to ensure that
adequate capacity is available
7. Update the design document with the observed performance figures
8. Modify the estimating procedures for future systems.

Information supplied by end users


Comments from users are a necessary part of the data for performance analysis
and improvement. Reporting procedures must be established, and their use
encouraged.

Log exceptional incidents. These incidents should include system, line, or
transaction failure, and response times that are outside specified limits. In addition,
you should log incidents that threaten performance (such as deadlocks, deadlock
abends, stalls, indications of going short-on-storage (SOS) and maximum number
of multiregion operation (MRO) sessions used) as well as situations such as



recoveries, including recovery from DL/I deadlock abend and restart, which mean
that additional system resources are being used.

The data logged should include the date and time, location, duration, cause (if
known), and the action taken to resolve the problem.



Chapter 3. Performance monitoring and review
This chapter describes some monitoring techniques, and how to use them, in the
following sections:
v “Deciding on monitoring activities and techniques”
v “Developing monitoring activities and techniques” on page 12
v “Planning the review process” on page 13
v “When to review?” on page 13
v “Monitoring for the future” on page 15
v “Reviewing performance data” on page 16
v “Confirming that the system-oriented objectives are reasonable” on page 16
v “Typical review questions” on page 17
v “Anticipating and monitoring system changes and growth” on page 19

Once set, as described in “Chapter 1. Establishing performance objectives” on
page 3, performance objectives should be monitored using appropriate methods.

Deciding on monitoring activities and techniques


In this book, monitoring is specifically used to describe regular checking of the
performance of a CICS production system, against objectives, by the collection and
interpretation of data. Subsequently, analysis describes the techniques used to
investigate the reasons for performance deterioration. Tuning may be used for any
actions that result from this analysis.

Monitoring should be ongoing because it:


v Establishes transaction profiles (that is, workload and volumes) and statistical
data for predicting system capacities
v Gives early warning through comparative data to avoid performance problems
v Measures and validates any tuning you may have done in response to an earlier
performance problem.

A performance history database (see “Tivoli Performance Reporter for OS/390” on
page 31 for an example) is a valuable source from which to answer questions on
system performance, and to plan further tuning.

Monitoring may be described in terms of strategies, procedures, and tasks.

Strategies may include:


v Continuous or periodic summaries of the workload. You can track all
transactions or selected representatives.
v Snapshots at normal or peak loads. Peak loads should be monitored for two
reasons:
1. Constraints and slow responses are more pronounced at peak volumes.
2. The current peak load is a good indicator of the future average load.



Procedures, such as good documentation practices, should provide a management
link between monitoring strategies and tasks. The following should be noted:
v The growth of transaction rates and changes in the use of applications
v Consequent extrapolation to show possible future trends
v The effects of nonperformance system problems such as application abends,
frequent signon problems, and excessive retries.
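Extrapolation of this kind can be as simple as fitting a straight line to periodic volume counts. The sketch below is illustrative (the weekly volumes are invented; real trend analysis would draw on your filed statistics) and projects the transaction rate a few periods ahead:

```python
# Illustrative only: least-squares straight-line trend through periodic
# transaction volumes, projected forward to anticipate growth.
def linear_trend(values):
    """Fit y = a + b*x over x = 0..n-1; return (a, b)."""
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values)) / \
        sum((x - mean_x) ** 2 for x in range(n))
    return mean_y - b * mean_x, b

def project(values, periods_ahead):
    """Projected volume periods_ahead after the last observation."""
    a, b = linear_trend(values)
    return a + b * (len(values) - 1 + periods_ahead)

weekly_volumes = [10000, 10400, 10900, 11300, 11800]
print(linear_trend(weekly_volumes)[1])    # growth of 450.0 per week
print(project(weekly_volumes, 4))         # expected volume in four weeks
```

If the projection approaches a known capacity ceiling, that is the early warning this chapter describes: the trend can be investigated before it becomes a response-time problem.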

Tasks (not to be confused with the task component of a CICS transaction) include:
v Running one or more of the tools described in “Chapter 4. An overview of
performance-measurement tools” on page 23
v Collating the output
v Examining it for trends.

You should allocate responsibility for these tasks between operations personnel,
programming personnel, and analysts. You must identify the resources that are to
be regarded as critical, and set up a procedure to highlight any trends in the use of
these resources.

Because the tools require resources, they may disturb the performance of a
production system.

Give emphasis to peak periods of activity, for both the new application and the
system as a whole. It may be necessary to run the tools more frequently at first to
confirm that the expected peaks correspond with the actual ones.

It is not normally practical to keep all the detailed output. Arrange for summarized
reports to be filed with the corresponding CICS statistics, and for the output from
the tools to be held for an agreed period, with customary safeguards for its
protection.

Conclusions on performance should not be based on one or two snapshots of
system performance, but rather on data collected at different times over a
prolonged period. Emphasis should be placed on peak loading. Because different
tools use different measurement criteria, early measurements may give apparently
discrepant results.

Your monitoring procedures should be planned ahead of time. These procedures
should explain the tools to be used, the analysis techniques to be used, the
operational extent of those activities, and how often they are to be performed.

Developing monitoring activities and techniques


When you are developing a master plan for monitoring and performance analysis,
you should establish:
v A master schedule of monitoring activity. You should coordinate monitoring
with operations procedures to allow for feedback of online events as well as
instructions for daily or periodic data gathering.
v The tools to be used for monitoring. The tools used for data gathering should
provide for dynamic monitoring, daily collection of statistics, and more detailed
monitoring. (See “When to review?” on page 13.)
v The kinds of analysis to be performed. This must take into account any controls
you have already established for managing the installation, for example, the use
of the Performance Reporter, and so on. You should document what data is to be



extracted from the monitoring output, identifying the source and usage of the
data. Although the formatted reports provided by the monitoring tools help to
organize the volume of data, you may need to design worksheets to assist in
data extraction and reduction.
v A list of the personnel who are to be included in any review of the findings. The
results and conclusions from analyzing monitor data should be made known to
the user liaison group and to system performance specialists.
v A strategy for implementing changes to the CICS system design resulting from
tuning recommendations. This has to be incorporated into installation
management procedures, and would include items such as standards for testing
and the permitted frequency of changes to the production environment.

Planning the review process


Establish a schedule for monitoring procedures. This schedule should be as simple
as possible. The activities done as part of the planning should include the
following:
v Listing the CICS requests made by each type of task. This helps you decide
which requests or which resources (the high-frequency or high-cost ones) need
to be looked at in statistics and CICS monitoring facility reports.
v Drawing up checklists of review questions.
v Estimating resource usage and system loading for new applications. This is to
enable you to set an initial basis from which to start comparisons.

When to review?
You should plan for the following broad levels of monitoring activity:
v Dynamic (online) monitoring.
v Daily monitoring.
v Periodic (weekly and monthly) monitoring.
v Keeping sample reports as historical data. You can also keep historical data in a
database such as the Performance Reporter database.

Dynamic monitoring
Dynamic monitoring is “on-the-spot” monitoring that you can, and should, carry
out at all times. This type of monitoring generally includes the following:
v Observing the system’s operation continuously to discover any serious
short-term deviation from performance objectives.
Use the CEMT transaction (CEMT INQ|SET MONITOR), together with end-user
feedback. You can also use the Resource Measurement Facility (RMF) to collect
information about processor, channel, coupling facility, and I/O device usage.
v Obtaining status information. Together with status information obtained by
using the CEMT transaction, you can get status information on system
processing during online execution. This information could include the queue
levels, active regions, active terminals, and the number and type of
conversational transactions. You could get this information with the aid of an
automated program invoked by the master terminal operator. At prearranged
times in the production cycle (such as before scheduling a message, at shutdown
of part of the network, or at peak loading), the program could capture the
transaction processing status and measurements of system resource levels.



v The System Management product, CICSPlex® SM, can accumulate information
produced by the CICS monitoring facility to assist in dynamic monitoring
activities. The data can then be immediately viewed online, giving instant
feedback on the performance of the transactions. To allow CICSPlex SM to
collect CICS monitoring information, CICS monitoring must be active using
CEMT SET MONITOR ON.

Daily monitoring
The overall objective here is to measure and record key system parameters daily.
The daily monitoring data usually consists of counts of events and gross level
timings. In some cases, the timings are averaged for the entire CICS system.
v Record both the daily average and the peak period (usually one hour) average
of, for example, messages, tasks, processor usage, I/O events, and storage used.
Compare these against your major performance objectives and look for adverse
trends.
v List the CICS-provided statistics at the end of every CICS run. You should date
and time-stamp the data that is provided, and file it for later review. For
example, in an installation that has settled down, you might review daily data at
the end of the week; generally, you can carry out reviews less frequently than
collection, for any one type of monitoring data. If you know there is a problem,
you might increase the frequency; for example, by reviewing daily data as soon as
it becomes available.
You should be familiar with all the facilities in CICS for providing statistics at
times other than at shutdown. The main facilities, using the CEMT transaction,
are invocation from a terminal (with or without reset of the counters) and
automatic time-initiated requests.
v File an informal note of any incidents reported during the run. These may
include a shutdown of CICS that causes a gap in the statistics, a complaint from
your end users of poor response times, a terminal going out of service, or any
other item of significance. This makes it useful when reconciling disparities in
detailed performance figures that may be discovered later.
v Print the system console log for the period when CICS was active, and file a
copy of the console log in case it becomes necessary to review the CICS system
performance in the light of the concurrent batch activity.
v Run one of the performance analysis tools described in “Chapter 4. An overview
of performance-measurement tools” on page 23 for at least part of the day if
there is any variation in load from day to day. File the summaries of the reports
produced by the tools you use.
v Transcribe onto a graph any items identified as being consistently heavily used
in the post-development review phase (described in “Chapter 2. Gathering data
for performance objectives” on page 7).
v Collect CICS statistics, monitoring data, and RMF™ data into the Performance
Reporter database.

Weekly monitoring
Here, the objective is to periodically collect detailed statistics on the operation of
your system for comparison with your system-oriented objectives and workload
profiles.
v Run the CICS monitoring facility with performance class active, and process it. It
may not be necessary to do this every day, but it is important to do it regularly
and to keep the sorted summary output as well as the detailed reports.



Whether you do this on the same day of the week depends on the nature of the
system load. If there is an identifiable heavy day of the week, this is the one that
you should monitor. (Bear in mind, however, that the use of the monitoring
facility causes additional load, particularly with performance class active.)
If the load is apparently the same each day, run the CICS monitoring facility
daily for a period sufficient to confirm this. If there really is little difference from
day to day in the CICS load, check the concurrent batch loads in the same way
from the logs. This helps you identify any obscure problems because of peak
volumes or unusual transaction mixes on specific days of the week. The first few
weeks’ output from the CICS statistics also give guidance for this.
It may not be necessary to review the detailed monitor report output every time,
but you should always keep this output in case the summary data is insufficient
to answer questions raised by the statistics or by user comments. Label the CICS
monitoring facility output tape (or a dump of the DASD data set) and keep it for
an agreed period in case further investigations are required.
v Run RMF, because this shows I/O usage, channel usage, and so on. File the
summary reports and archive the output tapes for some agreed period.
v Review the CICS statistics, and any incident reports.
v Review the graph of critical parameters. If any of the items is approaching a
critical level, check the performance analysis and RMF outputs for more detail
and follow any previously agreed procedures (for example, notify your
management).
v Tabulate or produce a graph of values as a summary for future reference.
v Produce weekly Performance Reporter reports.

Monthly monitoring
v Run RMF.
v Review the RMF and performance analysis listings. If there is any indication of
excessive resource usage, follow any previously agreed procedures (for example,
notify your management), and do further monitoring.
v Date- and time-stamp the RMF output and keep it for use in case performance
problems start to arise. You can also use the output in making estimates, when
detailed knowledge of component usage may be important. These aids provide
detailed data on the usage of resources within the system, including processor
usage, use of DASD, and paging rates.
v Produce monthly Performance Reporter reports showing long-term trends.

Monitoring for the future


When performance is acceptable, you should establish procedures to monitor
system performance measurements and anticipate performance constraints before
they become response-time problems. Exception-reporting procedures are a key to
an effective monitoring approach.

In a complex production system there is usually too much performance data for it
to be comprehensively reviewed every day. Key components of performance
degradation can be identified with experience, and those components are the ones
to monitor most closely. You should identify trends of usage and other factors
(such as batch schedules) to aid in this process.



Consistency of monitoring is also important. Just because performance is good for
six months after a system is tuned is no guarantee that it will be good in the
seventh month.

Reviewing performance data


The aims of the review procedure are to provide continuous monitoring, and to
have a good level of detailed data always available so that there is minimal delay
in problem analysis.

Generally, there should be a progressive review of data. You should review daily
data weekly, and weekly data monthly, unless any incident report or review raises
questions that require an immediate check of the next level of detail. This should
be enough to detect out-of-line situations with a minimum of effort.

The review procedure also ensures that additional data is available for problem
determination, should it be needed. The weekly review should require
approximately one hour, particularly after experience has been gained in the
process and after you are able to highlight the items that require special
consideration. The monthly review will probably take half a day at first. After the
procedure has been in force for a period, it will probably be completed more
quickly. However, when new applications are installed or when the transaction
volumes or numbers of terminals are increased, the process is likely to take longer.

Review the data from the RMF listings only if there is evidence of a problem from
the gross-level data, or if there is an end-user problem that can’t be solved by the
review process. Thus, the only time that needs to be allocated regularly to the
detailed data is the time required to ensure that the measurements were correctly
made and reported.

When reviewing performance data, try to:


v Establish the basic pattern in the workload of the installation
v Identify variations from the pattern.

Do not discard all the data you collect, after a certain period. Discard most, but
leave a representative sample. For example, do not throw away all weekly reports
after three months; it is better to save those dealing with the last week of each
month. At the end of the year, you can discard all except the last week of each
quarter. At the end of the following year, you can discard all the previous year’s
data except for the midsummer week. Similarly, you should keep a representative
selection of daily figures and monthly figures.

The intention is that you can compare any report for a current day, week, or month
with an equivalent sample, however far back you want to go. The samples become
more widely spaced but do not cease.
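This thinning scheme is straightforward to automate if your archived reports are indexed by date. The following sketch (illustrative only; the three-month and one-year boundaries, and the treatment of month-end weeks, are one reasonable reading of the example above) decides whether an aging weekly report should still be kept:

```python
from datetime import date, timedelta

def is_last_week_of_month(d):
    # A report dated within the final seven days of its month counts
    # as "the last week of the month".
    first_of_next = (d.replace(day=28) + timedelta(days=4)).replace(day=1)
    last_day = first_of_next - timedelta(days=1)
    return (last_day - d).days < 7

def keep_weekly_report(report_date, today):
    """Apply the progressive thinning described in the text:
    keep everything for about three months, then only month-end
    weeks for up to a year, then only quarter-end weeks.
    (The final midsummer-only tier is omitted for brevity.)"""
    age_days = (today - report_date).days
    if age_days <= 91:                      # first three months: keep all
        return True
    if age_days <= 365:                     # up to a year: month-end weeks only
        return is_last_week_of_month(report_date)
    # older than a year: keep only quarter-end weeks (Mar, Jun, Sep, Dec)
    return (report_date.month in (3, 6, 9, 12)
            and is_last_week_of_month(report_date))
```

For example, a report from the last week of June is retained even after a year, while a mid-May report of the same age is discarded.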

Confirming that the system-oriented objectives are reasonable


After the system is initialized and monitoring is operational, you need to find out
if the objectives themselves are reasonable (that is, achievable, given the hardware
available), based upon actual measurements of the workload.

When you measure performance against objectives and report the results to users,
you have to identify any systematic differences between the measured data and

16 CICS TS for OS/390: CICS Performance Guide


what the user sees. This means an investigation of the differences between internal
(as seen by CICS) and external (as seen by the end user) measures of response
time.

If the measurements differ greatly from the estimates, you must revise application
response-time objectives or plan a reduced application workload, or upgrade your
system. If the difference is not too large, however, you can embark on tuning the
total system. Parts 3 and 4 of this book tell you how to do this tuning activity.

Typical review questions


Use the following questions as a basis for your own checklist. Most of these
questions are answered by the TIVOLI Performance Reporter for OS/390.

Some of the questions are not strictly to do with performance. For instance, if the
transaction statistics show a high frequency of transaction abends with usage of the
abnormal condition program, this could perhaps indicate signon errors and,
therefore, a lack of terminal operator training. This, in itself, is not a performance
problem, but is an example of the additional information that can be provided by
monitoring.
1. How frequently is each available function used?
a. Has the usage of transaction identifiers altered?
b. Does the mix vary from one time of the day to another?
c. Should statistics be requested more frequently during the day to verify this?

A different approach must be taken:


v In systems where all messages are channeled through the same initial task
and program (for user security routines, initial editing or formatting,
statistical analysis, and so on)
v For conversational transactions, where a long series of message pairs is
reflected by a single transaction
v In transactions where the amount of work done relies heavily on the input
data.

In these cases, you have to identify the function by program or data set usage,
with appropriate reference to the CICS program statistics, file statistics, or other
statistics. In addition, you may be able to put user tags into the monitoring
data (for example, a user character field in the case of the CICS monitoring
facility), which can be used as a basis for analysis by products such as the
TIVOLI Performance Reporter.

The questions asked above should be directed at the appropriate set of statistics.
2. What is the usage of the telecommunication lines?
a. Do the CICS terminal statistics indicate any increase in the number of
messages on the terminals on each of the lines?
b. Does the average message length on the CICS performance class monitor
reports vary for any transaction type? This can easily happen with an
application where the number of lines or fields output depends on the input
data.

c. Is the number of terminal errors acceptable? If you are using a terminal
error program or node error program, does this indicate any line problems?
If not, this may be a pointer to terminal operator difficulties in using the
system.
3. What is the DASD usage?
a. Is the number of requests to file control increasing? Remember that CICS
records the number of logical requests made. The number of physical I/Os
depends on the configuration of indexes, and on the data records per
control interval and the buffer allocations.
b. Is intrapartition transient data usage increasing? Transient data involves a
number of I/Os depending on the queue mix. You should at least review
the number of requests made to see how it compares with previous runs.
c. Is auxiliary temporary storage usage increasing? Temporary storage uses
control interval access, but writes the control interval out only at syncpoint
or when the buffer is full.
4. What is the virtual storage usage?
a. How large are the dynamic storage areas?
b. Is the number of GETMAIN requests consistent with the number and types
of tasks?
c. Is the short-on-storage (SOS) condition being reached often?
d. Have any incidents been reported of tasks being purged after deadlock
timeout interval (DTIMOUT) expiry?
e. How much program loading activity is there?
f. From the monitor report data, is the use of dynamic storage by task type as
expected?
g. Is storage usage similar at each execution of CICS?
h. Are there any incident reports showing that the first invocation of a
function takes a lot longer than subsequent ones? This may arise when
programs are loaded that then have to open data sets, particularly in
IMS/ESA, for example. Can this be reconciled with application design?
5. What is the processor usage?
a. Is the processor usage as measured by the monitor report consistent with
previous observations?
b. Are batch jobs that are planned to run, able to run successfully?
c. Is there any increase in usage of functions running at a higher priority than
CICS? Include in this MVS readers and writers, MVS JES, and VTAM if
running above CICS, and overall I/O, because of the lower-priority regions.
6. What is the coupling facility usage?
a. What is the average storage usage?
b. What is the ISC link utilization?
7. Do any figures indicate design, coding, or operational errors?
a. Are any of the resources mentioned above heavily used? If so, was this
expected at design time? If not, can the heavy use be explained in terms of
heavier use of transactions?
b. Is the heavy usage associated with a particular application? If so, is there
evidence of planned growth or peak periods?
c. Are browse transactions issuing more than the expected number of
requests? In other words, is the count of browse requests issued by a
transaction greater than what you expected users to cause?

d. Is the CICS CSAC transaction (provided by the DFHACP abnormal
condition program) being used frequently? Is this because invalid
transaction identifiers are being entered? For example, errors are signaled if
transaction identifiers are entered in lowercase on IBM® 3270 terminals but
automatic translation of input to uppercase has not been specified.
A high use of the DFHACP program without a corresponding count of
CSAC may indicate that transactions are being entered without proper
operator signon. This may, in turn, indicate that some terminal operators
need more training in using the system.

In addition to the above, you should regularly review certain items in the CICS
statistics, such as:
v Times the MAXTASK limit reached (transaction manager statistics)
v Peak tasks (transaction class statistics)
v Times cushion released (storage manager statistics)
v Storage violations (storage manager statistics)
v Maximum RPLs posted (VTAM statistics)
v Short-on-storage count (storage manager statistics)
v Wait on string total (file control statistics)
v Use of DFHSHUNT log streams.
| v Times aux. storage exhausted (temporary storage statistics)
| v Buffer waits (temporary storage statistics)
| v Times string wait occurred (temporary storage statistics)
| v Times NOSPACE occurred (transient data global statistics)
| v Intrapartition buffer waits (transient data global statistics)
| v Intrapartition string waits (transient data global statistics)

You should also satisfy yourself that large numbers of dumps are not being
produced.

Furthermore, you should review the effects of and reasons for system outages and
their duration. If there is a series of outages, you may be able to detect a common
cause of them.

Anticipating and monitoring system changes and growth


No production system is static. Each system is constantly changing because of new
function being added, increased transaction volumes because of a growth in the
number of terminal users, addition of new applications or software components,
and changes to other aspects of the data processing complex (batch, TSO, and so
on). As much as possible, the effects of these changes need to be anticipated,
planned for, and monitored.

To find out what application changes are planned, interviewing system or
application development managers can be useful in determining the effect of new
function or applications and the timing of those changes. Associated with this is
the effect of new software to be installed, as well as the known hardware plans for
installing new equipment.

When a major change to the system is planned, increase the monitoring frequency
before and after the change. A major change includes the addition of:
v A new application or new transactions

v New terminals
v New software releases.

You should look at individual single-thread transactions as well as the overall
behavior of the production system.

If the system performance has altered as a result of a major change to the system,
data for before-and-after comparison of the appropriate statistics provides the best
way of identifying the reasons for the alteration.
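Such a before-and-after comparison can be scripted along the following lines (a sketch only; the statistic names in the example are hypothetical, and you would feed in whatever statistics you collect for the affected resources):

```python
def compare_snapshots(before, after, pct_limit=10.0):
    """Report statistics whose value moved by more than pct_limit
    percent between a 'before' snapshot and an 'after' snapshot,
    so attention goes only to what actually changed."""
    changed = {}
    for name, old in before.items():
        new = after.get(name)
        if new is None or old == 0:
            continue                      # no basis for a percentage
        delta_pct = 100.0 * (new - old) / old
        if abs(delta_pct) > pct_limit:
            changed[name] = round(delta_pct, 1)
    return changed

# Hypothetical counters from statistics taken before and after a change:
before = {"file_control_requests": 1000, "getmains": 500}
after = {"file_control_requests": 1250, "getmains": 510}
```

Here only the file-control request count (up 25%) would be flagged; the small movement in GETMAIN requests falls below the limit.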

Consider having extra tools installed to make it easier to project and test future
usage of the system. Tools such as the Teleprocessing Network Simulator (TPNS)
program can be used to test new functions under volume conditions before they
actually encounter production volumes. Procedures such as these can provide you
with insight as to the likely performance of the production system when the
changes are implemented, and enable you to plan option changes, equipment
changes, scheduling changes, and other methods for stopping a performance
problem from arising.



Part 2. Tools that measure the performance of CICS
This part gives an overview of the various tools that can be used to find out which
resources are in contention.
v “Chapter 4. An overview of performance-measurement tools” on page 23
v “Chapter 5. Using CICS statistics” on page 39
v “Chapter 6. The CICS monitoring facility” on page 65
v “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113
v “Chapter 8. Managing Workloads” on page 123
v “Chapter 9. Understanding RMF workload manager data” on page 135.

© Copyright IBM Corp. 1983, 1999 21


Chapter 4. An overview of performance-measurement tools
This overview discusses methods of measuring performance in the following
sections:
v “CICS performance data” on page 24
v “Operating system performance data” on page 27
v “Performance data for other products” on page 32
After reasonable performance objectives have been agreed, you have to set up
methods to determine whether the production system is meeting those objectives.

Performance of a production system depends on the utilization of resources such
as CPU, real storage, ISC links, coupling facility, and the network.

You have to monitor all of these factors to determine when constraints in the
system may develop. A variety of programs could be written to monitor all these
resources. Many of these programs are currently supplied as part of IBM products
such as CICS or IMS/ESA, or are supplied as separate products. This chapter
describes some of the products that can give performance information on different
components of a production system.

The list of products in this chapter is far from being an exhaustive summary of
performance monitoring tools, yet the data provided from these sources comprises
a large amount of information. To monitor all this data is an extensive task.
Furthermore, only a small subset of the information provided is important for
identifying constraints and determining necessary tuning actions, and you have to
identify this specific subset for your particular CICS system.

You also have to bear in mind that there are two different types of tools:
1. Tools that directly measure whether you are meeting your objectives
2. Additional tools to look into internal reasons why you might not be meeting
objectives.

None of the tools can directly measure whether you are meeting end-user response
time objectives. The lifetime of a task within CICS is usually related to response
time, and poor response time is usually correlated with long task lifetime within
CICS, but the correlation is not exact because of other contributors to response
time.

Obviously, you want tools that help you to measure your objectives. In some cases,
you may choose a tool that looks at some internal function that contributes
towards your performance objectives, such as task lifetime, rather than directly
measuring the actual objective, because of the difficulty of measuring it.

When you have gained experience of the system, you should have a good idea of
the particular things that are most significant in that particular system and,
therefore, what things might be used as the basis for exception reporting. Then,
one way of simply monitoring the important data might be to set up
exception-reporting procedures that filter out the data that is not essential to the
tuning process. This involves setting standards for performance criteria that
identify constraints, so that the exceptions can be distinguished and reported while
normal performance data is filtered out. These standards vary according to
individual system requirements and service level agreements.
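As an illustration of this kind of exception reporting, the sketch below filters a set of measurements against agreed limits. The statistic names and threshold values shown are hypothetical; substitute whatever criteria your installation and its service level agreements define:

```python
# Hypothetical thresholds -- replace with your installation's standards.
THRESHOLDS = {
    "short_on_storage_count": 0,      # any occurrence is an exception
    "storage_violations": 0,
    "file_string_waits": 100,
    "peak_tasks_pct_of_maxtask": 80,  # percentage of the MAXTASK limit
}

def exceptions(measurements, thresholds=THRESHOLDS):
    """Return only the measurements that exceed their agreed limit,
    so that normal performance data is filtered out of the review."""
    return {name: value
            for name, value in measurements.items()
            if name in thresholds and value > thresholds[name]}
```

A day in which a short-on-storage condition occurred twice but file string waits stayed well under the limit would then surface only the storage exception for review.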

You often have to gather a considerable amount of data before you can fully
understand the behavior of your own system and determine where a tuning effort
can provide the best overall performance improvement. Familiarity with the
analysis tools and the data they provide is basic to any successful tuning effort.

Remember, however, that all monitoring tools cost processing effort to use. Typical
costs are 5% additional processor cycles for the CICS monitoring facility
(performance class), and up to 1% for the exception class. The CICS trace facility
overhead is highly dependent on the workload used. The overhead can be in
excess of 25%.
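Using the percentages quoted above, a rough projection of the processor cost of enabling these tools might look like the following. This is a sketch only: the figures are the typical costs stated in the text, real overheads vary with workload, and the trace overhead in particular can exceed the value used here.

```python
# Typical overheads quoted in the text, as fractions of base processor cost.
MONITORING_PERF_CLASS = 0.05   # CICS monitoring facility, performance class
MONITORING_EXCEPTION = 0.01    # exception class
TRACE_WORKLOAD_DEPENDENT = 0.25  # CICS trace; highly workload dependent

def projected_cpu(base_cpu_seconds, *overheads):
    """Rough projection of processor cost with the chosen tools active,
    treating each overhead as an additive fraction of the base cost."""
    return base_cpu_seconds * (1 + sum(overheads))
```

For a region consuming 100 processor seconds, enabling both monitoring classes projects to roughly 106 seconds; adding trace on top could push that toward 131 or beyond.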

In general, then, we recommend that you use the following tools in the sequence
of priorities shown below:
1. CICS statistics
2. CICS monitoring data
3. CICS internal and auxiliary trace.

In this chapter, the overview of the various tools for gathering or analyzing data is
arranged as follows:
v CICS performance data
v Operating system performance data
v Performance data for other products.

CICS performance data


v “CICS statistics”
v “The CICS monitoring facility”
v “The sample statistics program (DFH0STAT)” on page 25
v “CICS trace facilities” on page 26.

CICS statistics
CICS statistics are the simplest and the most important tool for permanently
monitoring a CICS system. They collect information on the CICS system as a
whole, without regard to tasks.

The CICS statistics domain writes five types of statistics to SMF data sets: interval,
end-of-day, requested, requested reset, and unsolicited statistics.

Each of these sets of data is described and a more general description of CICS
statistics is given in “Chapter 5. Using CICS statistics” on page 39 and “Appendix A.
CICS statistics tables” on page 345.

The CICS monitoring facility


The CICS monitoring facility collects information about CICS tasks, and is
described more completely in “Chapter 6. The CICS monitoring facility” on
page 65.

The CICS Customization Guide contains programming information on the data set
formats and the CICS Operations and Utilities Guide describes the monitoring utility
programs, DFHMNDUP and DFH$MOLS.

The sample statistics program (DFH0STAT)


You can use the statistics sample program, DFH0STAT, to help you determine and
adjust the values needed for CICS storage parameters, for example, using DSALIM
and EDSALIM. The program produces a report showing critical system parameters
from the CICS dispatcher, an analysis of the CICS storage manager and loader
statistics, and an overview of the MVS storage in use. The program demonstrates
the use of the EXEC CICS INQUIRE and EXEC CICS COLLECT STATISTICS
commands to produce an analysis of a CICS system. You can use the sample
program as provided or modify it to suit your needs. It can be used to provide
data about the following:
v System Status, Monitoring and Statistics
v Transaction Manager and Dispatcher
v Storage
v Loader
v Storage Subpools
v Transaction Classes
v Transactions
v Transaction Totals including Subspace usage information
v Programs
v Program Totals
v DFHRPL Analysis
| v Programs by DSA and LPA
v Temporary Storage
v Temporary Storage Queues
| v Temporary Storage Queues by TSPOOL
v Transient Data
v Transient Data Queues
v Transient Data Queues Total
| v User Domain
v Journalnames
v Logstreams
v Connections and Modenames
| v TCP/IP Services
v Autoinstall and VTAM
v LSR Pools
v Files
| v Coupling Facility Data Table Pools
| v DB2® Connections and Entries
v Data Tables
v Exit Programs
v Global User Exits
v Enqueue Manager

v Recovery Manager.

See “Appendix E. The sample statistics program, DFH0STAT” on page 519 for the
details and interpretation of the report.

CICS trace facilities


For the more complex problems that involve system interactions, you can use the
CICS trace to record the progress of CICS transactions through the CICS
management modules. Whereas a dump gives a “snapshot” of conditions at a
particular moment, CICS trace provides a history of events leading up to a specific
situation. CICS includes facilities for selective activation or deactivation of some
groups of traces.

The CICS trace facilities can also be useful for analyzing performance problems
such as excessive waiting on events in the system, or constraints resulting from
inefficient system setup or application program design.

Several types of tracing are provided by CICS, and are described in the CICS
Problem Determination Guide. Trace is controlled by:
v The system initialization parameters (see the CICS System Definition Guide).
v CETR (see the CICS Supplied Transactions manual). CETR also provides for trace
selectivity by, for instance, transaction type or terminal name.
v CEMT SET INTTRACE, CEMT SET AUXTRACE, or CEMT SET GTFTRACE (see
the CICS Supplied Transactions manual).
v EXEC CICS SET TRACEDEST, EXEC CICS SET TRACEFLAG, or EXEC CICS
SET TRACETYPE (see the CICS System Programming Reference for programming
information).

Three destinations are available for trace data:
1. The internal trace table, in main storage above the 16MB line
2. Auxiliary trace data sets, defined as BSAM data sets on tape or disk
3. The MVS generalized trace facility (GTF) data sets, which can be accessed
through the MVS interactive problem control system (IPCS).

Other CICS data


The measurement tools previously described do not provide all the data necessary
for a complete evaluation of current system performance. They do not provide
information on how and under what conditions each resource is being used, nor
do they provide information about the existing system configuration while the data
is being collected. It is therefore extremely important to use as many techniques as
possible to get information about the system. Additional sources of information
include the following:
v Hardware configuration
v VTOC listings
v LISTCAT (VSAM)
v CICS table listings, especially:
– SIT (and overrides in the CICS startup procedure)
| – FCT (file control table) for any BDAM files
| v CICS resource definitions from the CSD file:

| – Use the DFHCSDUP LIST command to print resource definitions, groups, and
| lists. For information about the CSD file utility program, DFHCSDUP, see the
| CICS Resource Definition Guide.
| v Link pack area (LPA) map
v Load module cross-reference of the CICS nucleus
v SYS1.PARMLIB listing
| v MVS Workload Manager (WLM) service definition
| v MVS System Logger configuration - LOGR couple data set listing
v Dump of the CICS address space. See the CICS Operations and Utilities Guide
for information on how to get an address space dump for CICS when the CICS
address space abends.

This data, used with the data produced by the measurement tools, provides the
basic information that you should have for evaluating your system’s performance.

Operating system performance data


v “System management facility (SMF)”
v “Resource measurement facility (RMF)”.
v “Generalized trace facility (GTF)” on page 29
v “Tivoli Performance Reporter for OS/390” on page 31

| System management facility (SMF)


| System management facilities (SMF) collects and records system and job-related
| information that your installation can use in:
| v Billing users
| v Reporting reliability
| v Analyzing the configuration
| v Scheduling jobs
| v Summarizing direct access volume activity
| v Evaluating data set activity
| v Profiling system resource use
| v Maintaining system security.
| For more information on SMF, see the OS/390 MVS System Management Facilities
| (SMF) manual, GC28-1783-05.

Resource measurement facility (RMF)


The Resource Measurement Facility (RMF) collects system-wide data that describes
the processor activity (WAIT time), I/O activity (channel and device usage), main
storage activity (demand and swap paging statistics), and system resources
manager (SRM) activity (workload).

RMF is a centralized measurement tool that monitors system activity to collect
performance and capacity planning data. The analysis of RMF reports provides the
basis for tuning the system to user requirements. They can also be used to track
resource usage.

RMF measures the following activities:

v Processor usage
v Address space usage
v Channel activity:
– Request rate and service time per physical channel
– Logical-to-physical channel relationships
– Logical channel queue depths and reasons for queuing.
v Device activity and contention for the following devices:
– Unit record
– Graphics
– Direct access storage
– Communication equipment
– Magnetic tapes
– Character readers.
v Detailed system paging
v Detailed system workload
v Page and swap data set
v Enqueue
v CF activity
v XCF activity.

RMF allows the OS/390 user to:
v Evaluate system responsiveness:
– Identify bottlenecks. The detailed paging report associated with the page and
swap data set activity can give a good picture of the behavior of a virtual
storage environment.
v Check the effects of tuning:
– Results can be observed dynamically on a screen or by postprocessing
facilities.
v Perform capacity planning evaluation:
– The workload activity reports include the interval service broken down by
key elements such as processor, input/output, and main storage service.
– Analysis of the resource monitor output (for example, system contention
indicators, swap-out broken down by category, average ready users per
domain) helps in understanding user environments and forecasting trends.
– The post-processing capabilities make the analysis of peak load periods and
trend analysis easier.
v Manage the larger workloads and increased resources that MVS can support
v Identify and measure the usage of online channel paths
v Optimize the usefulness of expanded storage capability.

RMF measures and reports system activity and, in most cases, uses a sampling
technique to collect data. Reporting can be done with one of three monitors:
1. Monitor I measures and reports the use of system resources (that is, the
processor, I/O devices, storage, and data sets on which a job can enqueue
during its execution). It runs in the background and measures data over a
period of time. Reports can be printed immediately after the end of the
measurement interval, or the data can be stored in SMF records and printed
later with the RMF postprocessor. The RMF postprocessor can be used to
generate reports for “exceptions”: conditions where user-specified values are
exceeded.
2. Monitor II, like Monitor I, measures and reports the use of system resources. It
runs in the background under TSO or on a console. It provides “snapshot”
reports about resource usage, and also allows its data to be stored in SMF
records. The RMF postprocessor can be used to generate exception reports.
3. Monitor III primarily measures the contention for system resources and the
delay of jobs that such contention causes. It collects and reports the data in real
time at a display station, with optional printed copy backup of individual
displays. Monitor III can also provide exception reports, but its data cannot be
stored in SMF records. It must be used if XCF or CF reports are needed.

RMF should be active in the system 24 hours a day, and you should run it at a
dispatching priority above other address spaces in the system so that:
v The reports are written at the interval requested
v Other work is not delayed because of locks held by RMF.

A report is generated at the time interval specified by the installation. The largest
system overhead of RMF occurs during the report generation: the shorter the
interval between reports, the larger the burden on the system. An interval of 60
minutes is recommended for normal operation. When you are addressing a specific
problem, reduce the time interval to 10 or 15 minutes. The RMF records can be
directed to the SMF data sets with the NOREPORT and RECORD options; the
report overhead is not incurred and the SMF records can be formatted later.

Note: There may be some discrepancy between the CICS initialization and
termination times when comparing RMF reports against output from the
CICS monitoring facility.

For further details of RMF, see the OS/390 Resource Measurement Facility (RMF)
Users Guide, SC28-1949.

Guidance on how to use RMF with the CICS monitoring facility is given in “Using
CICS monitoring SYSEVENT information with RMF” on page 67. In terms of CPU
costs this is an inexpensive way to collect performance information. Shorter reports
throughout the day are needed for RMF because a report of a full day’s length
includes startup and shutdown and does not identify the peak period.

Generalized trace facility (GTF)


As described above, CICS trace entries can be recorded via GTF, and reports
produced via IPCS. More generally, GTF is an integral part of the MVS system, and
traces the following system events: DASD seek addresses on start I/O instructions,
system resources manager (SRM) activity, page faults, I/O activity, and supervisor
services. Execution options specify the system events to be traced. The amount of
processing time to be used by GTF can vary considerably, depending on the
number of events to be traced. You should request the time-stamping of GTF
records with the TIME=YES operand on the EXEC statement for all GTF tracing.

GTF should run at a dispatching priority (DPRTY) of 255 so that records are not
lost. If GTF records are lost even though DPRTY is specified at 255, increase the
BUF operand on the EXEC statement to more than 10 buffers.

GTF is generally used to monitor short periods of system activity and you should
run it accordingly.

You can use these options to get the data normally needed for CICS performance
studies:

TRACE=SYS,RNIO,USR (VTAM)
TRACE=SYS (Non-VTAM)

If you need data on the units of work dispatched by the system and on the length
of time it takes to execute events such as SVCs, LOADs, and so on, the options are:

TRACE=SYS,SRM,DSP,TRC,PCI,USR,RNIO

The TRC option produces the GTF trace records that indicate GTF interrupts of
other tasks that it is tracing. This set of options uses a higher percentage of
processor resources, and you should use it only when you need a detailed analysis
or timing of events.

No data-reduction programs are provided with GTF. To extract and summarize the
data into a meaningful and manageable form, you can either write a
data-reduction program or use one of the program offerings that are available.

For further details, see the OS/390 MVS Diagnosis: Tools and Service Aids.

GTF reports
You can produce reports from GTF data using the interactive problem control
system (IPCS). The reports generated by IPCS are useful in evaluating both system
and individual job performance. It produces job and system summary reports as
well as an abbreviated detail trace report. The summary reports include
information on MVS dispatches, SVC usage, contents supervision, I/O counts and
timing, seek analysis, page faults, and other events traced by GTF. The detail trace
reports can be used to follow a transaction chronologically through the system.

Other reports are available that:
v Map the seek addresses for a specific volume
v Map the arm movement for a specific volume
v Map the references to data sets and members within partitioned data sets
v Map the page faults and module reference in the link pack area (LPA).

These reports are described later in this section.

Before GTF is run, you should plan the events to be traced. If specific events such
as start I/Os (SIOs) are not traced, and the SIO-I/O timings are required, the trace
must be re-created to get the data needed for the reports.

If there are any alternative paths to a control unit in the system being monitored,
you should include the PATHIO input statement in the report execution statement.
Without the PATHIO operand, there are multiple I/O lines on the report for the
device with an alternative path: one line for the primary device address and one
for the secondary device address. If this operand is not included, the I/Os for the
primary and alternate device addresses have to be combined manually to get the
totals for that device.

30 CICS TS for OS/390: CICS Performance Guide


Seek histogram report
The seek histogram report (SKHST) can help you find out if there is any arm
contention on that volume, that is, if there are any long seeks on the volume being
mapped. It produces two reports: the first shows the number of seeks to a
particular address, and the second shows the distance the arm moves between
seeks. These reports can be used to determine if you should request a volume map
report to investigate further the need to reorganize a specific volume.
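
The computations behind these two reports are simple to picture: a frequency count of seeks per address, and a distribution of the distances the arm moves between successive seeks. A sketch of the idea (illustrative only, not the SKHST implementation):

```python
from collections import Counter

def seek_histograms(seeks):
    """Given the sequence of seek target addresses (for example,
    cylinder numbers) observed for one volume, return two Counters:
    seeks per address, and arm-movement distance between seeks."""
    by_address = Counter(seeks)
    distances = Counter(
        abs(curr - prev) for prev, curr in zip(seeks, seeks[1:])
    )
    return by_address, distances
```

A large count of long distances suggests the arm contention that would justify requesting a volume map report.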

Volume map report


The volume map report (VOLMAP) displays information about data sets on the
volume being mapped and about seek activity to each data set on that volume. It
also maps the members of a partitioned data set and the count of seeks issued to
each member. This report can be very useful in reorganizing the data sets on a
volume and in reorganizing the members within a partitioned data set to reduce
the arm movement on that specific volume.

Reference map report


The reference map report (REFMAP) shows the page fault activity in the link pack
area (LPA) of MVS. This reference is by module name and separates the data faults
from the instruction faults. The report also shows the count of references to the
specific module. This reference is selected from the address in the stored PSW of
the I/O and EXT interrupt trace events from GTF. This report can be useful if you
want to make changes to the current MVS pack list in order to reduce real storage
or to reduce the number of page faults that are being encountered in the pageable
link pack area of MVS.

Tivoli Performance Reporter for OS/390


Tivoli Performance Reporter for OS/390 is an IBM product that collects and
analyzes data from CICS and other IBM systems and products. With the Tivoli
Performance Reporter you can build reports which help you with the following:
v System overviews
v Service levels
v Availability
v Performance and tuning
v Capacity planning
v Change and problem management
v Accounting.

A large number of ready-made reports are available, and in addition you can
generate your own reports to meet specific needs.

In the reports the Tivoli Performance Reporter uses data from CICS monitoring
and statistics. Tivoli Performance Reporter also collects data from the MVS system
and from products such as RMF, TSO, IMS™ and NetView. This means that data
from CICS and other systems can be shown together, or can be presented in
separate reports.

Reports can be presented as plots, bar charts, pie charts, tower charts, histograms,
surface charts, and other graphic formats. The Tivoli Performance Reporter for
OS/390 simply passes the data and formatting details to Graphic Data Display
Manager (GDDM), which does the rest. The Tivoli Performance Reporter can also
produce line graphs and histograms using character graphics where GDDM is not
available, or the output device does not support graphics. For some reports, where
you need the exact figures, numeric reports such as tables and matrices are more
suitable.

Chapter 4. An overview of performance-measurement tools 31

See “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 for more
information about the Tivoli Performance Reporter for OS/390 as a CICS
performance measurement tool.

Performance data for other products


v “ACF/VTAM”
v “NetView for MVS” on page 33
v “NetView performance monitor (NPM)” on page 34
v “LISTCAT (VSAM)” on page 34
v “DB monitor (IMS)” on page 35
v “DATABASE 2 Performance Monitor (DB2PM)” on page 36
v “Teleprocessing network simulator (TPNS)” on page 37.

This section gives an overview of the tools that can be used to monitor information
on various access methods and other programs used with CICS and the operating
system.

ACF/VTAM
ACF/VTAM® (program number 5735-RC2) provides information about buffer
usage either to GTF in SMF trace data or to the system console through DISPLAY
and BFRUSE commands. Other tuning statistics can also be recorded on the system
console through the MODIFY procname, TNSTAT command. (This command is
described in the ACF/VTAM Diagnostic Techniques manual.)

Virtual telecommunication access method (VTAM) trace


The VTAM trace facility is provided as part of VTAM, and tracks messages
through different points to and from CICS. The time-stamps that are included can
be particularly useful in determining where a transaction spends large amounts of
time.

Network performance, analysis, and reporting system (NETPARS)
NETPARS is a program offering (program number 5798-CZX) that analyzes
network log data from the NetView® Performance Monitor (NPM). Further
information on NETPARS is given in the Network Performance, Analysis, and
Reporting System (NETPARS) Description/Operations.

VTAM performance, analysis, and reporting system II
(VTAMPARS II)
The VTAMPARS program offering (program number 5798-DFE) provides
information on network traffic through the VTAM component of a network.
Information on terminal connect time, message characteristics and rates, and so
forth, can be collected and analyzed. Further information on VTAMPARS is given
in the VTAM Performance, Analysis, and Reporting System (VTAMPARS) Program
Description/Operations.

Generalized performance analysis reporting (GPAR)


Generalized Performance Analysis Reporting (GPAR) (program number 5798-CPR)
is a prerequisite for VTAMPARS. GPAR is designed as a base for reporting
programs (IBM or user-written). It helps summarize sequential activity traces like
GTF traces. It also contains facilities to print user-tailored graphs from any
performance data log or non-VSAM sequential data set.

VTAM storage management (SMS) trace


The VTAM storage management (SMS) trace facility collects information on
VTAM’s usage of its buffers, including which buffers are used in the various buffer
pools, and the number of buffer expansions and depletions.

VTAM tuning statistics


Information provided in the VTAM tuning statistics includes data on the
performance between VTAM and the network control program (NCP), the number
of reads and writes and what caused that activity, and message counts.

NetView for MVS


NetView is a network management program offering (program number 5665-362)
which provides a cohesive set of SNA host network management services in a
single product. NetView includes the functions of the network communication
control facility (NCCF), network logical data manager (NLDM), and network
problem determination application (NPDA), as well as functions of the VTAM
node control application (VNCA) and network management productivity facility
(NMPF). Support is provided for problem determination of the IBM 3720
Communication Controller and for online configuration control and testing of IBM
586X modems.

NetView’s set of network management functions consists of the following:


v Command facility
v Session monitor
v Hardware monitor
v Status monitor
v Online HELP and Help Desk facility
v Browse facility.

NetView’s capabilities include:

v Terminal access facility support of large screen and color applications (for
example, the NetView performance monitor)
v CLISTs driven by application messages
v Disk log enhancements
v Support for 586X Models 2 and 3 and 5812 modems
v Token-ring network support
v Virtual route blockage indication
v Session setup failure notification
v Extended recovery facility
v Automatic operations and recovery
v Real-time update of the domain status panel
v Easy-to-use installation procedure.

The benefits provided by NetView include:
v Improved cohesion and usability in support of network management functions
v Enhanced installation, operation, and utilization of network management
functions in MVS environments.

For further information on NetView, see the Systems Network Architecture
Management Services Reference, and Network Program Products Planning manual.

NetView performance monitor (NPM)


The NetView Performance Monitor (NPM) program product (program number
5665-333) is designed to aid network support personnel in managing VTAM-based
communications networks. It collects and reports on data in the host and NCP.

NPM data can be used to:


v Identify network traffic bottlenecks
v Display screens showing volume and response times for various resources
v Generate color graphs of real-time and historical data
v Alert users to response time threshold exceptions.

NPM performance data can also help to:


v Determine the performance characteristics of a network and its components
v Identify network performance problems
v Tune communications networks for better performance as well as verify the
effects of problem resolutions
v Gauge unused capacity when planning for current network changes
v Produce timely and meaningful reports on network status for multiple levels of
management.

Further information on NPM is given in NetView Performance Monitor At A Glance.

LISTCAT (VSAM)
VSAM LISTCAT provides information that interprets the actual situation of VSAM
data sets. This information includes counts of the following:
v Whether and how often control interval (CI) or control area (CA) splits occur
(splits should occur very rarely, especially in CA).

v Physical accesses to the data set.
v Extents for a data set (secondary allocation). You should avoid this secondary
allocation, if possible, by making the primary allocation sufficiently large.
v Index levels.

Virtual storage access method (VSAM) or ICF catalog


Information kept in the VSAM or Integrated Catalog Facility (ICF) catalog includes
items on record sizes, data set activity, and data set organization.

DB monitor (IMS)
The IMS DB monitor report print program (DFSUTR30) provides information on
batch activity (a single-thread environment) to IMS databases, and is activated
through the DLMON system initialization parameter. As in the case of CICS
auxiliary trace, this is for more in-depth investigation of performance problems by
single-thread studies of individual transactions.

The DB monitor cannot be started and stopped from a terminal. After the DB
monitor is started in a CICS environment, the only way to stop it is to shut down
CICS. The DB monitor cannot be started or stopped dynamically.

When the DB monitor runs out of space on the IMSMON data set, it stops
recording. The IMSMON data set is a sequential data set, for which you can
allocate space with IEFBR14. The DCB attributes are:

DCB=(RECFM=VB,LRECL=2044,BLKSIZE=2048)
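
By the usual rule for RECFM=VB, LRECL includes the 4-byte record descriptor word and BLKSIZE must allow a further 4 bytes for the block descriptor word, which the values above satisfy exactly. A quick check of that arithmetic (a sketch only, not a substitute for the allocation itself):

```python
def vb_dcb_ok(lrecl, blksize):
    """For RECFM=VB data sets, BLKSIZE must cover the largest record
    (LRECL, which already includes its 4-byte record descriptor word)
    plus the 4-byte block descriptor word."""
    return blksize >= lrecl + 4

# The IMSMON attributes quoted above: 2048 >= 2044 + 4 holds exactly.
```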

If you are running the DB monitor in a multithread (more than one) environment,
the only statistics that are valid are the VSAM buffer pool statistics.

Program isolation (PI) trace


The program isolation (PI) trace can point out database contention problems
arising from the nature of task’s access to a particular database. Because only one
task can have access to a record at one time, and any other task waits till the
record is freed, high contention can mean high response time. This trace is part of
IMS, and can be activated by the CEMT SET PITRACE ON|OFF command.
Information on the format of the PI trace report is given in the IMS/ESA Version 3
System Administration Guide.

IMS System Utilities/Database Tools (DBT)


The IMS System Utilities/Database Tools (DBT) program product (program
number 5668-856) is a powerful package of database programs and products
designed to enhance data integrity, data availability, and performance of IMS
databases. It provides the important tools that are needed to support both
full-function (index, HDAM, HIDAM, and HISAM) and fastpath (DEDB)
databases.

DBT can help you maintain data integrity by assisting the detection and repair of
errors before a problem disrupts operations. It speeds database reorganization by
providing a clear picture of how data is stored in the database, by allowing the
user to simulate various database designs before creating a new database, and by
providing various sort, unload, and reload facilities. DBT also improves
programming productivity by providing monitoring capabilities and by reducing
the need to write reformatting programs. It increases the user’s understanding of
the database for analysis, tuning, and reorganization. It also helps enhance the
overall database performance.

For further information, see the IMS System Utilities/Database Tools (DBT) General
Information manual.

IMS monitor summary and system analysis II (IMSASAP II)


IMSASAP II (program number 5798-CHJ) is a performance analysis and tuning aid
for IMS/ESA® database and data communication systems. It is a report program
that executes under IMS/ESA for Generalized Performance Analysis Reporting
(GPAR). IMSASAP II processes IMS/ESA DB and DC monitor data to provide
summary, system analysis, and program analysis level reports that assist in the
analysis of an IMS/ESA system environment. The monitor concept has proven to
be a valuable aid in the performance analysis and tuning of IMS systems.
IMSASAP II extends this capability by providing comprehensive reports (from
management summaries to detail program traces) to meet a broad range of
IMS/ESA system analysis objectives.

IMSASAP:
v Produces a comprehensive set of reports, organized by level of detail and area of
analysis, to satisfy a wide range of IMS/ESA system analysis requirements
v Provides report selection and reporting options to satisfy individual
requirements and to assist in efficient analysis
v Produces alphanumerically collated report items in terms of ratios, rates, and
percentages to facilitate a comparison of results without additional computations
v Reports on schedules in progress including wait-for-input and batch message
processing programs
v Provides reports on IMS/ESA batch programs.

Further information on IMSASAP is given in the IMSASAP II Program
Description/Operations manual.

DATABASE 2 Performance Monitor (DB2PM)


DATABASE 2™ Performance Monitor (program number 5665-354) analyses DB2
performance data and generates a comprehensive set of reports. These include the
following:
v A set of graphs showing DB2 statistics, accounting, and frequency distribution
performance data
v A summary of DB2 system activity, including system tasks (statistics data)
v A summary of DB2 application work, reported either by user or by application
(accounting data)
v A set of transit time reports detailing DB2 workload performance
v System- and application-related DB2 I/O activity
v Locking activity, reported both by DB2 application type and by database
v SQL activity
v Selective tracing and formatting of DB2 records.

For further information, see the DATABASE 2 Performance Monitor (DB2PM) General
Information manual.

Teleprocessing network simulator (TPNS)


The Teleprocessing Network Simulator (TPNS) (program number 5662-262) is a
program that simulates terminal activity such as that coming through the NCP.
TPNS can be used to operate an online system at different transaction rates, and
can monitor system performance at those rates. TPNS also keeps information on
response times, which can be analyzed after a simulation.

Further information on TPNS is given in the Teleprocessing Network Simulator
(TPNS) General Information manual.

Chapter 5. Using CICS statistics
This chapter discusses CICS statistics in the following sections. Methods for
collecting statistics are described, and statistics that can be used for tuning your
CICS system are included.
v “Introduction to CICS statistics”
v “Processing CICS statistics” on page 45
v “Interpreting CICS statistics” on page 45

Introduction to CICS statistics


CICS management modules control how events are managed by CICS. As events
occur, CICS produces information that is available to you as system and resource
statistics.

The resources controlled by CICS include files, databases, journals, transactions,
programs, and tasks. Resources that CICS manages, and values that CICS uses in
its record-keeping role, are defined in one of the following ways:
v Online, by the CICS CEDA transaction.
v Offline, by the CICS system definition (CSD) utility program, DFHCSDUP. See
the CICS Customization Guide for programming information about DFHCSDUP.
v Offline, by CICS control table macros.

Statistics are collected during CICS online processing for later offline analysis. The
statistics domain writes statistics records to a System Management Facilities (SMF)
data set. The records are of SMF type 110, sub-type 002. Monitoring records and
some journaling records are also written to the SMF data set as type 110 records.
You might find it useful to process statistics and monitoring records together. For
programming information about SMF, and about other SMF data set
considerations, see the CICS Customization Guide.
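
A program that post-processes the SMF data set can pick out the CICS statistics records by type and subtype. As a sketch, assuming the standard SMF record header (with the 4-byte record descriptor word still present, the record type is the byte at offset 5, and for type 110 records the subtype is the halfword at offset 22):

```python
def is_cics_statistics(record: bytes) -> bool:
    """True for SMF type 110, subtype 002 (CICS statistics) records.

    Offsets assume the record still carries its 4-byte record
    descriptor word: byte 5 is the record type, and bytes 22-23
    hold the subtype for records with extended headers.
    """
    if len(record) < 24:
        return False
    record_type = record[5]
    subtype = int.from_bytes(record[22:24], "big")
    return record_type == 110 and subtype == 2
```

Because monitoring and some journaling records are also type 110, it is the subtype test that separates the statistics records from the other CICS records.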

Types of statistics data


CICS produces five types of statistics:
Interval statistics
Are gathered by CICS during a specified interval. CICS writes the interval
statistics to the SMF data set automatically at the expiry of the interval if:
v Statistics recording status was set ON by the STATRCD system
initialization parameter (and has not subsequently been set OFF by a
CEMT or EXEC CICS SET STATISTICS RECORDING command). The
default is STATRCD=OFF.
v ON is specified in CEMT SET STATISTICS.
v The RECORDING option of the EXEC CICS SET STATISTICS command
is set to ON.
End-of-day statistics
Are a special case of interval statistics where all statistics counters are
collected and reset. There are three ways to get end-of-day statistics:
v The end-of-day expiry time
v When CICS quiesces (normal shutdown)
v When CICS terminates (immediate shutdown).

© Copyright IBM Corp. 1983, 1999 39
The end of day value defines a logical point in the 24 hour operation of
CICS. You can change the end of day value using CEMT SET STATISTICS
or the EXEC CICS SET STATISTICS command. End-of-day statistics are
always written to the SMF data set, regardless of the settings of any of the
following:
v The system initialization parameter, STATRCD, or
v CEMT SET STATISTICS or
v The RECORDING option of EXEC CICS SET STATISTICS.
The statistics that are written to the SMF data set are those collected since
the last event which involved a reset. The following are examples of resets:
v At CICS startup
v Issue of RESETNOW RECORDNOW in CEMT or EXEC CICS
STATISTICS commands.
v Interval statistics

The default end-of-day value is 000000 (midnight).

Requested statistics
are statistics that the user has asked for by using one of the following
commands:
v CEMT PERFORM STATISTICS RECORD
v EXEC CICS PERFORM STATISTICS RECORD
v EXEC CICS SET STATISTICS ON|OFF RECORDNOW.
These commands cause the statistics to be written to the SMF data set
immediately, instead of waiting for the current interval to expire. The
PERFORM STATISTICS command can be issued with any combination of
resource types or you can ask for all resource types with the ALL option.
For more details about CEMT commands see the CICS Supplied
Transactions; for programming information about the equivalent EXEC
CICS commands, see the CICS System Programming Reference.
Requested reset statistics
differ from requested statistics in that all statistics are collected and
statistics counters are reset. You can reset the statistics counters using the
following commands:
v CEMT PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS SET STATISTICS ON|OFF RESETNOW RECORDNOW

The PERFORM STATISTICS command must be issued with the ALL option
if RESETNOW is present.

You can also invoke requested reset statistics when changing the recording
status from ON to OFF, or vice versa, using CEMT SET STATISTICS
ON|OFF RECORDNOW RESETNOW, or EXEC CICS SET STATISTICS
ON|OFF RECORDNOW RESETNOW.

Note: It is valid to specify RECORDNOW RESETNOW options only when
there is a genuine change of status from STATISTICS ON to OFF, or
vice versa. In other words, coding EXEC CICS SET STATISTICS ON
RECORDNOW RESETNOW when statistics is already ON will cause
an error response.

RESETNOW RECORDNOW on the SET STATISTICS command can only be
invoked if the RECORDING option is changed. See also Figure 1.

Note: Issuing the RESETNOW command by itself in the SET STATISTICS
command causes the loss of the statistics data that has been
collected since the last interval. Interval collections take place only if
you set the RECORDING status ON. To set the statistics recording
status ON or OFF, use either the RECORDING option on this
command or the SIT parameter STATRCD. Statistics are always
written, and counts reset, at the end of day. See Figure 1 for further
information.

RECORDING ON:
v Expiry of INTERVAL: writes to the SMF data set; resets counters.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set; resets counters.

RECORDING OFF:
v Expiry of INTERVAL: no action.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set; resets counters.

Figure 1. Summary of statistics reset functions
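
The behaviour summarized in Figure 1 reduces to a small decision table: end-of-day expiry always writes and resets, interval expiry writes and resets only while recording is on, and the PERFORM STATISTICS commands always write but reset only when ALL with RESETNOW is specified. As a sketch:

```python
def statistics_action(event, recording_on, all_resetnow=False):
    """Return (writes_to_smf, resets_counters) for a statistics event,
    following Figure 1. `event` is "INTERVAL", "ENDOFDAY", or
    "PERFORM" (either CEMT or EXEC CICS PERFORM STATISTICS)."""
    if event == "ENDOFDAY":
        return True, True                 # always written, always reset
    if event == "INTERVAL":
        return (True, True) if recording_on else (False, False)
    if event == "PERFORM":
        return True, all_resetnow         # reset only with ALL(RESETNOW)
    raise ValueError("unknown event: " + event)
```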

Unsolicited statistics
are automatically gathered by CICS for dynamically
allocated and deallocated resources. CICS writes these
statistics to SMF just before the resource is deleted
regardless of the status of statistics recording.
Unsolicited statistics are produced for:
autoinstalled terminals
Whenever an autoinstalled terminal entry in the TCT
is deleted (after the terminal logs off), CICS collects
statistics covering the autoinstalled period since the
last interval. The period covers any delay interval
specified by the system initialization parameter,
AILDELAY.
If an autoinstall terminal logs on again before the
expiry of the delay interval, the accumulation of
statistics continues until the next interval. At that
interval, the accumulation of statistics is restarted.
DBCTL
Whenever CICS disconnects from DBCTL, CICS
collects the statistics covering the whole of the
DBCTL connection period.
DB2
Whenever CICS disconnects from DB2, CICS collects
the statistics for the DB2 connection and all
DB2ENTRYs covering the period from the last
interval.
Whenever a DB2ENTRY is discarded, CICS collects
the statistics for that DB2ENTRY covering the period
from the last interval.
FEPI connection
Unsolicited connection statistics are produced when
a connection is destroyed. This could occur when a
DISCARD TARGET, DISCARD NODE, DISCARD
POOL, DELETE POOL, DISCARD NODELIST, or
DISCARD TARGETLIST command is used.
FEPI pools
Unsolicited pool statistics are produced when a pool
is discarded by using the DISCARD POOL or
DELETE POOL command.
FEPI targets
Unsolicited target statistics are produced when a
target is destroyed or removed from a pool. This
occurs when a DELETE POOL, DISCARD POOL,
DISCARD TARGET, or DISCARD TARGETLIST
command is used.
files
Whenever CICS closes a file, CICS collects statistics
covering the period from the last interval.
JOURNALNAMES
Unsolicited journalname statistics are produced
when a journalname is discarded by using the
DISCARD JOURNALNAME command.

LOGSTREAMS
Unsolicited logstream statistics are produced when
the logstream is discarded from the MVS system
logger.
LSRpools
When CICS closes a file which is in an LSRPOOL,
CICS collects the statistics for the LSRPOOL. The
following peak values are reset at each interval
collection:
v Peak number of requests waiting for a string
v Maximum number of concurrent active file control
strings.

The other statistics, which are not reset at an interval
collection, cover the entire period from the time the
LSRPOOL is created (when the first file is opened)
until the LSRPOOL is deleted (when the last file is
closed).
PROGRAMS
When an installed program definition is discarded,
CICS collects the statistics covering the installed
period since the last interval.
| TCP/IP Services
| Whenever CICS closes a TCP/IP service, CICS
| collects the statistics covering the period since the
| last interval.
TRANSACTIONS
When an installed transaction definition is
discarded, CICS collects the statistics covering the
installed period since the last interval.
TRANSACTION CLASSES
When an installed transaction class definition is
discarded, CICS collects the statistics covering the
installed period since the last interval.
TRANSIENT DATA QUEUES
Unsolicited transient data queue statistics are
produced when a transient data queue is discarded
by using DISCARD TDQUEUE, or when an
extrapartition transient data queue is closed.

Note: To ensure that accurate statistics are recorded unsolicited statistics (USS)
must be collected. An unsolicited record resets the statistics fields it contains.
In particular, during a normal CICS shutdown, files are closed before the
end of day statistics are gathered. This means that file and LSRPOOL end of
day statistics will be zero, while the correct values will be recorded as
unsolicited statistics.

Resetting statistics counters


When statistics are written to the SMF data set, the counters are reset in one of the
following ways:
v Reset to zero

v Reset to 1
v Reset to current values (this applies to peak values)
v Are not reset
v Exceptions to the above.

For detailed information about the reset characteristics, see “Appendix A. CICS
statistics tables” on page 345.

The arrival of the end-of-day time, as set by the ENDOFDAY parameters, always
causes the current interval to be ended (possibly prematurely) and a new interval
to be started. Only end-of-day statistics are collected at the end-of-day time, even if
it coincides exactly with the expiry of an interval.

Changing the end-of-day value immediately changes the times at which INTERVAL
statistics are recorded. In Figure 2, when the end-of-day is changed from midnight
to 1700 just after 1400, the effect is for the interval times to be calculated from the
new end-of-day time. Hence there is a new interval recording at 1500, as well as at
the times that follow the new end-of-day time.

When you change any of the INTERVAL values (and also when CICS is
initialized), the length of the current (or first) interval is adjusted so that it expires
after an integral number of intervals from the end-of-day time.
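
That adjustment can be expressed directly: expiry boundaries fall a whole number of intervals before the next end-of-day time, and the current interval ends at the earliest such boundary after the present moment. A sketch, with times expressed as seconds within the day:

```python
DAY = 24 * 60 * 60  # seconds in a day

def next_expiry(now, end_of_day, interval):
    """Return the next statistics expiry time (seconds within the day).

    Boundaries lie a whole number of `interval` seconds before the
    next end-of-day, so changing either value moves every boundary.
    """
    # Next end-of-day strictly after `now`, possibly tomorrow.
    next_eod = end_of_day if end_of_day > now else end_of_day + DAY
    # Step back as many whole intervals as possible while remaining
    # strictly after `now`.
    whole_intervals_back = (next_eod - now - 1) // interval
    return (next_eod - whole_intervals_back * interval) % DAY
```

With the values from Figure 2, a call just after 1400 with end-of-day 1700 and a two-hour interval gives 1500, matching the new interval recording shown there.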

These rules are illustrated by the following example. I indicates an interval
recording and E indicates an end-of-day recording.

CICS is initialized with ENDOFDAY(000000) and INTERVAL(030000); INTERVAL
is then changed to 020000 and, later, ENDOFDAY is changed to 170000. The
original figure plots the recordings made between 0800 and 2100, with I marking
each interval recording and E the end-of-day recording at the new 1700
end-of-day time.

Figure 2. Resetting statistics counters

If you want your end-of-day recordings to cover 24 hours, set INTERVAL to
240000.

Note: Interval statistics are taken precisely on a minute boundary. Thus users with
many CICS regions on a single MVS image could have every region writing
statistics at the same time, if you have both the same interval and the same
end of day period specified. This could consume the entire CPU for up to
several seconds. If the cost becomes too noticeable, in terms of user response
time around the interval expiry, you should consider staggering the
intervals. One way of doing this while still maintaining very close
correlation of intervals for all regions is to use a PLT program like the
supplied sample DFH$STED which changes the end-of-day, and thus each
interval expiry boundary, by a few seconds. See the CICS Operations and
Utilities Guide for further information about DFH$STED.
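
The effect DFH$STED achieves can be pictured as giving each region an end-of-day a few seconds apart, which offsets every subsequent interval boundary by the same amount. The following is a sketch of that assignment only, not the DFH$STED code itself; the region names and the two-second spacing are arbitrary:

```python
def staggered_end_of_day(regions, spacing_seconds=2):
    """Assign each region an end-of-day offset (in seconds) a few
    seconds apart, so that interval expiries, which are calculated
    from the end-of-day time, no longer coincide across regions."""
    return {
        region: index * spacing_seconds
        for index, region in enumerate(regions)
    }
```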

Setting STATRCD=OFF reduces the occasions on which statistics are written to
the SMF data set, and the counters reset, to end-of-day only.

Processing CICS statistics


There are four ways of processing CICS statistics:
1. Use the CICS DFHSTUP offline utility. For guidance about retrieving CICS
statistics from SMF, and about running DFHSTUP, see the CICS Operations and
Utilities Guide.
2. Write your own program to report and analyze the statistics. For details about
the statistics record types, see the assembler DSECTs named in each set of
statistics. For programming information about the formats of CICS statistics
SMF records, see the CICS Customization Guide.
3. Use the sample statistics program (DFH0STAT).
You can use the statistics sample program, DFH0STAT, to help you determine
and adjust the values needed for CICS storage parameters, for example, using
DSALIM and EDSALIM. The program produces a report showing critical
system parameters from the CICS dispatcher, an analysis of the CICS storage
manager and loader statistics, and an overview of the MVS storage in use. The
program demonstrates the use of the EXEC CICS INQUIRE and EXEC CICS
COLLECT STATISTICS commands to produce an analysis of a CICS system.
You can use the sample program as provided or modify it to suit your needs.
In addition, DFH0STAT produces reports from the CICS statistics. For more
information about these, see “Appendix E. The sample
statistics program, DFH0STAT” on page 519.
4. Use the Performance Reporter program to process CICS SMF records to
produce joint reports with data from other SMF records. For more information,
see “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113.

Interpreting CICS statistics


In the following sections, as indicated in Table 1, guidance is given to help with the
interpretation of the statistics report. Information is presented in the order that it
appears in the DFHSTUP report. Some headings have been omitted where they
have little or no performance impact. Detailed information about the statistics
tables is given in “Appendix A. CICS statistics tables” on page 345.
Table 1. Performance statistics types

Statistic type                                    page
CICS DB2 statistics                                 47
Dispatcher statistics                               47
Dump statistics                                     53
Enqueue domain statistics                           53
Front end programming interface statistics          54
Files                                               54
ISC/IRC attach time statistics                      63
Journalname and logstream statistics                55
Loader statistics                                   49
LSRPOOLS                                            56
Programs                                            53
Recovery manager statistics                         56
Shared TS queue server statistics                   64
Statistics domain statistics                        46
Storage manager statistics                          48
Temporary storage                                   49
Terminals                                           57
Transaction class statistics                        47
Transaction manager statistics                      46
Transactions                                        53
Transient data (global)                             50
Transient data (resource)                           50
| User domain statistics                            50
VTAM statistics                                     51

Statistics domain statistics


Statistics recording on to an SMF data set can be a very CPU-intensive activity. The
amount of activity depends more on the number of resources defined than the
extent of their use. This may be another reason to maintain CICS definitions by
removing redundant or over-allocated resources.

For more information about the statistics domain statistics, see page 451.

Transaction manager statistics


The “Times the MAXTASK limit reached” indicates whether MXT is constraining
your system, or whether any possible integrity exposures are resulting from forced
resolutions of UOWs relating to the transactions. The only time that you may need
to constrain your system in this way is to reduce virtual storage usage. As most
CICS virtual storage is above the 16MB line you may be able to run your system
without MXT constraints, but note that CICS does preallocate storage, above and
below the 16MB line, for each MXT whether or not it is used. Changing MXT
affects your calculations for the dynamic storage areas. See “Maximum task
specification (MXT)” on page 287 for more information.

For more information about transaction manager statistics, see page 482.

46 CICS TS for OS/390: CICS Performance Guide


Transaction class (TRANCLASS) statistics
If you are never at the limit of your transaction class setting, you might consider
resetting its value, or review whether there is any need to continue
specifying any transaction types with that class.

For more information, see the transaction class statistics on page 478.

CICS DB2 statistics


In addition to the limited statistics output by the DSNC DISP STAT command and
those output to the STATSQUEUE destination of the DB2CONN during attachment
facility shutdown, a more comprehensive set of CICS DB2 statistics can be
collected using standard CICS statistics interfaces:
v The EXEC CICS COLLECT statistics command accepts the DB2CONN keyword
to allow CICS DB2 global statistics to be collected. CICS DB2 global statistics are
mapped by the DFHD2GDS DSECT.
v The EXEC CICS COLLECT statistics command accepts the DB2ENTRY()
keyword to allow CICS DB2 resource statistics to be collected for a particular
DB2ENTRY. CICS DB2 resource statistics are mapped by the DFHD2RDS DSECT.
v The EXEC CICS PERFORM STATISTICS command accepts the DB2 keyword to
allow the user to request that CICS DB2 global and resource statistics are written
out to SMF.

The CICS DB2 global and resource statistics are described in the CICS statistics
tables on page 352. For more information about CICS DB2 performance, see the
CICS DB2 Guide.

Dispatcher statistics

TCB statistics
The “Accum CPU time/TCB” is the amount of CPU time consumed by each CICS
TCB since the last time statistics were reset. Totaling the values of “Accum time in
MVS wait” and “Accum time dispatched” gives you the approximate time since
the last time CICS statistics were reset. The ratio of the “Accum CPU time /TCB”
to this time shows the percentage usage of each CICS TCB. The “Accum CPU
time/TCB” does not include uncaptured time, thus even a totally busy CICS TCB
would be noticeably less than 100% busy from this calculation. If a CICS region is
more than 70% busy by this method, you are approaching that region’s capacity.
The 70% calculation can only be very approximate, however, depending on such
factors as the workload in operation, the mix of activity within the workload, and
which release of CICS you are currently using. Alternatively, you can calculate if
your system is approaching capacity by using RMF to obtain a definitive
measurement, or you can use RMF with your monitoring system. For more
information, see OS/390 RMF V2R6 Performance Management Guide, SC28-1951.

Note: “Accum time dispatched” is NOT a measurement of CPU time because MVS
can run higher priority work, for example, all I/O activity and higher
priority regions, without CICS being aware.
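As a minimal sketch, the busy-TCB calculation above can be written out as follows. The parameter names are informal renderings of the statistics labels (they are not DSECT field names), and the 70% threshold is the approximate guideline given in the text.

```python
def tcb_busy_percent(accum_cpu_time, accum_mvs_wait, accum_dispatched):
    """Approximate percentage busy of one CICS TCB since the last
    statistics reset, using the calculation described in the text."""
    # "Accum time in MVS wait" + "Accum time dispatched" approximates
    # the elapsed time since the statistics were last reset.
    elapsed = accum_mvs_wait + accum_dispatched
    if elapsed == 0:
        return 0.0
    return 100.0 * accum_cpu_time / elapsed

# Example: 540s of CPU over 180s waiting + 720s dispatched -> 60% busy.
busy = tcb_busy_percent(540.0, 180.0, 720.0)
approaching_capacity = busy > 70.0  # guideline threshold from the text
```

The result is approximate because uncaptured CPU time is excluded, as noted above.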



| Modes of TCB are as follows:
| QR mode
| There is always one quasi-reentrant mode TCB. It is used to run
| quasi-reentrant CICS code and non-threadsafe application code.
| FO mode
| There is always one file-owning TCB. It is used for opening and closing
| user datasets.
| RO mode
| There is always one resource-owning TCB. It is used for opening and
| closing CICS datasets, loading programs, issuing RACF® calls, etc.
| CO mode
| The optional concurrent mode TCB is used for processes which can safely
| run in parallel with other CICS activity such as VSAM requests. The SIT
| keyword SUBTSKS has been defined to have numeric values (0 and 1) to
| specify whether there is to be a CO TCB.
| SZ mode
| The single optional SZ mode TCB is used by the FEPI interface.
| RP mode
| The single optional RP mode TCB is used to make ONC/RPC calls.
| J8 mode
| A task has a J8 mode TCB for its sole use if it needs to run a JVM.
| L8 mode
| L8 mode TCBs are not in use for CICS Transaction Server for OS/390
| Release 3.
| SO mode
| The SO mode TCB is used to make calls to the sockets interface of TCP/IP.
| SL mode
| The SL mode TCB is used to wait for activity on a set of listening sockets.
| S8 mode
| A task has an S8 TCB for its sole use if it needs to use the system Secure
| Sockets Layer.

| For more information about dispatcher statistics, see page 367.

Storage manager statistics


Dynamic program compression releases programs which are not being used
progressively as storage becomes shorter. However, short-on-storage conditions can
still occur and are reported as “Times went short on storage”. If this value is not
zero you might consider increasing the size of the dynamic storage area. Otherwise
you should consider the use of MXT and transaction classes to constrain your
system’s virtual storage.

Storage manager requests “Times request suspended”, and “Times cushion
released”, indicate that storage stress situations have occurred, some of which may
not have produced a short-on-storage condition. For example, a GETMAIN request
may cause the storage cushion to be released. However, loader can compress some
programs, obtain the cushion storage, and avoid the short-on-storage condition.



Note: In the task subpools section, the “Current elem stg” is the number of bytes
actually used while “Current page stg” is the number of pages containing
one or more of these bytes.

For more information, see the CICS statistics tables on page 452.

Loader statistics
“Average loading time” = “Total loading time” / “Number of library load
requests”. This indicates the response time overhead suffered by tasks when
accessing a program which has to be brought into storage. If “Average loading
time” has increased over a period, consider MVS library lookaside usage.
“Not-in-use” program storage is freed progressively so that the “Amount of the
dynamic storage area occupied by not in use programs”, and the free storage in
the dynamic storage area are optimized for performance. Loader attempts to keep
not-in-use programs in storage long enough to reduce the performance overhead of
reloading the program. As the amount of free storage in the dynamic storage
decreases, the not-in-use programs are freemained in order of those least frequently
used to avoid a potential short-on-storage condition.

Note: The values reported are for the instant at which the statistics are gathered,
and may have changed since the last report.

“Average Not-In-Use queue membership time” = “Total Not-In-Use queue
membership time” / “Number of programs removed by compression”. This is an
indication of how long a program is left in storage when not in use before being
removed by the dynamic program storage compression (DPSC) mechanism. If the
interval between uses of a program, that is, interval time divided by the number of
times used in the interval, is less than this value, there is a high probability that
the program is in storage already when it is next required.

Note: This factor is meaningful only if there has been a substantial degree of
loader domain activity during the interval and may be distorted by startup
usage patterns.

“Average suspend time” = “Total waiting time” / “Number of waited loader
requests”.

This is an indication of the response time impact which may be suffered by a task
due to contention for loader domain resources.

Note: This calculation is not performed on requests that are currently waiting.
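The three derived loader averages in this section follow the same pattern, so they can be computed together. This is a sketch assuming the raw totals have already been extracted from the statistics; the dictionary keys are illustrative names, not the DFHSTUP labels.

```python
def loader_averages(stats):
    """Compute the three derived loader averages described above."""
    def avg(total, count):
        # Avoid division by zero when no qualifying requests occurred.
        return total / count if count else 0.0
    return {
        "avg_loading_time": avg(stats["total_loading_time"],
                                stats["library_load_requests"]),
        "avg_niu_queue_time": avg(stats["total_niu_queue_time"],
                                  stats["programs_removed_by_dpsc"]),
        "avg_suspend_time": avg(stats["total_waiting_time"],
                                stats["waited_loader_requests"]),
    }

sample = {
    "total_loading_time": 12.0,       # seconds
    "library_load_requests": 240,
    "total_niu_queue_time": 600.0,    # seconds
    "programs_removed_by_dpsc": 30,
    "total_waiting_time": 2.0,        # seconds
    "waited_loader_requests": 8,
}
averages = loader_averages(sample)
```

With these illustrative totals, each library load took 0.05 seconds on average and a not-in-use program stayed in storage 20 seconds before the DPSC removed it.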

For more information, see the CICS statistics tables on page 431.

Temporary storage statistics


If a data item is written to temporary storage (using WRITEQ TS), a temporary
storage queue is built.

The “Writes more than control interval” is the number of writes of records whose
length was greater than the control interval (CI) size of the TS data set. This value
should be used to adjust the CI size. If the reported value is large, increase the CI
size. If the value is zero, consider reducing the CI size until a small value is
reported.

The number of “times aux. storage exhausted” is the number of situations where
one or more transactions may have been suspended because of a NOSPACE
condition, or (using a HANDLE CONDITION NOSPACE command, the use of
RESP on the WRITEQ TS command, or WRITEQ TS NOSUSPEND command) may
have been forced to abend. If this item appears in the statistics, increase the size of
the temporary storage data set. “Buffer writes” is the number of WRITEs to the
temporary storage data set. This includes both WRITEs necessitated by recovery
requirements and WRITEs forced by the buffer being needed to accommodate
another CI. I/O activity caused by the latter reason can be minimized by
increasing buffer allocation using the system initialization parameter, TS=(b,s),
where b is the number of buffers and s is the number of strings.

The “Peak number of strings in use” item is the peak number of concurrent I/O
operations to the data set. If this is significantly less than the number of strings
specified in the TS system initialization parameter, consider reducing the system
initialization parameter to approach this number.

If the “Times string wait occurred” is not zero, consider increasing the number of
strings. For details about adjusting the size of the TS data set and the number of
strings and buffers, see the CICS System Definition Guide.
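A sketch of how the temporary-storage indicators above might be turned into tuning suggestions. The threshold for a “large” number of writes greater than the CI size (100 here) is an assumption; CICS does not define one.

```python
def ts_tuning_hints(writes_gt_ci, aux_exhausted, peak_strings,
                    strings_defined, string_waits):
    """Map the temporary-storage statistics discussed above onto the
    tuning actions recommended in the text."""
    hints = []
    if writes_gt_ci > 100:            # "large" is an assumed threshold
        hints.append("increase the CI size")
    elif writes_gt_ci == 0:
        hints.append("consider reducing the CI size")
    if aux_exhausted > 0:
        hints.append("increase the size of the TS data set")
    if string_waits > 0:
        hints.append("increase the number of strings in TS=(b,s)")
    elif peak_strings < strings_defined:
        hints.append("consider reducing strings toward the peak used")
    return hints
```

For example, a region reporting 500 oversized writes, one aux-storage-exhausted event, and two string waits would get all three “increase” suggestions.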

For more information, see the CICS statistics tables on page 468.

Transient data statistics


You should monitor the data provided by CICS on the amount of I/O activity for
transient data, in the form of the number of READs and WRITEs to the transient
data intrapartition data set. If there is a large amount of READ activity, this
indicates that the buffer allocation may be insufficient, even though the “peak
concurrent string access” may be fewer than the number allocated.

You should aim to minimize the “Intrapartition buffer waits” and “string waits” by
increasing the number of buffers and the number of strings if you can afford any
associated increase in your use of real storage.

| For more information, see the CICS statistics tables on pages 503 and 468.

|
| User domain statistics
| The user domain attempts to minimize the number of times it calls the security
| domain to create user security blocks (such as the ACEE), because this operation is
| very expensive in both processor time and input/output operations. If possible,
| each unique representation of a user is shared between multiple transactions. A
| user-domain representation of a user can be shared if the following attributes are
| identical:
| v The userid.
| v The groupid.
| v The applid. This is not necessarily the same for all the users in a region. The
| applid is shipped with the userid across MRO links.
| v The port of entry. This can be the netname for users signed on at VTAM
| terminals, or the console name for users signed on at consoles. It is null for
| other terminal types and for users associated with non-terminal transactions.

| The user domain keeps a count of the number of concurrent usages of a shared
| instance of a user. The count includes the number of times the instance has been
| associated with a CICS resource (such as a transient data queue) and the number
| of active transactions that are using the instance.

| Whenever CICS adds a new user instance to the user domain, the domain attempts
| to locate that instance in its user directory. If the user instance already exists with
| the parameters described above, that instance is reused. USGDRRC records how
| many times this is done. However, if the user instance does not already exist, it
| needs to be added. This requires an invocation of the security domain and the
| external security manager. USGDRNFC records how many times this is necessary.

| When the count associated with the instance is reduced to zero, the user instance is
| not immediately deleted: instead it is placed in a timeout queue controlled by the
| USRDELAY system initialization parameter. While it is in the timeout queue, the
| user instance is still eligible to be reused. If it is reused, it is removed from the
| timeout queue. USGTORC records how many times a user instance is reused while
| it was being timed out, and USGTOMRT records the average time that user
| instances remain on the timeout queue until they are removed.

| However, if a user instance remains on the timeout queue for a full USRDELAY
| interval without being reused, it is deleted. USGTOEC records how many times
| this happens.

| If USGTOEC is large compared to USGTORC, you should consider increasing the
| value of USRDELAY. But if USGTOMRT is much smaller than USRDELAY, you
| may be able to reduce USRDELAY without significant performance effect.

| You should be aware that high values of USRDELAY may affect your security
| administrator’s ability to change the authorities and attributes of CICS users,
| because those changes are not reflected in CICS until the user instance is refreshed
| in CICS by being flushed from the timeout queue after the USRDELAY interval.
| Some security administrators may require you to specify USRDELAY=0. This still
| allows some sharing of user instances if the usage count is never reduced to zero.
| Generally, however, remote users are flushed out immediately after the transaction
| they are executing has terminated, so that their user control blocks have to be
| reconstructed frequently. This results in poor performance. For more information,
| see “User domain statistics” on page 499.
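The two rules of thumb above can be sketched as a simple check. Interpreting “much smaller” as below one tenth of USRDELAY is an assumption made for illustration, not a CICS-defined rule.

```python
def usrdelay_hint(usgtoec, usgtorc, usgtomrt, usrdelay):
    """Suggest a USRDELAY adjustment from the user-domain timeout
    statistics described above."""
    if usgtoec > usgtorc:
        # Many instances time out unused compared with those reused.
        return "consider increasing USRDELAY"
    if usgtomrt < usrdelay / 10.0:    # "much smaller" taken as < 10%
        return "USRDELAY could probably be reduced"
    return "USRDELAY looks reasonable"
```

For example, 100 timeouts against only 10 reuses suggests raising USRDELAY, while a mean residency of 2 seconds against a 30-second USRDELAY suggests it could be lowered.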

VTAM statistics
The “peak RPLs posted” includes only the receive-any RPLs defined by the
RAPOOL system initialization parameter. In non-HPO systems, the value shown
can be larger than the value specified for RAPOOL, because CICS reissues each
receive-any request as soon as the input message associated with the posted RPL
has been disposed of. VTAM may well cause this reissued receive-any RPL to be
posted during the current dispatch of terminal control. While this does not
necessarily indicate a performance problem, a number much higher than the
number of receive-any requests specified via RAPOOL may indicate, for MVS, that
VTAM was required to queue incoming messages in subpool 229 when no
receive-any was available to accept the input. You should limit this VTAM
queueing activity by providing a sufficient number of receive-any requests to
handle all but the input message rate peaks.

In addition to indicating whether the value for the RAPOOL system initialization
parameter is large enough, you can also use the “maximum number of RPLs
posted” statistic (A03RPLX) to determine other information. This depends upon
whether your MVS system has HPO or not.

For HPO, RAPOOL(A,B) allows the user to tune the active count (B). The size of
the pool (A) should depend on the speed at which the requests are processed. The
active count (B) has to be able to satisfy VTAM at any given time, and is
dependent on the inbound message rate for receive-any requests.

Here is an example to illustrate the differences for an HPO and a non-HPO system.
Suppose two similar CICS executions use a RAPOOL value of 2 for both runs. The
number of RPLs posted in the MVS/HPO run is 2, while the MVS/non-HPO run
is 31. This difference is better understood when we look at the next item in the
statistics.

This item is not printed if the maximum number of RPLs posted is zero. In our
example, let us say that the MVS/HPO system reached the maximum 495 times.
The non-HPO MVS system reached the maximum of 31 only once. You might
deduce from this that the pool is probably too small (RAPOOL=2) for the HPO
system and it needs to be increased. An appreciable increase in the RAPOOL value,
from 2 to, say, 6 or more, should be tried. As you can see from the example given
below, the RAPOOL value was increased to 8 and the maximum was reached only
16 times:
MAXIMUM NUMBER OF RPLS POSTED 8
NUMBER OF TIMES REACHED MAXIMUM 16

In a non-HPO system, these two statistics are less useful, except that, if the
maximum number of RPLs posted is less than RAPOOL, RAPOOL can be reduced,
thereby saving virtual storage.
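The RAPOOL guidance above can be summarized as a sketch; the choice of 100 as “reached the maximum frequently” for HPO systems is an illustrative assumption.

```python
def rapool_hint(hpo, rapool_size, max_rpls_posted, times_reached_max):
    """Apply the HPO and non-HPO rules of thumb discussed above."""
    if hpo:
        if times_reached_max > 100:   # assumed "frequently" threshold
            return "increase RAPOOL"
    elif max_rpls_posted < rapool_size:
        return "RAPOOL can be reduced to save virtual storage"
    return "no change indicated"

# The worked example above: RAPOOL=2 on an HPO system, maximum
# reached 495 times.
hint = rapool_hint(True, 2, 2, 495)
```

After the example's increase to RAPOOL=8, the maximum was reached only 16 times, and the same check would report no further change.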

VTAM SOS simply means that a CICS request for service from VTAM was rejected
with a VTAM sense code indicating that VTAM was unable to acquire the storage
required to service the request. VTAM does not give any further information to
CICS, such as what storage it was unable to acquire.

This situation most commonly arises at network startup or shutdown when CICS
is trying to schedule requests concurrently, to a larger number of terminals than
during normal execution. If the count is not very high, it is probably not worth
tracking down. In any case, CICS automatically retries the failing requests later on.

If your network is growing, however, you should monitor this statistic and, if the
count is starting to increase, you should take action. Use D NET,BFRUSE to check
if VTAM is short on storage in its own region and increase VTAM allocations
accordingly if this is required.

The maximum value for this statistic is 99, at which time a message is sent to the
console and the counter is reset to zero. However, VTAM controls its own buffers
and gives you a facility to monitor buffer usage.



If you feel that D NET,BFRUSE is insufficient, you can activate SMS tracing in
VTAM to sample buffer activity at regular intervals. If you have installed NetView,
you can also have dynamic displays of the data that is obtained with D NET,
BFRUSE.

For more information, see the CICS statistics tables on page 500.

Dump statistics
Both transaction and system dumps are very expensive and should be thoroughly
investigated and eliminated.

For more information, see the CICS statistics tables on page 373.

Enqueue statistics
The enqueue domain supports the CICS recovery manager. Enqueue statistics
contain the global data collected by the enqueue domain for enqueue requests.

Waiting for an enqueue on a resource can add significant delays in the execution of
a transaction. The enqueue statistics allow you to assess the impact of waiting for
enqueues in the system and the impact of retained enqueues on waiters. Both the
current activity and the activity since the last reset are available.

For more information, see the CICS statistics tables on page 378.

Transaction statistics
Use these statistics to find out which transactions (if any) had storage violations.

It is also possible to use these statistics for capacity planning purposes. But
remember, many systems experience both increasing cost per transaction as well as
increasing transaction rate.

For more information, see the CICS statistics tables on page 484.

Program statistics
“Average fetch time” is an indication of how long it actually takes MVS to perform
a load from the partitioned data set in the RPL concatenation into CICS managed
storage.

The average for each RPL offset of “Program size” / “Average fetch time” is an
indication of the byte transfer rate during loads from a particular partitioned data
set. A comparison of these values may assist you to detect bad channel loading or
file layout problems.

For more information, see the CICS statistics tables on page 442.



Front end programming interface (FEPI) statistics
CICS monitoring and statistics data can be used to help tune FEPI applications,
and to control the resources that they use. FEPI statistics contain data about the
use of each FEPI pool, a particular target in a pool, and each FEPI connection.

For more information, see the CICS statistics tables on page 382.

File statistics
File statistics collect data about the number of application requests against your
data sets. They indicate the number of requests for each type of service that are
processed against each file. If the number of requests is totalled daily or for every
CICS execution, the activity for each file can be monitored for any changes that
occur. Note that these file statistics may have been reset during the day; to obtain a
figure of total activity against a particular file during the day, refer to the
DFHSTUP summary report. Other data pertaining to file statistics and special
processing conditions are also collected.

The wait-on-string number is only significant for files related to VSAM data sets.
For VSAM, STRNO=5 in the file definition means, for example, that CICS permits
five concurrent requests to this file. If a transaction issues a sixth request for the
same file, this request must wait until one of the other five requests has completed
(“wait-on-string”).

The number of strings associated with a file is specified through resource
definition online.

String number setting is important for performance. Too low a value causes
excessive waiting for strings by tasks and long response times. Too high a value
increases VSAM virtual storage requirements and therefore real storage usage.
However, as both virtual storage and real storage are above the 16MB line, this
may not be a problem. In general, the number of strings should be chosen to give
near zero “wait on string” count.

Note: Increasing the number of strings can increase the risk of deadlocks because
of greater transaction concurrency. To minimize the risk you should ensure
that applications follow the standards set in the CICS Application
Programming Guide.

A file can also “wait-on-string” for an LSRpool string. This type of wait is reflected
in the local shared resource pool statistics section (see “LSRPOOL statistics” on
page 56) and not in the file wait-on-string statistics.

If you are using data tables, an extra line appears in the DFHSTUP report for those
files defined as data tables. “Read requests”, “Source reads”, and “Storage
alloc(K)” are usually the numbers of most significance. For a CICS-maintained
table a comparison of the difference between “read requests” and “source reads”
with the total request activity reported in the preceding line shows how the
request traffic divides between using the table and using VSAM and thus indicates
the effectiveness of converting the file to a CMT. “Storage alloc(K)” is the total
storage allocated for the table and provides guidance to the cost of the table in
storage resource, bearing in mind the possibility of reducing LSRpool sizes in the
light of reduced VSAM accesses.
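For a CICS-maintained table, the comparison of “read requests” with “source reads” described above amounts to a hit ratio, which can be computed as a sketch:

```python
def cmt_effectiveness(read_requests, source_reads):
    """Fraction of read requests satisfied from the data table itself
    rather than by reading the source VSAM data set."""
    if read_requests == 0:
        return 0.0
    return (read_requests - source_reads) / read_requests

# e.g. 10000 read requests of which only 1000 went to VSAM.
ratio = cmt_effectiveness(10000, 1000)
```

A ratio close to 1.0 indicates the conversion to a CMT is paying off; a low ratio suggests the table is adding storage cost without avoiding much VSAM activity.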



For more information, see the CICS statistics tables on page 385.

Journalname and log stream statistics


CICS collects statistics on the data written to each journal and log stream which
can be used to analyze the activity of a single region. However, because log
streams can be shared across multiple MVS images, it can be more useful to
examine the statistics generated by MVS.

Journalname statistics contain data about the use of each journal, as follows:
v The journal type (MVS logger, SMF or dummy)
v The log stream name for MVS logger journal types only
v The number of API journal writes
v The number of bytes written
v The number of flushes of journal data to log streams or SMF.

Note that the CICS system journalname and log stream statistics for the last three
items on this list are always zero. These entries appear in journalname statistics to
inform you of the journal type and log stream name for the special CICS system
journals.

For more information on journalname statistics, see the CICS statistics tables on
page 411.

Log stream statistics contain data about the use of each log stream including the
following:
v The number of write requests to the log stream
v The number of bytes written to the log stream
v The number of log stream buffer waits
v The number of log stream browse and delete requests.

For more information on log stream statistics, see the CICS statistics tables on page
413.

Journalnames are a convenient means of identifying a destination log stream that is
to be written to. CICS applications write data to journals using their journalname.
CICS itself usually uses the underlying log stream name when issuing requests to
the CICS log manager, and this must be considered when interpreting journalname
and log stream resource statistics. For example, these may show many operations
against a log stream, but relatively few, if any, writes to a journalname which maps
to that log stream. This indicates that it is CICS that accesses the resource at the
log stream level, not an application writing to it through the CICS application
programming interface. These results can typically be seen when examining the
journalname resource statistics for DFHLOG and DFHSHUNT, and comparing
them with the resource statistics for their associated CICS system log streams.

For more information on logging and journaling, see “Chapter 22. Logging and
journaling” on page 271.

For information about the SMF Type 88 records produced by the MVS system
logger, see the OS/390 MVS System Management Facilities (SMF) manual.



LSRPOOL statistics
CICS supports the use of up to eight LSRpools. CICS produces two sets of statistics
for LSRpool activity: one set detailing the activity for each LSRpool, and one set
giving details for each file associated with an LSRpool. Statistics are printed for all
pools that have been built (a pool is built when at least one file using the pool has
been opened).

You should usually aim to have no requests that waited for a string. If you do then
the use of MXT may be more effective.

When the last open file in an LSRPOOL is closed, the pool is deleted. The
subsequent unsolicited statistics (USS) LSRPOOL record written to SMF can be
mapped by the DFHA08DS DSECT.

The fields relating to the size and characteristics of the pool (maximum key length,
number of strings, number and size of buffers) may be those which you have
specified for the pool, through resource definition online command DEFINE
LSRPOOL. Alternatively, if some, or all, of the fields were not specified, the values
of the unspecified fields are those calculated by CICS when the pool is built.

It is possible to change the LSRPOOL specification of a file when it is closed, but
you must then consider the characteristics of the pool that the file is to share if the
pool is already built, or the file open may fail. If the pool is not built and the pool
characteristics are specified by you, take care that these are adequate for the file. If
the pool is not built and CICS calculates all or some of the operands, it may build
the pool with characteristics that differ from earlier creations of that pool. The
statistics show all creations of the pool, so any changed characteristics are visible.

You should consider specifying separate data and index buffers if you have not
already done so. This is especially true if index CI sizes are the same as data CI
sizes.

You should also consider using Hiperspace™ buffers while retaining a reasonable
number of address space buffers. Hiperspace buffers tend to give the CPU savings
of keeping data in memory, exploiting the relatively cheap expanded storage, while
allowing central storage to be used more effectively.

For more information, see the CICS statistics tables on page 416.

Recovery manager statistics


Recovery manager statistics detail the syncpoint activity of all the transactions in
the system. From these statistics you can assess the impact of shunted UOWs
(units of work that suffered an indoubt failure and are waiting for
resynchronization with their recovery coordinator, or for the problem with the
resources to be resolved). Shunted UOWs still hold locks and enqueues until they
are resolved. Statistics are available on any forced resolutions of shunted UOWs to
help assess whether any integrity exposures may have been introduced. The
current activity and the activity since the last reset are available.

For more information, see the CICS statistics tables on page 445.



Terminal statistics
There are a number of ways in which terminal statistics are important for
performance analysis. From them, you can get the number of inputs and outputs,
that is, the loading of the system by end users. Line-transmission faults and
transaction faults are shown (these both have a negative influence on performance
behavior).

For more information, see the CICS statistics tables on page 474.

ISC/IRC system and mode entry statistics


You can use the ISC/IRC system and mode entry statistics to detect some problems
in a CICS intersystem environment.

The following section attempts to identify the kind of questions you may have in
connection with system performance, and describes how answers to those
questions can be derived from the statistics report. It also describes what actions, if
any, you can take to resolve ISC/IRC performance problems.

Some of the questions you may be seeking an answer to when looking at these
statistics are these:
v Are there enough sessions defined?
v Is the balance of contention winners to contention losers correct?
v Is there conflicting usage of APPC modegroups?
v What can be done if there are unusually high numbers, compared with normal
or expected numbers, in the statistics report?

Summary connection type for statistics fields


The following two tables show the connection type that is relevant for each
statistics field:
Table 2. ISC/IRC system entries
System entry Field IRC LU6.1 APPC
Connection name A14CNTN X X X
AIDS in chain A14EALL X X X
Generic AIDS in chain A14ESALL X X X
ATIs satisfied by contention losers A14ES1 X
ATIs satisfied by contention winners A14ES2 X X
Peak contention losers A14E1HWM X X
Peak contention winners A14E2HWM X X
Peak outstanding allocates A14ESTAM X X X
Total number of allocates A14ESTAS X X X
Queued allocates A14ESTAQ X X X
Failed link allocates A14ESTAF X X X
Failed allocates due to sessions in use A14ESTAO X X X
Total bids sent A14ESBID X

Current bids in progress A14EBID X
Peak bids in progress A14EBHWM X
File control function shipping requests A14ESTFC X X X
Interval control function shipping requests A14ESTIC X X X
TD function shipping requests A14ESTTD X X X
TS function shipping requests A14ESTTS X X X
DLI function shipping requests A14ESTDL X X X
Terminal sharing requests A14ESTTC X X

All the fields below are specific to the mode group of the mode name given.
Table 3. ISC/IRC mode entries
Mode entry Field IRC LU6.1 APPC
Mode name A20MODE X
ATIs satisfied by contention losers A20ES1 X
ATIs satisfied by contention winners A20ES2 X
Peak contention losers A20E1HWM X
Peak contention winners A20E2HWM X
Peak outstanding allocates A20ESTAM X
Total specific allocate requests A20ESTAS X
Total specific allocates satisfied A20ESTAP X
Total generic allocates satisfied A20ESTAG X
Queued allocates A20ESTAQ X
Failed link allocates A20ESTAF X
Failed allocates due to sessions in use A20ESTAO X
Total bids sent A20ESBID X
Current bids in progress A20EBID X
Peak bids in progress A20EBHWM X

For more information about the usage of individual fields, see the CICS statistics
described under “ISC/IRC system and mode entries” on page 396.

General guidance for interpreting ISC/IRC statistics


Here is some guidance information on interpreting the ISC/IRC statistics:
1. Usage of A14xxx and A20xxx fields:
v In most cases, the guidance given in the following section relates to all
connection types, that is, IRC, LU6.1, and APPC. Where the guidance is
different for a particular connection type, the text indicates the relevant type
of connection.

58 CICS TS for OS/390: CICS Performance Guide


v The statistics fields that relate to IRC and LU6.1 are always prefixed A14,
whereas the APPC fields can be prefixed by A14 or A20. For more
information on which field relates to which connection type, see Table 2 on
page 57 and Table 3 on page 58.
2. Use of the terms “Contention Winner” and “Contention Loser”:
v APPC sessions are referred to as either contention winners or contention losers.
These are equivalent to secondaries (SEND sessions) and primaries
(RECEIVE sessions) when referring to LU6.1 and IRC.
3. Tuning the number of sessions defined:
v In the following sections, it is sometimes stated that, if certain counts are too
high, you should consider making more sessions available. In these cases, be
aware that, as the number of sessions defined in the system is increased, it
may have the following effects:
– Increased use of real and virtual storage.
– Increased use of storage on GATEWAY NCPs in the network.
– Increased use of storage by VTAM.
– Increased line loading in the network.
– The back-end CICS system (AOR) may not be able to cope with the
increased workload from the TOR.
– Possible performance degradation due to increased control block scanning
by CICS.
v The recommendation is to set the number of sessions available to the highest
value you think you may need and then, through monitoring the statistics
(both ISC/IRC and terminal statistics) over a number of CICS runs, reduce
the number of sessions available to just above the number required to avoid
problems.
4. Tuning the number of contention winner and contention loser sessions
available:
v Look at both sides of the connection when carrying out any tuning, because
changing the loading on one side could adversely affect the other. Any
change made to the number of contention winner sessions available in the
TOR has an effect on the number of contention loser sessions in the AOR.
5. Establish a connection profile for comparison and measurement.
One of the objectives of a tuning exercise should be to establish a profile of the
usage of CICS connections during both normal and peak periods. Such usage
profiles can then be used as a reference point when analyzing statistics to help
you:
v Determine changed usage patterns over a period of time
v Anticipate potential performance problems before they become critical.

Are enough sessions defined?


To help you determine whether you have enough sessions defined, you can check
a number of peak fields that CICS provides in the statistics report. These are:
1. “Peak outstanding allocates” (fields A14ESTAM and A20ESTAM)
“Total number of allocates” (field A14ESTAS)
“Total specific allocate requests” (field A20ESTAS)
When reviewing the number of sessions for APPC modegroups, if the number
of “Peak outstanding allocates” appears high in relation to the “Total number
of allocates” or the “Total specific allocate requests” within a statistics
reporting period, it could indicate that the total number of sessions defined is
too low.

2. “Peak contention winners” (fields A14E2HWM and A20E2HWM)
“Peak contention losers” (fields A14E1HWM and A20E1HWM)
If the number of (“Peak contention winners” + “Peak contention losers”) equals
the maximum number of sessions available (as defined in the SESSIONS
definition), this indicates that, at some point in the statistics reporting period,
all the sessions available were, potentially, in use. While these facts alone may
not indicate a problem, if CICS also queued or rejected some allocate requests
during the same period, the total number of sessions defined is too low.
3. “Failed allocates due to sessions in use” (fields A14ESTAO and A20ESTAO)
This value is incremented for allocates that are rejected with a SYSBUSY
response because no sessions are immediately available (that is, for allocate
requests with the NOSUSPEND or NOQUEUE option specified). This value is
also incremented for allocates that are queued and then rejected with an AAL1
abend code; the AAL1 code indicates the allocate is rejected because no session
became available within the specified deadlock timeout (DTIMOUT) time limit.
If the number of “Failed allocates due to sessions in use” is high within a
statistics reporting period, it indicates that not enough sessions were
immediately available, or available within a reasonable time limit.

Action: Consider making more sessions available with which to satisfy the allocate
requests. Enabling CICS to satisfy allocate requests without the need for queueing
may lead to improved performance.

However, be aware that increasing the number of sessions available on the front
end potentially increases the workload to the back end, and you should investigate
whether this is likely to cause a problem.
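The three checks above can be expressed as a simple post-processing routine run
against unloaded statistics. The following Python sketch is illustrative only: the
record layout (a plain mapping keyed by statistics field name) and the 10%
threshold used in the first check are assumptions for the example, not part of
the CICS statistics format or of any documented tuning rule.

```python
def sessions_look_too_low(stats):
    """Apply the three checks above to one connection's statistics.

    `stats` is an illustrative dict keyed by statistics field name;
    decoding the real SMF type 110 records is outside this sketch.
    """
    total_allocates = stats["A14ESTAS"]           # total number of allocates
    peak_outstanding = stats["A14ESTAM"]          # peak outstanding allocates
    peak_in_use = stats["A14E1HWM"] + stats["A14E2HWM"]  # peak losers + winners
    reasons = []

    # Check 1: peak outstanding allocates high relative to total allocates
    # (the 10% threshold is an assumption for illustration).
    if total_allocates and peak_outstanding / total_allocates > 0.1:
        reasons.append("peak outstanding allocates high relative to total")

    # Check 2: all defined sessions were potentially in use at some point,
    # and allocates were queued or rejected in the same period.
    if (peak_in_use >= stats["max_sessions"]
            and (stats["A14ESTAQ"] or stats["A14ESTAO"])):
        reasons.append("all sessions in use while allocates queued or rejected")

    # Check 3: allocates failed because no session was available in time.
    if stats["A14ESTAO"]:
        reasons.append("failed allocates due to sessions in use")

    return reasons

period = {"A14ESTAS": 1000, "A14ESTAM": 200, "A14E1HWM": 6, "A14E2HWM": 10,
          "A14ESTAQ": 40, "A14ESTAO": 3, "max_sessions": 16}
print(sessions_look_too_low(period))
```

Any non-empty result is a prompt to review the SESSIONS definitions for the
connection, subject to the tuning cautions listed earlier in this section.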

Is the balance of contention winners to contention losers correct?

There are several ways to determine the answer to this, because CICS provides a
number of fields which show contention winner and contention loser usage.

The following fields should give some guidance as to whether you need to
increase the number of contention winner sessions defined:
1. “Current bids in progress” (fields A14EBID and A20EBID)
“Peak bids in progress” (fields A14EBHWM and A20EBHWM)
The value “Peak bids in progress” records the maximum number of bids in
progress at any one time during the statistics reporting period. “Current bids in
progress” is always less than or equal to the “Peak bids in progress”.
Ideally, these fields should be kept to zero. If either of these fields is high, it
indicates that CICS is having to perform a large number of bids for contention
loser sessions.
2. “Peak contention losers” (fields A14E1HWM and A20E1HWM).
If the number of “Peak contention losers” is equal to the number of contention
loser sessions available, the number of contention loser sessions defined may be
too low. Alternatively, for APPC/LU6.1, CICS could be using the contention
loser sessions to satisfy allocates due to a lack of contention winner sessions.
This should be tuned at the front-end in conjunction with winners at the
back-end. For details of how to specify the maximum number of sessions, and
the number of contention winners, see the information on defining SESSIONS
in the CICS Resource Definition Guide.

Actions:

For APPC, consider making more contention winner sessions available, which
should reduce the need to use contention loser sessions to satisfy allocate requests
and, as a result, should also make more contention loser sessions available.

For LU6.1, consider making more SEND sessions available, which decreases the
need for LU6.1 to use primaries (RECEIVE sessions) to satisfy allocate requests.

For IRC, there is no bidding involved, as MRO can never use RECEIVE sessions to
satisfy allocate requests. If “Peak contention losers (RECEIVE)” is equal to the
number of contention loser (RECEIVE) sessions on an IRC link, the number of
allocates from the remote system is possibly higher than the receiving system can
cope with. In this situation, consider increasing the number of RECEIVE sessions
available.

Note: The usage of sessions depends on the direction of flow of work. Any tuning
which increases the number of winners available at the front-end should
also take into account whether this is appropriate for the direction of flow of
work over a whole period, such as a day, week, or month.
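The winner/loser checks above can also be sketched as a small routine. The
mapping of field names to values is illustrative (it is not the statistics record
layout), and the check against the defined number of loser sessions assumes you
supply that number from your SESSIONS definitions:

```python
def winner_sessions_look_too_low(stats):
    """Check the contention winner/loser balance described above.

    `stats` is an illustrative mapping of mode-entry statistics fields
    plus the number of contention loser sessions defined.
    """
    findings = []

    # Non-zero peak bids in progress: CICS is bidding for contention
    # loser sessions, suggesting too few contention winners.
    if stats["A20EBHWM"] > 0:
        findings.append("bids in progress: consider more contention winners")

    # Peak contention losers at the defined limit: loser sessions may be
    # too few or, for APPC/LU6.1, they are being used to satisfy
    # allocates because contention winners ran out.
    if stats["A20E1HWM"] >= stats["loser_sessions_defined"]:
        findings.append("peak contention losers at the defined limit")

    return findings

print(winner_sessions_look_too_low(
    {"A20EBHWM": 4, "A20E1HWM": 8, "loser_sessions_defined": 8}))
```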

Is there conflicting usage of APPC modegroups?


There is a possibility of conflicting APPC modegroup usage, where a mixture of
generic and specific allocate requests is used within a CICS region.

A specific allocate is an allocate request that specifies a particular (specific)
modegroup of sessions to allocate from, whereas a generic allocate does not
specify any particular modegroup, only the system to which an allocate is
required. In the latter case, CICS determines the session and modegroup to
allocate.

The fields you need to investigate to answer this question are:
“Total generic allocates satisfied” (field A20ESTAG)
“Total specific allocate requests” (field A20ESTAS)
“Peak outstanding allocates” (field A20ESTAM)
“Total specific allocates satisfied” (field A20ESTAP).
If the “Total generic allocates satisfied” is much greater than “Total specific allocate
requests”, and “Peak outstanding allocates” is not zero, it could indicate that
generic allocates are being made only, or mainly, to the first modegroup for a
connection.

This could cause a problem for any specific allocate, because CICS initially tries to
satisfy a generic allocate from the first modegroup before trying other modegroups
in sequence.

Action: Consider changing the order of the installed modegroup entries.


Modegroups for a connection are represented by TCT mode entries (TCTMEs),
with the modegroup name being taken from the MODENAME specified on the
SESSIONS definition. The order of the TCTMEs is determined by the order in
which CICS installs the SESSIONS definitions, which is in the order of the
SESSIONS name as stored on the CSD (ascending alphanumeric key sequence). See
Figure 3 on page 62 for an illustration of this. To change the order of the TCTMEs,
you must change the names of the SESSIONS definitions. You can use the CEDA
RENAME command with the AS option to rename the definition with a different
SESSIONS name within the CSD group. By managing the order in which the
TCTMEs are created you can ensure that specific allocates reference modegroups
lower down the TCTME chain, and avoid conflict with the generic ALLOCATEs.
Alternatively, make all allocates specific allocates.

Group ISCGROUP in CSD            Installed in CICS region:

CONNECTION(CICA) ............... TCTSE created
                                    |
                                 Special TCTME for SNASVCMG
                                    |
SESSIONS(SESSIONA)                  |
  CONN(CICA)                        |
  MODENAME(MODEGRPY) - - - - - - First user TCTME created
                                 for MODEGRPY
                                    |  (pointer to next modegroup)
SESSIONS(SESSIONB)                  |
  CONN(CICA)                        |
  MODENAME(MODEGRPX) - - - - - - Second user TCTME created
                                 for MODEGRPX

Figure 3. How the sequence of TCT mode entries is determined
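Because CICS installs SESSIONS definitions in ascending alphanumeric order of
the SESSIONS name on the CSD, you can predict the TCTME chain order simply by
sorting the names. The following Python sketch illustrates the sorting rule only;
the definition names and modegroup names are the ones used in Figure 3:

```python
def tctme_order(sessions_defs):
    """Predict the TCT mode entry chain order for one connection.

    sessions_defs maps SESSIONS definition name -> MODENAME. CICS
    installs the definitions in ascending alphanumeric order of the
    SESSIONS name, which fixes the TCTME order.
    """
    return [sessions_defs[name] for name in sorted(sessions_defs)]

# With these names, MODEGRPY's TCTME is created first (as in Figure 3).
defs = {"SESSIONA": "MODEGRPY", "SESSIONB": "MODEGRPX"}
print(tctme_order(defs))      # MODEGRPY comes first

# Renaming SESSIONA so that it sorts after SESSIONB moves MODEGRPY down
# the chain, out of the way of generic allocates, which are satisfied
# from the first modegroup.
renamed = {"SESSIONC": "MODEGRPY", "SESSIONB": "MODEGRPX"}
print(tctme_order(renamed))   # MODEGRPX now comes first
```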

What if there are unusually high numbers in the statistics report?

When looking down the ISC/IRC system and mode entries statistics report, you may
notice a number of fields that appear to be unusually high in relation to all others.
This section lists some of those fields, and what action you can take to reduce their
numbers:
1. “Peak contention losers” (fields A14E1HWM and A20E1HWM).
If the number of “Peak contention losers” is equal to the number of contention
loser sessions available, the number of contention loser sessions defined may be
too low, or, if your links are APPC/LU6.1, CICS could be using the contention
loser sessions to satisfy allocates due to a lack of contention winner sessions.
Action: Consider making more contention winner sessions available with which
to satisfy the allocate requests. For IRC, increase the number of RECEIVE
sessions.
2. “Peak outstanding allocates” (fields A14ESTAM and A20ESTAM)
If the number of “Peak outstanding allocates” appears high, in relation to the
“Total number of allocates”, or the “Total specific allocate requests” for APPC
modegroups within a statistics reporting period, it could indicate that the total
number of sessions defined is too low, or that the remote system cannot cope
with the amount of work being sent to it.
Action: Consider making more sessions available with which to satisfy the
allocate requests, or reduce the number of allocates being made.
3. “Failed link allocates” (fields A14ESTAF and A20ESTAF)
If this value is high within a statistics reporting period, it indicates something
was wrong with the state of the connection. The most likely cause is that the
connection is released, out of service, or has a closed mode group.
Action: Examine the state of the connection that CICS is trying to allocate a
session on, and resolve any problem that is causing the allocates to fail.
To help you to resolve a connection failure, check the CSMT log for the same
period covered by the statistics for any indication of problems with the
connection that the statistics relate to.
It may also be worth considering writing a connection status monitoring
program, which can run in the background and regularly check connection
status and take remedial action to re-acquire a released connection. This may
help to minimize outage time caused by connections being unavailable for use.
See the CICS System Programming Reference manual for programming
information about the EXEC CICS INQUIRE|SET CONNECTION and the
EXEC CICS INQUIRE|SET MODENAME commands that you would use in
such a program.
4. “Failed allocates due to sessions in use” (fields A14ESTAO and A20ESTAO)
This value is incremented for allocates that have been rejected with a SYSBUSY
response because no sessions were immediately available, and the allocate
requests were made with the NOSUSPEND or NOQUEUE option specified.
This value is also incremented for allocates that have been queued and then
rejected with an AAL1 abend code; the AAL1 code indicates the allocate was
rejected because no session was available within the specified deadlock timeout
(DTIMOUT) time limit.
If the number of “Failed allocates due to sessions in use” is high, within a
statistics reporting period, it indicates that not enough sessions were
immediately available, or available within a reasonable time limit.
Action: Consider making more contention winner sessions available. This
should result in a reduction in the amount of bidding being carried out, and
in the subsequent usage of contention loser sessions. For IRC, increase the
number of sessions.
5. “Peak bids in progress” (fields A14EBHWM and A20EBHWM)
Ideally, these fields should be kept to zero. If either of these fields is high, it
indicates that CICS is having to perform a large amount of bidding for sessions.
Action: Consider making more contention winner sessions available, to satisfy
allocate requests.

ISC/IRC attach time entries


ISC/IRC Signon activity. If the number of “entries reused” in signon activity is low,
and the “entries timed out” value for signon activity is high, the value of the
USRDELAY system initialization parameter should be increased. The “average
reuse time between entries” gives some indication of the time that could be used
for the USRDELAY system initialization parameter.

ISC Persistent verification (PV) activity. If the number of “entries reused” in the PV
activity is low, and the “entries timed out” value is high, the PVDELAY system
initialization parameter should be increased. The “average reuse time between
entries” gives some indication of the time that could be used for the PVDELAY
system initialization parameter.
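The rule of thumb above, for either USRDELAY (signon activity) or PVDELAY (PV
activity), can be sketched as follows. The comparison used here is an assumption
for illustration; the statistics define no fixed threshold:

```python
def delay_tuning_hint(entries_reused, entries_timed_out, avg_reuse_time):
    """Apply the rule of thumb above to signon or PV activity figures.

    Many timed-out entries combined with few reuses suggest raising the
    USRDELAY or PVDELAY value towards the average reuse time between
    entries.
    """
    if entries_timed_out > entries_reused:
        return ("consider raising the delay towards "
                f"{avg_reuse_time} seconds (average reuse time)")
    return "delay looks adequate"

print(delay_tuning_hint(entries_reused=5, entries_timed_out=120,
                        avg_reuse_time=40))
```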

Note: If there are a lot of either signed-on or PV entries timed out, and not
many reused, your performance may be degraded because of the need to
make calls to an external security manager, such as RACF, for security
checking.

For more information, see the CICS statistics tables on page 410.

Shared temporary storage queue server statistics


Shared temporary storage queue server statistics are provided by the AXM page
pool management routines for the pools AXMPGANY and AXMPGLOW. For more
information, see “Appendix B. Shared temporary storage queue server statistics” on
| page 503.

|
| Coupling facility data tables server statistics
| Coupling facility data tables server statistics are provided by the AXM page pool
| management routines for the pools AXMPGANY and AXMPGLOW. For more
| information, see “Appendix C. Coupling facility data tables server statistics” on
| page 509.

|
| Named counter sequence number server statistics
| Named counter sequence number server statistics are provided by the AXM page
| pool management routines for the pools AXMPGANY and AXMPGLOW. For more
| information, see “Appendix D. Named counter sequence number server” on
| page 515.

Chapter 6. The CICS monitoring facility
This chapter is divided as follows:
v “Introduction to CICS monitoring”
v “The classes of monitoring data”
v “Event monitoring points” on page 69
v “The monitoring control table (MCT)” on page 71
v “Controlling CICS monitoring” on page 72
v “Processing of CICS monitoring facility output” on page 72
v “Performance implications” on page 73
v “Interpreting CICS monitoring” on page 73

Introduction to CICS monitoring


CICS monitoring collects data about the performance of all user- and
CICS-supplied transactions during online processing for later offline analysis. The
records produced by CICS monitoring are of the MVS System Management Facility
(SMF) type 110, and are written to an SMF data set.

Note: Statistics records and some journaling records are also written to the SMF
data set as type 110 records. You might find it particularly useful to process
the statistics records and the monitoring records together, because statistics
provide resource and system information that is complementary to the
transaction data produced by CICS monitoring. The contents of the statistics
fields, and the procedure for processing them, are described in
“Appendix A. CICS statistics tables” on page 345.

Monitoring data is useful both for performance tuning and for charging your users
for the resources they use.

The classes of monitoring data


Three types, or “classes”, of monitoring data may be collected. These are
performance class data, exception class data, and SYSEVENT data.

Performance class data


Performance class data is detailed transaction-level information, such as the
processor and elapsed time for a transaction, or the time spent waiting for I/O. At
least one performance record is written for each transaction that is being
monitored.

Performance class data provides detailed, resource-level data that can be used for
accounting, performance analysis, and capacity planning. This data contains
information relating to individual task resource usage, and is completed for each
task when the task terminates.

© Copyright IBM Corp. 1983, 1999 65


You can enable performance-class monitoring by coding MNPER=ON (together
with MN=ON) in the system initialization table (SIT). Alternatively, you can use
either the CEMT command (CEMT SET MONITOR ON PERF) or EXEC CICS SET
MONITOR STATUS(ON) PERFCLASS(PERF).

This information could be used periodically to calculate the charges applicable to
different tasks. If you want to set up algorithms for charging users for resources
used by them, you could use this class of data collection to update the charging
information in your organization’s accounting programs. (For previous versions of
CICS, we did not recommend charging primarily on exact resource usage, because
of the overheads involved in getting these figures.)
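A charging algorithm driven by performance class data can be as simple as a
weighted sum of resource counts. The field names and rates in this Python sketch
are invented for illustration; a real scheme would map actual CMF field values and
site-specific rates:

```python
def charge(perf_record, cpu_rate_per_sec, io_rate_per_call):
    """Toy charging algorithm driven by performance class fields.

    `perf_record` is an illustrative dict of values extracted from a
    performance class record; the keys are not real CMF field names.
    """
    return (perf_record["cpu_seconds"] * cpu_rate_per_sec
            + perf_record["file_requests"] * io_rate_per_call)

print(charge({"cpu_seconds": 0.25, "file_requests": 12}, 0.04, 0.001))
```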

Exception class data


| Exception class monitoring data is information on CICS resource shortages that are
| suffered by a transaction. This data highlights possible problems in CICS system
| operation and is intended to help you identify system constraints that affect the
| performance of your transactions. There is one exception record for each type of
| exception condition. The exception records are produced and written to SMF as
| soon as the resource shortage encountered by the transaction has been resolved.
| Exception records are produced for each of the following resource shortages:
v Wait for storage in the CDSA
v Wait for storage in the UDSA
v Wait for storage in the SDSA
v Wait for storage in the RDSA
v Wait for storage in the ECDSA
v Wait for storage in the EUDSA
v Wait for storage in the ESDSA
v Wait for storage in the ERDSA
v Wait for auxiliary temporary storage
v Wait for auxiliary temporary storage string
v Wait for auxiliary temporary storage buffer
| v Wait for coupling facility data tables locking (request) slot
| v Wait for coupling facility data tables non-locking (request) slot
v Wait for file buffer
v Wait for LSRPOOL string.
v Wait for file string

| If the monitoring performance class is also being recorded, the performance class
| record for the transaction includes the total elapsed time the transaction was
| delayed by a CICS system resource shortage. This is measured by the exception
| class and the number of exceptions encountered by the transaction. The exception
| class records can be linked to the performance class records either by the
| transaction sequence number or by the network unit-of-work id. For more
| information on the exception class records, see “Exception class data” on page 107.
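The linkage described above, by transaction sequence number or by network
unit-of-work id, amounts to a simple join over the decoded records. The dict
layout in this Python sketch is illustrative and is not the SMF type 110 record
format:

```python
def link_exceptions_to_performance(perf_records, exc_records):
    """Group exception records with their performance class record.

    Both record types are illustrative dicts carrying a 'seqno' key
    (transaction sequence number); the network unit-of-work id could be
    used as the join key in the same way.
    """
    by_seqno = {p["seqno"]: dict(p, exceptions=[]) for p in perf_records}
    for e in exc_records:
        if e["seqno"] in by_seqno:
            by_seqno[e["seqno"]]["exceptions"].append(e["type"])
    return by_seqno

perf = [{"seqno": 41, "tran": "ORD1"}, {"seqno": 42, "tran": "ORD2"}]
exc = [{"seqno": 42, "type": "wait for auxiliary temporary storage buffer"}]
linked = link_exceptions_to_performance(perf, exc)
print(linked[42]["exceptions"])
```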

You can enable exception-class monitoring by coding the MNEXC=ON (together
with MN=ON) system initialization parameters. Alternatively, you can use either
the CEMT command (CEMT SET MONITOR ON EXCEPT) or EXEC CICS SET
MONITOR STATUS(ON) EXCEPTCLASS(EXCEPT).

The SYSEVENT class of monitoring data
SYSEVENT data is a special kind of transaction timing information.

CICS invokes the MVS System Resource Manager (SRM) macro SYSEVENT at the
end of every transaction to record the elapsed time of the transaction.

You can enable SYSEVENT class monitoring by coding the MNEVE=ON (together
with MN=ON) system initialization parameters. Alternatively, you can use either
the CEMT command (CEMT SET MONITOR ON EVENT) or EXEC CICS SET
MONITOR STATUS(ON) EVENTCLASS(EVENT).

If the SYSEVENT option is used, at the end of each transaction CICS issues a Type
55 (X'37') SYSEVENT macro. This records each transaction ID, the associated
terminal ID, and the elapsed time duration of each transaction. This information is
collected by the SRM and, depending on the Resource Measurement Facility
(RMF) options set, the output can be written to SMF data sets.

CICS Monitoring Facility (CMF) and the MVS workload manager

If you are running CICS with the MVS workload manager in compatibility mode,
CICS supports the SYSEVENT class of monitoring by default, regardless of the
status of the monitoring options. You do not need to set monitoring and sysevent
monitoring on (with MN=ON and MNEVE=ON respectively).

If you are running CICS with the MVS workload manager in goal mode, the
MVS workload manager provides transaction activity reporting, which replaces
the SYSEVENT class of monitoring.

Using CICS monitoring SYSEVENT information with RMF


This section explains how to use the CICS monitoring facility with the Resource
Measurement Facility (RMF) to obtain transaction rate reporting.

CICS usage of RMF transaction reporting


CICS monitoring facility with RMF provides a very useful tool for performing
day-to-day monitoring of CICS transaction rates and response times.

The objective of using the CICS monitoring facility with RMF is to enable
transaction rates and internal response times to be monitored without incurring the
overhead of running the full CICS monitoring facility and associated reporting.
This approach may be useful when only transaction statistics are required, rather
than the very detailed information that CICS monitoring facility produces. An
example of this is the monitoring of a production system where the minimum
overhead is required.

CICS monitoring facility use of SYSEVENT


The CICS monitoring facility issues a SYSEVENT macro that gives to MVS/SRM
the following information:
v The time at which the user-task was attached.

v The subsystem identification. This is derived from the first four characters of the
CICS generic APPLID or from the four character name specified on the
MNSUBSYS parameter if it is specified in the system initialization table (SIT).
v The transaction identifier of the task. This is the name of the CICS RDO
transaction in the CICS program control table. This can be the name of a CICS
system transaction, such as CSMI, CSNC, or CSPG.
v The user identifier.
v The specific APPLID of the CICS region. This is derived from the system
initialization parameter, APPLID. It is expressed as the full 8 bytes of the
transaction class parameter.

MVS IEAICS member


An IEAICS member needs to be coded and placed in SYS1.PARMLIB on the MVS
system. For further information about this, see the OS/390 MVS Initialization and
Tuning Reference manual. Reporting groups are assigned to the CICS system as a
whole and to individual transactions.

How CMF SYSEVENT data is mapped to the IEAICSxx member of SYS1.PARMLIB

Table 4. How CMF SYSEVENT data is mapped to IEAICSxx
SYSEVENT macro IEAICSxx member CICS monitoring facility data
Transaction start time N/A Time at which user-task attached
Subsystem name SUBSYS= First 4 characters of the Generic APPLID
Transaction name TRXNAME= Transaction ID of task
User identification USERID= User ID
Transaction class TRXCLASS= The specific APPLID of the CICS region

For more information about how to use RMF, refer to the MVS Resource
Measurement Facility (RMF), Version 4.1.1 - Monitor I & II Reference and Users Guide.
If records are directed to SMF, refer to the OS/390 MVS System Management
Facilities (SMF) manual. The following example shows the additional parameters
that you need to add to your IEAICS member for two MRO CICS systems:
SUBSYS=ACIC,RPGN=100 /* CICS SYSTEM ACIC HAS REPORTING */
TRXNAME=CEMT,RPGN=101 /* GROUP OF 100 AND THERE ARE */
TRXNAME=USER,RPGN=102 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=103 /* SEPARATE TRANSACTIONS */
SUBSYS=BCIC,RPGN=200 /* CICS SYSTEM BCIC HAS REPORTING */
TRXNAME=CEMT,RPGN=201 /* GROUP OF 200 AND THERE ARE */
TRXNAME=USER,RPGN=202 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=203 /* SEPARATE TRANSACTIONS */
Notes:
1. The reporting group (number 100) assigned to the ACIC subsystem reports on
all transactions in that system.
2. RMF reports on an individual transaction by name only if it is assigned a
unique reporting group. If multiple transactions are defined with one reporting
group, the name field is left blank in the RMF reports.
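The way a reporting group is resolved from the SYSEVENT data and the IEAICSxx
member can be illustrated as follows. This Python sketch models only the matching
implied by the example member above (TRXNAME entries override the subsystem's
group); it is not RMF's actual implementation:

```python
def reporting_group(subsys, trxname, ics_rules):
    """Resolve a reporting group as the IEAICSxx example implies.

    ics_rules maps a SUBSYS name to (default_rpgn, {trxname: rpgn}).
    A transaction with its own TRXNAME entry gets that group; any other
    transaction under the subsystem falls back to the subsystem's group.
    """
    default_rpgn, per_trx = ics_rules[subsys]
    return per_trx.get(trxname, default_rpgn)

# The IEAICSxx example above, expressed as data:
rules = {
    "ACIC": (100, {"CEMT": 101, "USER": 102, "CSMI": 103}),
    "BCIC": (200, {"CEMT": 201, "USER": 202, "CSMI": 203}),
}
print(reporting_group("ACIC", "CSMI", rules))   # 103
print(reporting_group("BCIC", "CSPG", rules))   # falls back to 200
```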

ERBRMF member for Monitor I session
This member defines the options that are used on the RMF Monitor I background
session. This session does not include transaction reporting as used by CICS, but a
Monitor I session has first to be active. A WKLD has to be defined to allow TRX
reporting to be activated.

ERBRMF member for Monitor II session


This member defines the options that are used on the RMF Monitor II background
session. This session performs transaction reporting as used by CICS. TRX defaults
to TRX(ALLPGN) which reports on all transactions. Individual transactions can be
named if desired.

RMF operations
An RMF job has to be started, and this includes the Monitor I session. The RMF job
should be started before initializing CICS. The RMF Monitor II session is started by
the command F RMF,S aa,MEMBER(xx) where ‘aa’ indicates alphabetic characters
| and ‘xx’ indicates alphanumeric characters.
|
| Using the CICS monitoring facility with Tivoli Performance Reporter for
| OS/390
| Tivoli Performance Reporter for OS/390 assists you in performance management
| and service-level management of a number of IBM products. The CICS
| Performance feature used by the Tivoli Performance Reporter provides reports for
| your use in analyzing the performance of CICS. See “Chapter 7. Tivoli Performance
| Reporter for OS/390” on page 113 for more information.

Event monitoring points


Product-sensitive programming interface

CICS monitoring data is collected at system-defined event monitoring points
(EMPs) in the CICS code. Although you cannot relocate these monitoring points,
you can choose which classes of monitoring data you want to be collected.
Programming information about CICS monitoring is in the CICS Customization
Guide.

If you want to gather more performance class data than is provided at the
system-defined event monitoring points, you can code additional EMPs in your
application programs. At these points, you can add or change up to 16384 bytes of
user data in each performance record. Up to this maximum of 16384 bytes you can
have, for each ENTRYNAME qualifier, any combination of the following:
v Between 0 and 256 counters
v Between 0 and 256 clocks
v A single 8192-byte character string.
You could use these additional EMPs to count the number of times a certain event
occurs, or to time the interval between two events. If the performance class was
active when a transaction was started, but was not active when a user EMP was
issued, the operations defined in that user EMP would still execute on that
transaction’s monitoring area. The DELIVER option would result in a loss of data
at this point, because the generated performance record cannot be output while the
performance class is not active. If the performance class was not active when a
transaction was started, the user EMP would have no effect.

User EMPs can use the EXEC CICS MONITOR command. For programming
information about this command, refer to the CICS Application Programming
Reference.

Additional EMPs are provided in some IBM program products, such as DL/I.
From CICS’s point of view, these are like any other user-defined EMP. EMPs in
user applications and in IBM program products are identified by a decimal
number. The numbers 1 through 199 are available for EMPs in user applications,
and the numbers from 200 through 255 are for use in IBM program products. The
numbers can be qualified with an ‘entryname’, so that you can use each number
more than once. For example, PROGA.1, PROGB.1, and PROGC.1, identify three
different EMPs because they have different entrynames.

For each user-defined EMP there must be a corresponding monitoring control table
(MCT) entry, which has the same identification number and entryname as the EMP
that it describes.

You do not have to assign entrynames and numbers to system-defined EMPs, and
you do not have to code MCT entries for them.

Here are some ideas about how you might make use of the CICS and user fields
provided with the CICS monitoring facility:
v If you want to time how long it takes to do a table lookup routine within an
application, code an EMP with, say, ID=50 just before the table lookup routine
and an EMP with ID=51 just after the routine. The system programmer codes a
TYPE=EMP operand in the MCT for ID=50 to start user clock 1. You also code a
TYPE=EMP operand for ID=51 to stop user clock 1. The application executes.
When EMP 50 is processed, user clock 1 is started. When EMP 51 is processed,
the clock is stopped.
v One user field could be used to accumulate an installation accounting unit. For
example, you might count different amounts for different types of transaction.
Or, in a browsing application, you might count 1 unit for each record scanned
and not selected, and 3 for each record selected.
You can also treat the fullword count fields as 32-bit flag fields to indicate
special situations, for example, out-of-line situations in the applications, operator
errors, and so on. CICS includes facilities to turn individual bits or groups of
bits on or off in these counts.
v The performance clocks can be used for accumulating the time taken for I/O,
DL/I scheduling, and so on. The time recorded usually includes any waiting for
the transaction to regain control after the requested operation has completed.
Because the
periods are counted as well as added, you can get the average time waiting for
I/O as well as the total. If you want to highlight an unusually long individual
case, set a flag on in a user count as explained above.
v One use of the performance character string is for systems in which one
transaction ID is used for widely differing functions. The application can enter a
subsidiary ID into the string to indicate which particular variant of the
transaction applies in each case.
Some users have a single transaction ID so that all user input is routed through
a common prologue program for security checking, for example. In this case, it

70 CICS TS for OS/390: CICS Performance Guide


is very easy to record the subtransaction identifier during this prologue.
(However, it is equally possible to route transactions with different identifiers to
the same program, in which case this technique is not necessary.)

End of Product-sensitive programming interface

The monitoring control table (MCT)


Product-sensitive programming interface

You use the monitoring control table (MCT):


v To tell CICS about the EMPs that you have coded in your application programs
and about the data that is to be collected at these points
v To tell CICS that you want certain system-defined performance data not to be
recorded during a particular CICS run.

DFHMCT TYPE=EMP
There must be a DFHMCT TYPE=EMP macro definition for every user-coded EMP.
This macro has an ID operand, whose value must be made up of the
ENTRYNAME and POINT values specified on the EXEC CICS MONITOR
command. The PERFORM operand of the DFHMCT TYPE=EMP macro tells CICS
which user count fields, user clocks, and character values to expect at the
identified user EMP, and what operations to perform on them.

DFHMCT TYPE=RECORD
The DFHMCT TYPE=RECORD macro allows you to exclude specific system-defined
performance data from a CICS run. (Each performance monitoring record is
| approximately 1288 bytes long, without taking into account any user data that may
be added, or any excluded fields.)

Each field of the performance data that is gathered at the system-defined EMPs
belongs to a group of fields that has a group identifier. Each performance data
field also has its own numeric identifier that is unique within the group identifier.
For example, the transaction sequence number field in a performance record
belongs to the group DFHTASK, and has the numeric identifier ‘031’. Using these
identifiers, you can exclude specific fields or groups of fields, and reduce the size
of the performance records.

Full details of the MCT are provided in the CICS Resource Definition Guide, and
examples of MCT coding are included with the programming information in the
CICS Customization Guide.

Four sample monitoring control tables are also provided in
CICSTS13.CICS.SDFHSAMP:
v For terminal-owning regions (TORs) - DFHMCTT$
v For application-owning regions (AORs) - DFHMCTA$
v For application-owning regions (AORs) with DBCTL - DFHMCTD$
v For file-owning regions (FORs) - DFHMCTF$.

These samples show how to use the EXCLUDE and INCLUDE operands to reduce
the size of the performance class record in order to reduce the volume of data

Chapter 6. The CICS monitoring facility 71


written by CICS to SMF.
End of Product-sensitive programming interface

Controlling CICS monitoring


Product-sensitive programming interface

When CICS is initialized, you switch the monitoring facility on by specifying the
system initialization parameter MN=ON. MN=OFF is the default setting. You can
select the classes of monitoring data you want to be collected using the MNPER,
MNEXC, and MNEVE system initialization parameters. You can request the
collection of any combination of performance class data, exception class data, and
SYSEVENT data. The class settings can be changed whether monitoring itself is
ON or OFF. For guidance about system initialization parameters, refer to the CICS
System Definition Guide.

When CICS is running, you can control the monitoring facility dynamically. Just as
at CICS initialization, you can switch monitoring on or off, and you can change the
classes of monitoring data that are being collected. There are two ways of doing
this:
1. You can use the master terminal CEMT INQ|SET MONITOR command, which
is described in the CICS Supplied Transactions manual.
2. You can use the EXEC CICS INQUIRE and SET MONITOR commands;
programming information about these is in the CICS System Programming
Reference.

If you activate a class of monitoring data in the middle of a run, the data for that
class becomes available only for transactions that are started thereafter. You cannot
change the classes of monitoring data collected for a transaction after it has started.
It is often preferable, particularly for long-running transactions, to start all classes
of monitoring data at CICS initialization.
End of Product-sensitive programming interface

Processing of CICS monitoring facility output


Product-sensitive programming interface

One way to process the output from the CICS monitoring facility is with Tivoli
Performance Reporter for OS/390. See “Chapter 7. Tivoli Performance Reporter for
OS/390” on page 113 for more information.

Or, instead, you may want to write your own application program to process
output from the CICS monitoring facility. The CICS Customization Guide gives
programming information about the format of this output.

CICS provides a sample program, DFH$MOLS, which reads, formats, and prints
monitoring data. It is intended as a sample program that you can use as a skeleton
if you need to write your own program to analyze the data set. Comments within
the program may help you if you want to do your own processing of CICS
monitoring facility output. See the CICS Operations and Utilities Guide for further
information on the DFH$MOLS program.
End of Product-sensitive programming interface



Performance implications
For information on the performance implications of using the CICS monitoring
facility, see “CICS monitoring facility” on page 331.

Interpreting CICS monitoring


Product-sensitive programming interface

All of the exception class data and all of the system-defined performance class data
that can be produced by CICS monitoring is listed below. Each of the data fields is
presented as a field description, followed by an explanation of the contents. The
field description has the format shown in Figure 4, which is taken from the
performance data group DFHTASK.

001 (TYPE-C, 'TRAN', 4 BYTES)


| | | |
| | | Length of the field (as re-
| | | presented by CMODLENG in the
| | | dictionary entry).
| | |
| | Informal name for the field, as used,
| | perhaps, in column headings when the
| | monitoring output is postprocessed
| | (CMODHEAD of the dictionary entry).
| |
| Data type, which may be one of the following:
| A - a 32-bit count, a 64-bit count, or a string of 64-bit counts
| C - a byte string
| P - a packed decimal value
| S - a clock comprising a 32-bit accumulation
| of 16-microsecond units followed by an
| 8-bit flag followed by a 24-bit count
| (modulo-16 777 216) of the number of
| intervals included in the accumulation.
| T - a time stamp derived directly from the
| output of an STCK instruction.
| (CMODTYPE of the dictionary entry)
|
Field identifier by which the field may be individually
excluded or included during MCT preparation (CMODIDNT of
the dictionary entry).

Figure 4. Format of the descriptions of the data fields

Note: References in Figure 4 to the associated dictionary entries apply only to the
performance class data descriptions. Exception class data is not defined in
the dictionary record.

Clocks and time stamps


In the descriptions that follow, the term clock is distinguished from the term time
stamp.



A clock is a 32-bit value, expressed in units of 16 microseconds, accumulated
during one or more measurement periods. The 32-bit value is followed by 8
reserved bits, which are in turn followed by a 24-bit value indicating the number
of such periods.

Neither the 32-bit timer component of a clock nor its 24-bit period count are
protected against wraparound. The timer capacity is about 18 hours, and the
period count runs modulo 16 777 216.

The 8 reserved bits have the following significance:


Bits 0, 1, 2 and 3
Used for online control of the clock when it is running, and should always
be zeros on output.
Bits 4 and 7
Not used.
Bits 5 and 6
Used to indicate, when set to 1, that the clock has suffered at least one
out-of-phase start (bit 5) or stop (bit 6).

A time stamp is an 8-byte copy of the output of an STCK instruction.

Note: All times produced in the offline reports are in GMT (Greenwich Mean
Time) not local time. Times produced by online reporting can be expressed
in either GMT or local time.
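The clock layout described above can be unpacked with a short routine. The following Python sketch is purely illustrative and not part of CICS; it follows the layout given in this section, and the flag-bit masks assume bit 0 is the leftmost bit of the flag byte.

```python
import struct

def decode_cmf_clock(clock8):
    """Unpack an 8-byte CMF clock field into its component parts.

    Layout, per the description above: a 32-bit timer accumulated in
    16-microsecond units, an 8-bit flag byte, then a 24-bit count
    (modulo 16 777 216) of the measurement periods included.
    """
    timer_units, flags = struct.unpack(">IB", clock8[:5])
    period_count = int.from_bytes(clock8[5:8], "big")
    elapsed_us = timer_units * 16
    return {
        "elapsed_microseconds": elapsed_us,
        "out_of_phase_start": bool(flags & 0x04),  # bit 5, bit 0 leftmost
        "out_of_phase_stop": bool(flags & 0x02),   # bit 6
        "period_count": period_count,
        # Because the periods are counted as well as timed, an average
        # per period can be derived directly:
        "average_microseconds": elapsed_us / period_count if period_count else 0,
    }

# A clock that accumulated one second (62 500 x 16 microseconds) over 10 periods:
sample = struct.pack(">IB", 62500, 0) + (10).to_bytes(3, "big")
print(decode_cmf_clock(sample)["elapsed_microseconds"])  # 1000000
```

The same routine illustrates why the period count matters: dividing the accumulated time by the count gives the average duration of each measured interval.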

Performance class data


The performance class data is described below in order of group name. The group
name is always in field CMODNAME of the dictionary entry.

A user task can be represented by one or more performance class monitoring
records, depending on whether the MCT event monitoring option DELIVER or the
system initialization parameters MNCONV=YES or MNSYNC=YES have been
selected. In the descriptions that follow, the term “user task” means “that part or
whole of a transaction that is represented by a performance class record”, unless
the description states otherwise.

A discussion about transaction timing fields


The CMF performance class record provides detailed timing information for each
transaction as it is processed by CICS. A transaction can be represented by one or
more performance class records depending on the monitoring options selected. The
key transaction timing data fields are:
v The Transaction Start time and Stop time represent the start and end of a
transaction measurement interval. This is normally the period between
transaction attach and detach but the performance class record could represent a
part of a transaction depending on the monitoring options selected. The
"Transaction Response Time" can be calculated by subtracting the transaction
start time from the stop time.
v The Transaction Dispatch time is the time the transaction was dispatched.
v The Transaction Dispatch Wait time is the time the transaction was suspended
and waiting for redispatch.



v The Transaction CPU time is the portion of Dispatch time when the task is using
processor cycles
v The Transaction Suspend time is the total time the task was suspended and
includes:
– All task suspend (wait) time, which includes:
- The wait time for redispatch (dispatch wait)
- The wait time for first dispatch (first dispatch delay)
- The total I/O wait and other wait times.
v The First Dispatch Delay is then further broken down into:
– First Dispatch Delay due to TRANCLASS limits
– First Dispatch Delay due to MXT limits.

The CMF performance class record also provides a more detailed breakdown of the
transaction suspend (wait) time into separate data fields. These include:
v Terminal I/O wait time
v File I/O wait time
v RLS File I/O wait time
v Journal I/O wait time
v Temporary Storage I/O wait time
| v Shared Temporary Storage I/O wait time
v Inter-Region I/O wait time
v Transient Data I/O wait time
v LU 6.1 I/O wait time
v LU 6.2 I/O wait time
v FEPI suspend time
| v Local ENQ delay time
| v Global ENQ delay time
| v RRMS/MVS Indoubt wait time
| v Socket I/O wait time
v RMI suspend time
v Lock Manager delay time
v EXEC CICS WAIT EXTERNAL wait time
v EXEC CICS WAITCICS and WAIT EVENT wait time
v Interval Control delay time
v ″Dispatchable Wait″ wait time
| v IMS(DBCTL) wait time
| v DB2 ready queue wait time
| v DB2 connection wait time
| v DB2 wait time
| v CFDT server syncpoint wait time
| v Syncpoint delay time
| v CICS BTS run process/activity synchronous wait time
| v CICS MAXOPENTCBS delay time
| v JVM suspend time

A note about response time


You can calculate the internal CICS response time by subtracting performance data
field 005 (start time) from performance data field 006 (stop time).

Figure 5 on page 76 shows the relationship of dispatch time, suspend time, and
CPU time with the response time.



Response Time

S S
T T
A Suspend Time Dispatch Time O
R P
T
First T
T Dispatch Dispatch CPU Time I
I Delay Wait M
M E
E

Figure 5. Response time relationships
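The subtraction described in this note can be sketched in Python. This fragment is illustrative only; it assumes the convention, stated in the storage-occupancy note later in this chapter, that the middle 4 bytes of each 8-byte time stamp are used and that one unit of that value represents 16 microseconds.

```python
def response_time_microseconds(start_stck, stop_stck):
    """Internal response time from the START (field 005) and STOP
    (field 006) time stamps.

    Uses the middle 4 bytes of each 8-byte STCK value; one unit of
    that 32-bit number represents 16 microseconds.
    """
    start = int.from_bytes(start_stck[2:6], "big")
    stop = int.from_bytes(stop_stck[2:6], "big")
    return (stop - start) * 16

# Two stamps 62 500 units apart give a 1-second response time:
start = bytes(2) + (1000).to_bytes(4, "big") + bytes(2)
stop = bytes(2) + (63500).to_bytes(4, "big") + bytes(2)
print(response_time_microseconds(start, stop))  # 1000000
```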

A note about wait (suspend) times


| The performance data fields 009, 010, 011, 063, 100, 101, 123, 128, 129, 133, 134, 156,
| 171, 174, 176, 177, 178, 181, 182, 183, 184, 186, 187, 188, 189, 191, 195, 196, 241, 250,
| and 254 all record the elapsed time spent waiting for a particular type of I/O
operation. For example, field 009 records the elapsed time waiting for terminal
I/O. The elapsed time includes not only that time during which the I/O operation
is actually taking place, but also the time during which the access method is
completing the outstanding event control block, and the time subsequent to that
until the waiting CICS transaction is redispatched. See Table 5 on page 77 for the
types of wait (suspend) fields. Figure 6 on page 78 shows an example of the
relationship between a typical transaction wait time field, and the transaction’s
suspend time, dispatch time, CPU and dispatch wait time fields.



Table 5. Performance class wait (suspend) fields
Field-Id Group Name Description
009 DFHTERM TC I/O wait time
010 DFHJOUR JC I/O wait time
011 DFHTEMP TS I/O wait time
063 DFHFILE FC I/O wait time
100 DFHTERM IR I/O wait time
101 DFHDEST TD I/O wait time
| 123 DFHTASK Global ENQ delay time
| 128 DFHTASK Lock Manager delay time
| 129 DFHTASK Local ENQ delay time
133 DFHTERM TC I/O wait time - LU6.1
134 DFHTERM TC I/O wait time - LU6.2
156 DFHFEPI FEPI Suspend time
171 DFHTASK Resource manager interface (RMI) suspend time
174 DFHFILE RLS FC I/O wait time
| 176 DFHFILE Coupling Facility data tables server I/O wait time
| 177 DFHSYNC Coupling Facility data tables server syncpoint and
| resynchronization wait time
| 178 DFHTEMP Shared TS I/O wait time
| 181 DFHTASK EXEC CICS WAIT EXTERNAL wait time
| 182 DFHTASK EXEC CICS WAITCICS and WAIT EVENT wait time
| 183 DFHTASK Interval Control delay time
| 184 DFHTASK ″Dispatchable Wait″ wait time
| 186 DFHDATA IMS (DBCTL) wait time
| 187 DFHDATA DB2 ready queue wait time
| 188 DFHDATA DB2 connection time
| 189 DFHDATA DB2 wait time
| 191 DFHTASK RRMS/MVS wait time
| 195 DFHTASK CICS BTS run process/activity synchronous wait time
| 196 DFHSYNC Syncpoint delay time
| 241 DFHSOCK Socket I/O wait time
| 249 DFHTASK User task QR TCB wait-for-dispatch time
| 250 DFHTASK CICS MAXOPENTCB delay time
| 254 DFHTASK Java Virtual Machine (JVM) suspend time



Wait Times

Dispatch Dispatch
and and
Suspend Time
CPU CPU
Time Time
Dispatch
Wait

Figure 6. Wait (suspend) time relationships

Improvements to the CMF suspend time and wait time measurements allow you to
perform various calculations on the suspend time accurately. For example, the
"Total I/O Wait Time" can be calculated as follows:

Total I/O wait time =


(Terminal control I/O wait +
Temporary storage I/O wait +
| Shared temporary storage I/O wait +
Transient data I/O wait +
| Journal (MVS logger) I/O wait +
File control I/O wait +
| RLS file I/O wait +
| CF data table I/O wait +
| Socket I/O wait +
Interregion (MRO) I/O wait +
LU 6.1 TC I/O wait +
LU 6.2 TC I/O wait +
FEPI I/O wait)

The "other wait time" (that is, time spent suspended for reasons other than I/O)
can be calculated as follows:

Total other wait time =


(First dispatch delay +
| Local ENQ delay +
| Global ENQ delay +
Interval control delay +
Lock manager delay +
Wait external wait +
EXEC CICS WAITCICS and EXEC CICS WAIT EVENT wait +
| CICS BTS run synchronous wait +
| CFDT server synchronous wait +
Syncpoint delay time +



CICS MAXOPENTCBS delay +
RRMS/MVS wait +
RMI suspend +
JVM suspend +
″Dispatchable Wait″ wait)

Note: The First Dispatch Delay performance class data field includes the MXT and
TRANCLASS First Dispatch Delay fields.

| The Uncaptured wait time can be calculated as follows:

| Uncaptured wait time =


| (Suspend − (total I/O wait time + total other wait time))
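The three roll-ups above can be sketched as simple sums. In this illustrative Python fragment the dictionary keys are descriptive names chosen for this example, not CMF field names; the field-id-to-name mapping would come from the dictionary record when processing real monitoring data.

```python
def total_io_wait(waits):
    """Sum the I/O wait components listed above.

    `waits` maps illustrative component names to elapsed times
    (all in the same unit, for example microseconds).
    """
    io_components = [
        "terminal", "temporary_storage", "shared_temporary_storage",
        "transient_data", "journal", "file", "rls_file",
        "cf_data_table", "socket", "interregion", "lu61", "lu62", "fepi",
    ]
    return sum(waits.get(name, 0) for name in io_components)

def uncaptured_wait(suspend_time, total_io, total_other):
    """Uncaptured wait = suspend - (total I/O wait + total other wait)."""
    return suspend_time - (total_io + total_other)

waits = {"terminal": 400, "file": 100, "journal": 50}
io = total_io_wait(waits)                # 550
print(uncaptured_wait(1000, io, 300))    # 150
```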

| In addition to the transaction "Suspend (wait) Time" breakdown, the CMF
performance class data provides several other important transaction timing
measurements. They include:
v The Program load time is the program fetch time (dispatch time) for programs
invoked by the transaction
v The Exception wait time is the accumulated time from the exception conditions
as measured by the CMF exception class records. For more information, see
“Exception class data” on page 107.
v The RMI elapsed time is the elapsed time the transaction spent in all Resource
Managers invoked by the transaction using the Resource Manager Interface
(RMI)
| v The JVM elapsed time is the elapsed time the transaction spent in the Java
| Virtual Machine (JVM) for the Java programs invoked by the transaction.
v The Syncpoint elapsed time is the elapsed time the transaction spent processing
a syncpoint.

A note about program load time


Figure 7 shows the relationship between the program load time (field 115) and
the dispatch time and the suspend time (fields 7 and 14).

Response Time

S S
T T
A Suspend Time Dispatch Time O
R P
T
First T
T Dispatch Dispatch CPU Time I
I Wait Wait M
M E
E
PCload
Time

Figure 7. Program load time



A note about RMI elapsed and suspend time

RMI Elapsed Time

Dispatch and Dispatch and


RMI Suspend Time
CPU Time (Suspend) CPU Time

Dispatch
Wait

Figure 8. RMI elapsed and suspend time

Figure 8 shows the relationship between the RMI elapsed time and the suspend
| time (fields 170 and 171).

| Note: In CICS Transaction Server for OS/390 Release 3, or later, the DB2 wait, the
| DB2 connection wait, and the DB2 readyq wait time fields as well as the
| IMS wait time field are included in the RMI suspend time.

| JVM elapsed time and suspend time


| The JVM elapsed and suspend time fields provide an insight into the amount of
| time that a transaction spends in a Java Virtual Machine (JVM).

| Care must be taken when using the JVM elapsed time (group name DFHTASK,
| field id: 253) and JVM suspend time (group name DFHTASK, field id: 254) fields in
| any calculation with other CMF timing fields. This is because of the likelihood of
| double accounting other CMF timing fields in the performance class record within
| the JVM time fields. For example, if a Java application program invoked by a
| transaction issues a read file (non-RLS) request using the Java API for CICS (JCICS)
| classes, the file I/O wait time will be included in the file I/O wait time
| field (group name DFHFILE, field id: 063), in the transaction suspend time field
| (group name DFHTASK, field id: 014), and in the JVM suspend time field.

| The JVM elapsed and suspend time fields are best evaluated from the overall
| transaction performance view and their relationship with the transaction response
| time, transaction dispatch time, and transaction suspend time. The performance
| class data also includes the amount of processor (CPU) time that a transaction used
| whilst in a JVM on a CICS J8 mode TCB in the J8CPUT field (group name:
| DFHTASK, field id: 260).

| Note: The number of Java API for CICS (JCICS) requests issued by the user task is
| included in the CICS OO foundation class request count field (group name:
| DFHCICS, field id: 025).



| A note about syncpoint elapsed time

Syncpoint Elapsed Time

Dispatch Dispatch
and Dispatch and
CPU and CPU
Suspend Time CPU Suspend Time
Time Time Time

Dispatch Dispatch
Wait Wait

Figure 9. Syncpoint elapsed time

Figure 9 shows the relationship between the syncpoint elapsed time (field 173) and
the suspend time (field 14).

A note about storage occupancy counts


An occupancy count measures the area under the curve of user-task storage in use
against elapsed time. The unit of measure is the “byte-unit”, where the “unit” is
equal to 1024 microseconds, or 1.024 milliseconds. Where ms is milliseconds, a user
task occupying, for example, 256 bytes for 125 milliseconds, is measured as
follows:

125 ms / 1.024 ms = 122 units; 122 units * 256 bytes = 31 232 byte-units.

Note: All references to “Start time” and “Stop time” in the calculations below refer
to the middle 4 bytes of each 8 byte start/stop time field. Bit 51 of Start time
or Stop time represents a unit of 16 microseconds.

To calculate response time and convert into microsecond units:


Response = ((Stop time − Start time) * 16)
To calculate number of 1024 microsecond “units”:
Units = (Response / 1024)
or
Units = ((Stop time − Start time) / 64)
To calculate the average user-task storage used from the storage
occupancy count:
| Average user-task storage used = (Storage Occupancy / Units)

To calculate units per second:


Units Per Second = (1 000 000 / 1024) = 976.5625
To calculate the response time in seconds:
Response time = (((Stop time − Start time) * 16) / 1 000 000)
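The calculations above can be sketched as follows. This illustrative Python fragment reproduces the formulas in this note and checks them against the worked example of a task holding 256 bytes for 125 milliseconds; it is not CICS code.

```python
def units(start_mid, stop_mid):
    """Number of 1024-microsecond units between two start/stop values.

    `start_mid` and `stop_mid` are the middle 4 bytes of the 8-byte
    start/stop time stamps, where one unit represents 16 microseconds.
    """
    return ((stop_mid - start_mid) * 16) // 1024

def average_user_storage(storage_occupancy, unit_count):
    """Average user-task storage from the byte-unit occupancy count."""
    return storage_occupancy / unit_count

# The worked example above: 256 bytes held for 125 milliseconds.
unit_count = 125000 // 1024            # 122 units
occupancy = unit_count * 256           # 31 232 byte-units
print(average_user_storage(occupancy, unit_count))  # 256.0
```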

During the life of a user task, CICS measures, calculates, and accumulates the
storage occupancy at the following points:
v Before GETMAIN increases current user-storage values
v Before FREEMAIN reduces current user-storage values
v Just before the performance record is moved to the buffer.



Response Time

S S
T T
A O
R P
T
.... .... .... ................... .............. ......... ......... ......... T
T I
I . . . M
M . . . E
E . . . . . . . .
. . . . . . . .
. . . . . . . .

G F G F F G F G

G = GETMAIN
F = FREEMAIN
Dotted line = Average storage occupancy

Figure 10. Storage occupancy

A note about program storage


The level of program storage currently in use is incremented at LOAD, LINK, and
XCTL events by the size (in bytes) of the referenced program, and is decremented
at RELEASE or RETURN events.

Note: On an XCTL event, the program storage currently in use is also decremented
by the size of the program issuing the XCTL, because the program is no
longer required.

Figure 11 on page 83 shows the relationships between the “high-water mark” data
fields that contain the maximum amounts of program storage in use by the user
task. Field PCSTGHWM (field ID 087) contains the maximum amount of program
storage in use by the task both above and below the 16MB line. Fields PC31AHWM
(139) and PC24BHWM (108) are subsets of PCSTGHWM, containing the maximum
amounts in use above and below the 16MB line, respectively. Further subset-fields
contain the maximum amounts of storage in use by the task in each of the CICS
dynamic storage areas (DSAs).

Note: The totaled values of all the subsets in a superset may not necessarily equate
to the value of the superset; for example, the value of PC31AHWM plus the
value of PC24BHWM may not equal the value of PCSTGHWM. This is
because the peaks in the different types of program storage acquired by the
user task do not necessarily occur simultaneously.

The “high-water mark” fields are described in detail in “User storage fields in
group DFHSTOR:” on page 92. For information about the program storage fields,
see “Program storage fields in group DFHSTOR:” on page 94.



PCSTGHWM - high-water mark of program storage in all CICS DSAs

PC31AHWM - HWM of PC storage above 16MB

PC31CHWM - ECDSA HWM

PC31SHWM - ESDSA HWM

PC31RHWM - ERDSA HWM

16MB line

PC24BHWM - HWM of PC storage below 16MB

PC24CHWM - CDSA HWM

PC24SHWM - SDSA HWM

PC24RHWM - RDSA HWM

Figure 11. Relationships between the “high-water mark” program storage data fields

| Performance data in group DFHCBTS


| Group DFHCBTS contains the following performance data:
| 200 (TYPE-C, ‘PRCSNAME’, 36 BYTES)
| The name of the CICS business transaction service (BTS) process of which the
| user task formed part.
| 201 (TYPE-C, ‘PRCSTYPE’, 8 BYTES)
| The process-type of the CICS BTS process of which the user task formed part.
| 202 (TYPE-C, ‘PRCSID’, 52 BYTES)
| The CICS-assigned identifier of the CICS BTS root activity that the user task
| implemented.
| 203 (TYPE-C, ‘ACTVTYID’, 52 BYTES)
| The CICS-assigned identifier of the CICS BTS activity that the user task
| implemented.
| 204 (TYPE-C, ‘ACTVTYNM’, 16 BYTES)
| The name of the CICS BTS activity that the user task implemented.
| 205 (TYPE-A, ‘BARSYNCT’, 4 BYTES)
| The number of CICS BTS run process, or run activity, requests that the user
| task made in order to execute a process or activity synchronously.
| 206 (TYPE-A, ‘BARASYCT’, 4 BYTES)
| The number of CICS BTS run process, or run activity, requests that the user
| task made in order to execute a process or activity asynchronously.
| 207 (Type-A, ‘BALKPACT’, 4 BYTES)
| The number of CICS BTS link process, or link activity, requests that the user
| task issued.



| 208 (TYPE-A, ‘BADPROCT’, 4 BYTES)
| The number of CICS BTS define process requests issued by the user task.
| 209 (TYPE-A, ‘BADACTCT’, 4 BYTES)
| The number of CICS BTS define activity requests issued by the user task.
| 210 (TYPE-A, ‘BARSPACT’, 4 BYTES)
| The number of CICS BTS reset process and reset activity requests issued by the
| user task.
| 211 (TYPE-A, ‘BASUPACT’, 4 BYTES)
| The number of CICS BTS suspend process, or suspend activity, requests issued
| by the user task.
| 212 (TYPE-A, ‘BARMPACT’, 4 BYTES)
| The number of CICS BTS resume process, or resume activity, requests issued
| by the user task.
| 213 (TYPE-A, ‘BADCPACT’, 4 BYTES)
| The number of CICS BTS delete activity, cancel process, or cancel activity,
| requests issued by the user task.
| 214 (TYPE-A, ‘BAACQPCT’, 4 BYTES)
| The number of CICS BTS acquire process, or acquire activity, requests issued
| by the user task.
| 215 (Type-A, ‘BATOTPCT’, 4 BYTES)
| Total number of CICS BTS process and activity requests issued by the user
| task.
| 216 (TYPE-A, ‘BAPRDCCT’, 4 BYTES)
| The number of CICS BTS delete, get, or put, container requests for process data
| containers issued by the user task.
| 217 (TYPE-A, ‘BAACDCCT’, 4 BYTES)
| The number of CICS BTS delete, get, or put, container requests for current
| activity data containers issued by the user task.
| 218 (Type-A, ‘BATOTCCT’, 4 BYTES)
| Total number of CICS BTS delete, get or put, process container and activity
| container requests issued by the user task.
| 219 (TYPE-A, ‘BARATECT’, 4 BYTES)
| The number of CICS BTS retrieve-reattach event requests issued by the user
| task.
| 220 (TYPE-A, ‘BADFIECT’, 4 BYTES)
| The number of CICS BTS define-input event requests issued by the user task.
| 221 (TYPE-A, ‘BATIAECT’, 4 BYTES)
| The number of CICS BTS DEFINE TIMER EVENT, CHECK TIMER EVENT,
| DELETE TIMER EVENT, and FORCE TIMER EVENT requests issued by the
| user task.
| 222 (TYPE-A, ‘BATOTECT’, 4 BYTES)
| Total number of CICS BTS event-related requests issued by the user task.

| Performance data in group DFHCICS


Group DFHCICS contains the following performance data:
005 (TYPE-T, ‘START’, 8 BYTES)
Start time of measurement interval. This is one of the following:



v The time at which the user task was attached
| v The time at which data recording was most recently reset in support of the
| MCT user event monitoring point DELIVER option or the monitoring
| options MNCONV, MNSYNC, or FREQUENCY.
For more information, see “Clocks and time stamps” on page 73.

Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
006 (TYPE-T, ‘STOP’, 8 BYTES)
Finish time of measurement interval. This is either the time at which the user
task was detached, or the time at which data recording was completed in
support of the MCT user event monitoring point DELIVER option or the
monitoring options MNCONV, MNSYNC or FREQUENCY. For more
information, see “Clocks and time stamps” on page 73.

Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
| 025 (TYPE-A, ‘CFCAPICT’, 4 BYTES)
| Number of CICS OO foundation class requests, including the Java API for
| CICS (JCICS) classes, issued by the user task.
089 (TYPE-C, ‘USERID’, 8 BYTES)
User identification at task creation. This can also be the remote user identifier
for a task created as the result of receiving an ATTACH request across an MRO
or APPC link with attach-time security enabled.
103 (TYPE-S, ‘EXWTTIME’, 8 BYTES)
Accumulated data for exception conditions. The 32-bit clock contains the total
elapsed time for which the user waited on exception conditions. The 24-bit
period count equals the number of exception conditions that have occurred for
this task. For more information, see “Exception class data” on page 107.

Note: The performance class data field ‘exception wait time’ will be updated
when exception conditions are encountered even when the exception
class is inactive.
112 (TYPE-C, ‘RTYPE’, 4 BYTES)
Performance record type (low-order byte-3):
C Record output for a terminal converse
D Record output for a user EMP DELIVER request
F Record output for a long-running transaction
S Record output for a syncpoint
T Record output for a task termination.
130 (TYPE-C, ‘RSYSID’, 4 bytes)
The name (sysid) of the remote system to which this transaction was routed
either statically or dynamically.

This field also includes the connection name (sysid) of the remote system to
which this transaction was routed when using the CRTE routing transaction.
The field will be null for those CRTE transactions which establish or cancel the
transaction routing session.

Note: If the transaction was not routed or was routed locally, this field is set to
null. Also see the program name (field 71).



131 (TYPE-A, ‘PERRECNT’, 4 bytes)
The number of performance class records written by the CICS Transaction
Server for OS/390 Monitoring Facility (CMF) for the user task.
167 (TYPE-C, ‘SRVCLASS’, 8 bytes)
The MVS Workload Manager (WLM) service class for this transaction. This
field is null if the transaction was WLM-classified in another CICS region.
168 (TYPE-C, ‘RPTCLASS’, 8 bytes)
The MVS Workload Manager (WLM) report class for this transaction. This field
is null if the transaction was WLM-classified in another CICS region.

| Performance data in group DFHDATA


| Group DFHDATA contains the following performance data:
| 179 (TYPE-A, ‘IMSREQCT’, 4 bytes)
| The number of IMS (DBCTL) requests issued by the user task.
| 180 (TYPE-A, ‘DB2REQCT’, 8 bytes)
| The number of DB2 (EXEC SQL and IFI) requests issued by the user task.
| 186 (TYPE-S, ‘IMSWAIT’, 8 bytes)
| The elapsed time in which the user task waited for DBCTL to service the IMS
| requests issued by the user task.

| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 187 (TYPE-S, ‘DB2RDYQW’, 8 bytes)
| The elapsed time in which the user task waited for a DB2 thread to become
| available.

| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 188 (TYPE-S, ‘DB2CONWT’, 8 bytes)
| The elapsed time in which the user task waited for a CICS DB2 subtask to
| become available.

| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 189 (TYPE-S, ‘DB2WAIT’, 8 bytes)
| The elapsed time in which the user task waited for DB2 to service the DB2
| EXEC SQL and IFI requests issued by the user task.

| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.

| Performance data in group DFHDEST


Group DFHDEST contains the following performance data:
041 (TYPE-A, ‘TDGETCT’, 4 BYTES)
Number of transient data GET requests issued by the user task.
042 (TYPE-A, ‘TDPUTCT’, 4 BYTES)
Number of transient data PUT requests issued by the user task.
043 (TYPE-A, ‘TDPURCT’, 4 BYTES)
Number of transient data PURGE requests issued by the user task.
091 (TYPE-A, ‘TDTOTCT’, 4 BYTES)
Total number of transient data requests issued by the user task. This field is
the sum of TDGETCT, TDPUTCT, and TDPURCT.
101 (TYPE-S, ‘TDIOWTT’, 8 BYTES)
Elapsed time in which the user waited for VSAM transient data I/O. For more
information see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76. This field is a subset of the task suspend time,
| SUSPTIME (014) field.

| Performance data in group DFHDOCH


| Group DFHDOCH contains the following performance data:
| 226 (TYPE-A, ‘DHCRECT’, 4 bytes)
| The number of document handler CREATE requests issued by the user task.
| 227 (TYPE-A, ‘DHINSCT’, 4 bytes)
| The number of document handler INSERT requests issued by the user task.
| 228 (TYPE-A, ‘DHSETCT’, 4 bytes)
| The number of document handler SET requests issued by the user task.
| 229 (TYPE-A, ‘DHRETCT’, 4 bytes)
| The number of document handler RETRIEVE requests issued by the user task.
| 230 (TYPE-A, ‘DHTOTCT’, 4 bytes)
| The total number of document handler requests issued by the user task.
| 240 (TYPE-A, ‘DHTOTDCL’, 4 bytes)
| The total length of all documents created by the user task.

| Performance data in group DFHFEPI


Group DFHFEPI contains the following performance data:
150 (TYPE-A,‘SZALLOCT’, 4 bytes)
Number of conversations allocated by the user task. This number is
incremented for each FEPI ALLOCATE POOL or FEPI CONVERSE POOL.
151 (TYPE-A,‘SZRCVCT’, 4 bytes)
Number of FEPI RECEIVE requests made by the user task. This number is also
incremented for each FEPI CONVERSE request.

152 (TYPE-A,‘SZSENDCT’, 4 bytes)
Number of FEPI SEND requests made by the user task. This number is also
incremented for each FEPI CONVERSE request.
153 (TYPE-A,‘SZSTRTCT’, 4 bytes)
Number of FEPI START requests made by the user task.
154 (TYPE-A,‘SZCHROUT’, 4 bytes)
Number of characters sent through FEPI by the user task.
155 (TYPE-A,‘SZCHRIN’, 4 bytes)
Number of characters received through FEPI by the user task.
156 (TYPE-S,‘SZWAIT’, 8 bytes)
Elapsed time in which the user task waited for all FEPI services. For more
information see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 157 (TYPE-A,‘SZALLCTO’, 4 bytes)
Number of times the user task timed out while waiting to allocate a
conversation.
158 (TYPE-A,‘SZRCVTO’, 4 bytes)
Number of times the user task timed out while waiting to receive data.
159 (TYPE-A,‘SZTOTCT’, 4 bytes)
Total number of all FEPI API and SPI requests made by the user task.

Performance data in group DFHFILE


Group DFHFILE contains the following performance data:
036 (TYPE-A, ‘FCGETCT’, 4 BYTES)
Number of file GET requests issued by the user task.
037 (TYPE-A, ‘FCPUTCT’, 4 BYTES)
Number of file PUT requests issued by the user task.
038 (TYPE-A, ‘FCBRWCT’, 4 BYTES)
Number of file browse requests issued by the user task. This number excludes
the START and END browse requests.
039 (TYPE-A, ‘FCADDCT’, 4 BYTES)
Number of file ADD requests issued by the user task.
040 (TYPE-A, ‘FCDELCT’, 4 BYTES)
Number of file DELETE requests issued by the user task.
063 (TYPE-S, ‘FCIOWTT’, 8 BYTES)
Elapsed time in which the user task waited for file I/O. For more information,
see “Clocks and time stamps” on page 73, and “A note about wait (suspend)
| times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 070 (TYPE-A, ‘FCAMCT’, 4 BYTES)
Number of times the user task invoked file access-method interfaces. This
number excludes requests for OPEN and CLOSE.

093 (TYPE-A, ‘FCTOTCT’, 4 BYTES)
Total number of file control requests issued by the user task. This number
excludes any request for OPEN, CLOSE, ENABLE, or DISABLE of a file.

How EXEC CICS file commands correspond to file control monitoring fields is
shown in Table 6.
Table 6. EXEC CICS file commands related to file control monitoring fields
EXEC CICS command Monitoring fields
READ FCGETCT and FCTOTCT
READ UPDATE FCGETCT and FCTOTCT
DELETE (after READ UPDATE) FCDELCT and FCTOTCT
DELETE (with RIDFLD) FCDELCT and FCTOTCT
REWRITE FCPUTCT and FCTOTCT
WRITE FCADDCT and FCTOTCT
STARTBR FCTOTCT
READNEXT FCBRWCT and FCTOTCT
READNEXT UPDATE FCBRWCT and FCTOTCT
READPREV FCBRWCT and FCTOTCT
READPREV UPDATE FCBRWCT and FCTOTCT
ENDBR FCTOTCT
RESETBR FCTOTCT
UNLOCK FCTOTCT

Note: The number of STARTBR, ENDBR, RESETBR, and UNLOCK file control
requests can be calculated by subtracting the file request counts,
FCGETCT, FCPUTCT, FCBRWCT, FCADDCT, and FCDELCT from the
total file request count, FCTOTCT.
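The note's subtraction can be sketched in a few lines. This is an illustrative Python fragment, not part of CICS; the CMF field names are used as plain variables and the sample counts are invented:

```python
# Illustrative sketch: derive the count of STARTBR, ENDBR, RESETBR, and
# UNLOCK requests from the CMF file-control counters, as described in the
# note above. Sample values are invented for demonstration.
def browse_control_count(fctotct, fcgetct, fcputct, fcbrwct, fcaddct, fcdelct):
    """Requests not counted individually: STARTBR, ENDBR, RESETBR, UNLOCK."""
    return fctotct - (fcgetct + fcputct + fcbrwct + fcaddct + fcdelct)

# Example: 12 total file-control requests, of which 5 GETs, 1 PUT,
# 3 browse reads, 1 ADD, and 0 DELETEs, leaves 2 browse-control requests
# (for instance one STARTBR and one ENDBR).
print(browse_control_count(12, 5, 1, 3, 1, 0))  # → 2
```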
174 (TYPE-S, ‘RLSWAIT’, 8 BYTES)
| Elapsed time in which the user task waited for RLS file I/O. For more
| information, see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 175 (TYPE-S, ‘RLSCPUT’, 8 BYTES)
The RLS File Request CPU (SRB) time field (RLSCPUT) is the SRB CPU time
this transaction spent processing RLS file requests. This field should be added
to the transaction CPU time field (USRCPUT) when considering the
measurement of the total CPU time consumed by a transaction. Also, this field
cannot be considered a subset of any other single CMF field (including
RLSWAIT). This is because the RLS file requests execute asynchronously
under an MVS SRB which can be running in parallel with the requesting
transaction. It is also possible for the SRB to complete its processing before the
requesting transaction waits for the RLS file request to complete.

Note: This clock field could contain a CPU time of zero with a count of greater
than zero. This is because the CMF timing granularity is measured in 16
microsecond units and the RLS file request(s) may complete in less than
that time unit.
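As suggested above, the total CPU consumed by a transaction that issued RLS file requests can be obtained by adding the two clocks. A minimal sketch in Python for illustration, with invented values assumed already converted to seconds:

```python
# Total CPU for a transaction using RLS file requests is the task's own CPU
# time plus the SRB time spent on its behalf, as described above.
usrcput = 0.004320   # transaction CPU time (field 008), seconds
rlscput = 0.000192   # RLS file request SRB CPU time (field 175), seconds
total_cpu = usrcput + rlscput
print(total_cpu)
```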

| 176 (TYPE-S, ’CFDTWAIT’, 8 BYTES)
| Elapsed time in which the user task waited for a data table access request to
| the Coupling Facility Data Table server to complete. For more information, see
| “Clocks and time stamps” on page 73, and “A note about wait (suspend)
| times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.

| Performance data in group DFHJOUR


Group DFHJOUR contains the following performance data:
010 (TYPE-S, ‘JCIOWTT’, 8 BYTES)
Elapsed time for which the user task waited for journal I/O. For more
information, see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 058 (TYPE-A, ‘JNLWRTCT’, 4 BYTES)
Number of journal write requests issued by the user task.
172 (TYPE-A, ‘LOGWRTCT’, 4 BYTES)
Number of CICS log stream write requests issued by the user task.

Performance data in group DFHMAPP


Group DFHMAPP contains the following performance data:
050 (TYPE-A, ‘BMSMAPCT’, 4 BYTES)
Number of BMS MAP requests issued by the user task. This field corresponds
to the number of RECEIVE MAP requests that did not incur a terminal I/O,
and the number of RECEIVE MAP FROM requests.
051 (TYPE-A, ‘BMSINCT’, 4 BYTES)
Number of BMS IN requests issued by the user task. This field corresponds to
the number of RECEIVE MAP requests that incurred a terminal I/O.
052 (TYPE-A, ‘BMSOUTCT’, 4 BYTES)
Number of BMS OUT requests issued by the user task. This field corresponds
to the number of SEND MAP requests.
090 (TYPE-A, ‘BMSTOTCT’, 4 BYTES)
Total number of BMS requests issued by the user task. This field is the sum of
BMS RECEIVE MAP, RECEIVE MAP FROM, SEND MAP, SEND TEXT, and
SEND CONTROL requests issued by the user task.

Performance data in group DFHPROG


Group DFHPROG contains the following performance data:
055 (TYPE-A, ‘PCLINKCT’, 4 BYTES)
Number of program LINK requests issued by the user task, including the link
to the first program of the user task. This field does not include program LINK
URM (user-replaceable module) requests.

056 (TYPE-A, ‘PCXCTLCT’, 4 BYTES)
Number of program XCTL requests issued by the user task.
057 (TYPE-A, ‘PCLOADCT’, 4 BYTES)
Number of program LOAD requests issued by the user task.
071 (TYPE-C, ‘PGMNAME’, 8 BYTES)
The name of the first program invoked at attach-time.

For a remote transaction:


v If this CICS definition of the remote transaction does not specify a program
name, this field contains blanks.
v If this CICS definition of the remote transaction specifies a program name,
this field contains the name of the specified program. (Note that this is not
necessarily the program that is run on the remote system.)

For a dynamically-routed transaction, if the dynamic transaction routing
program routes the transaction locally and specifies an alternate program
name, this field contains the name of the alternate program.

For a dynamic program link (DPL) mirror transaction, this field contains the
initial program name specified in the dynamic program LINK request. DPL
mirror transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.

For an ONC RPC or WEB alias transaction, this field contains the initial
application program name invoked by the alias transaction. ONC RPC or WEB
alias transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.
072 (TYPE-A, ‘PCLURMCT’, 4 BYTES)
Number of program LINK URM (user-replaceable module) requests issued by,
or on behalf of, the user task.

A user-replaceable module is a CICS-supplied program that is always invoked
at a particular point in CICS processing, as if it were part of the CICS code.
You can modify the supplied program by including your own logic, or replace
it with a version that you write yourself.

The CICS-supplied user-replaceable modules are:


v bridge exit program
v program error program
v transaction restart program
v terminal error program
v node error program
v terminal autoinstall program(s)
v program autoinstall program
v dynamic routing program
v CICS-DBCTL interface status program
v CICS-DB2 dynamic plan exit program
| v distributed dynamic routing program
| v inbound IIOP exit program

For detailed information on CICS user-replaceable programs, see the CICS
Customization Guide.
| 073 (TYPE-A, ‘PCDPLCT’, 4 BYTES)
| Number of distributed program link (DPL) requests issued by the user task.
113 (TYPE-C, ‘ABCODEO’, 4 BYTES)
Original abend code.
114 (TYPE-C, ‘ABCODEC’, 4 BYTES)
Current abend code.
115 (TYPE-S, ‘PCLOADTM’, 8 BYTES)
Elapsed time in which the user task waited for program library (DFHRPL)
fetches. Only fetches for programs with installed program definitions or
autoinstalled as a result of application requests are included in this figure.
However, installed programs residing in the LPA are not included (because
they do not incur a physical fetch from a library). For more information about
program load time, see “Clocks and time stamps” on page 73, and “A note
about program load time” on page 79.

| Performance data in group DFHSOCK


| Group DFHSOCK contains the following performance data:
| 241 (TYPE-S, ‘SOIOWTT’, 8 BYTES)
| The elapsed time in which the user task waited for socket I/O. For more
| information, see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 242 (TYPE-A, ‘SOBYENCT’, 4 BYTES)
| The number of bytes encrypted by the secure sockets layer for the user task.
| 243 (TYPE-A, ‘SOBYDECT’, 4 BYTES)
| The number of bytes decrypted by the secure sockets layer for the user task.
| 244 (TYPE-C, ‘CLIPADDR’, 16 BYTES)
| The client IP address (nnn.nnn.nnn.nnn).

| Performance data in group DFHSTOR


User storage fields in group DFHSTOR:
033 (TYPE-A, ‘SCUSRHWM’, 4 BYTES)
Maximum amount (high-water mark) of user storage allocated to the user task
below the 16MB line, in the user dynamic storage area (UDSA).
054 (TYPE-A, ‘SCUGETCT’, 4 BYTES)
Number of user-storage GETMAIN requests issued by the user task below the
16MB line, in the UDSA.
095 (TYPE-A, ‘SCUSRSTG’, 8 BYTES)
Storage occupancy of the user task below the 16MB line, in the UDSA. This
measures the area under the curve of storage in use against elapsed time. For
more information about storage occupancy, see “A note about storage
occupancy counts” on page 81.
105 (TYPE-A, ‘SCUGETCT’, 4 BYTES)
Number of user-storage GETMAIN requests issued by the user task for storage
above the 16MB line, in the extended user dynamic storage area (EUDSA).

106 (TYPE-A, ‘SCUSRHWM’, 4 BYTES)
Maximum amount (high-water mark) of user-storage allocated to the user task
above the 16MB line, in the EUDSA.
107 (TYPE-A, ‘SCUSRSTG’, 8 BYTES)
Storage occupancy of the user task above the 16MB line, in the EUDSA. This
measures the area under the curve of storage in use against elapsed time. For
more information, see “A note about storage occupancy counts” on page 81.
116 (TYPE-A, ‘SC24CHWM’, 4 BYTES)
Maximum amount (high-water mark) of user-storage allocated to the user task
below the 16MB line, in the CICS dynamic storage area (CDSA).
117 (TYPE-A, ‘SCCGETCT’, 4 BYTES)
Number of user-storage GETMAIN requests issued by the user task for storage
below the 16MB line, in the CDSA.
118 (TYPE-A, ‘SC24COCC’, 8 BYTES)
Storage occupancy of the user task below the 16MB line, in the CDSA. This
measures the area under the curve of storage in use against elapsed time. For
more information, see “A note about storage occupancy counts” on page 81.
119 (TYPE-A, ‘SC31CHWM’, 4 BYTES)
Maximum amount (high-water mark) of user-storage allocated to the user task
above the 16MB line, in the extended CICS dynamic storage area (ECDSA).
120 (TYPE-A, ‘SCCGETCT’, 4 BYTES)
Number of user-storage GETMAIN requests issued by the user task for storage
above the 16MB line, in the ECDSA.
121 (TYPE-A, ‘SC31COCC’, 8 BYTES)
Storage occupancy of the user task above the 16MB line, in the ECDSA. This
measures the area under the curve of storage in use against elapsed time. For
more information, see “A note about storage occupancy counts” on page 81.
Table 7. User storage field id cross reference
UDSA EUDSA CDSA ECDSA
Getmain count 054 105 117 120
High-water-mark 033 106 116 119
Occupancy 095 107 118 121
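Table 7 amounts to a lookup from (DSA, metric) to field ID. A sketch of such a mapping, in illustrative Python (the tuple-key layout is an arbitrary design choice, not a CICS convention):

```python
# Field-ID cross reference for the user-storage fields, as given in Table 7.
USER_STORAGE_FIELD_ID = {
    ("UDSA",  "getmain_count"):    54,
    ("EUDSA", "getmain_count"):   105,
    ("CDSA",  "getmain_count"):   117,
    ("ECDSA", "getmain_count"):   120,
    ("UDSA",  "high_water_mark"):  33,
    ("EUDSA", "high_water_mark"): 106,
    ("CDSA",  "high_water_mark"): 116,
    ("ECDSA", "high_water_mark"): 119,
    ("UDSA",  "occupancy"):        95,
    ("EUDSA", "occupancy"):       107,
    ("CDSA",  "occupancy"):       118,
    ("ECDSA", "occupancy"):       121,
}
print(USER_STORAGE_FIELD_ID[("EUDSA", "high_water_mark")])  # → 106
```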

Shared storage fields in group DFHSTOR:


144 (TYPE-A, ‘SC24SGCT’, 4 BYTES)
Number of storage GETMAIN requests issued by the user task for shared
storage below the 16MB line, in the CDSA or SDSA.
145 (TYPE-A, ‘SC24GSHR’, 4 BYTES)
Number of bytes of shared storage GETMAINed by the user task below the
16MB line, in the CDSA or SDSA.
146 (TYPE-A, ‘SC24FSHR’, 4 BYTES)
Number of bytes of shared storage FREEMAINed by the user task below the
16MB line, in the CDSA or SDSA.
147 (TYPE-A, ‘SC31SGCT’, 4 BYTES)
Number of storage GETMAIN requests issued by the user task for shared
storage above the 16MB line, in the ECDSA or ESDSA.

148 (TYPE-A, ‘SC31GSHR’, 4 BYTES)
Number of bytes of shared storage GETMAINed by the user task above the
16MB line, in the ECDSA or ESDSA.
149 (TYPE-A, ‘SC31FSHR’, 4 BYTES)
Number of bytes of shared storage FREEMAINed by the user task above the
16MB line, in the ECDSA or ESDSA.

Program storage fields in group DFHSTOR:


For more information on program storage see “Storage manager” on page 452.
087 (TYPE-A, ‘PCSTGHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task both above and below the 16MB line.
108 (TYPE-A, ‘PC24BHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task below the 16MB line. This field is a subset of PCSTGHWM (field ID 087)
that resides below the 16MB line.
122 (TYPE-A, ‘PC31RHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task above the 16MB line, in the extended read-only dynamic storage area
(ERDSA). This field is a subset of PC31AHWM (field ID 139) that resides in the
ERDSA.
139 (TYPE-A, ‘PC31AHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task above the 16MB line. This field is a subset of PCSTGHWM (field ID 087)
that resides above the 16MB line.
142 (TYPE-A, ‘PC31CHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task above the 16MB line, in the extended CICS dynamic storage area
(ECDSA). This field is a subset of PC31AHWM (139) that resides in the
ECDSA.
143 (TYPE-A, ‘PC24CHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task below the 16MB line, in the CICS dynamic storage area (CDSA). This field
is a subset of PC24BHWM (108) that resides in the CDSA.
160 (TYPE-A, ‘PC24SHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task below the 16MB line, in the shared dynamic storage area (SDSA). This
field is a subset of PC24BHWM (108) that resides in the SDSA.
161 (TYPE-A, ‘PC31SHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task above the 16MB line, in the extended shared dynamic storage area
(ESDSA). This field is a subset of PC31AHWM (139) that resides in the ESDSA.
162 (TYPE-A, ‘PC24RHWM’, 4 BYTES)
Maximum amount (high-water mark) of program storage in use by the user
task below the 16MB line, in the read-only dynamic storage area (RDSA). This
field is a subset of PC24BHWM (108) that resides in the RDSA.

Performance data in group DFHSYNC
Group DFHSYNC contains the following performance data:
060 (TYPE-A, ‘SPSYNCCT’, 4 BYTES)
Number of SYNCPOINT requests issued during the user task.
Notes:
1. A SYNCPOINT is implicitly issued as part of the task-detach processing.
2. A SYNCPOINT is issued at PSB termination for DBCTL.
173 (TYPE-S, ‘SYNCTIME’, 8 BYTES)
Total elapsed time for which the user task was dispatched and was processing
Syncpoint requests.
| 177 (TYPE-S, ’SRVSYWTT’, 8 BYTES)
| Total elapsed time in which the user task waited for syncpoint or
| resynchronization processing using the Coupling Facility data tables server to
| complete.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 196 (TYPE-S, ’SYNCDLY’, 8 BYTES)
| The elapsed time in which the user task waited for a syncpoint request to be
| issued by its parent transaction. The user task was executing as a result of the
| parent task issuing a CICS BTS run-process or run-activity request to execute a
| process or activity synchronously. For more information, see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.

| Performance data in group DFHTASK


Group DFHTASK contains the following performance data:
001 (TYPE-C, ‘TRAN’, 4 BYTES)
Transaction identification.
004 (TYPE-C, ‘T’, 4 BYTES)
Transaction start type. The high-order bytes (0 and 1) are set to:
"TO" Attached from terminal input
"S" Attached by automatic transaction initiation (ATI) without data
"SD" Attached by automatic transaction initiation (ATI) with data
"QD" Attached by transient data trigger level
"U" Attached by user request
"TP" Attached from terminal TCTTE transaction ID
"SZ" Attached by Front End Programming Interface (FEPI).
007 (TYPE-S, ‘USRDISPT’, 8 BYTES)
Total elapsed time during which the user task was dispatched on each CICS
TCB under which the task executed. This can include QR, RO, CO, FO, SZ if
FEPI is active, and RP if the RPC ONC support or CICS Web interface is active.
For more information, see “Clocks and time stamps” on page 73.
008 (TYPE-S, ‘USRCPUT’, 8 BYTES)
| Processor time for which the user task was dispatched on each CICS TCB (SL,
| SO, J8, L8, and S8, or QR, RO, CO, FO, SZ if FEPI is active, and RP if the RPC
ONC support or CICS Web interface is active). For more information, see
“Clocks and time stamps” on page 73.
014 (TYPE-S, ‘SUSPTIME’, 8 BYTES)
Total elapsed wait time for which the user task was suspended by the
dispatcher. This includes:
v The elapsed time waiting for the first dispatch. This also includes any delay
incurred because of the limits set for this transaction’s transaction class (if
any) or by the system parameter MXT being reached.
v The task suspend (wait) time.
v The elapsed time waiting for redispatch after a suspended task has been
resumed.
For more information, see “A note about wait (suspend) times” on page 76.
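An illustrative decomposition of these components, assuming the CMF clocks have been converted to seconds (the values are invented, and SUSPTIME also contains other specific wait clocks, so this is only an approximation):

```python
# SUSPTIME (014) aggregates first-dispatch delay, task suspend (wait) time,
# and redispatch wait. DSPDELAY (125) and DISPWTT (102) report the first
# and last of these individually, so subtracting them approximates the
# pure suspend (wait) component.
susptime = 0.250   # total suspend time (field 014), seconds
dspdelay = 0.040   # first-dispatch delay (field 125), seconds
dispwtt  = 0.060   # redispatch wait (field 102), seconds
wait_component = susptime - dspdelay - dispwtt
print(wait_component)
```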
031 (TYPE-P, ‘TRANNUM’, 4 BYTES)
Transaction identification number.

Note: The transaction number field is normally a 4-byte packed decimal


number. However, some CICS system tasks are identified by special
character ‘transaction numbers’, as follows:
v ‘ III’ for system initialization task
v ‘ TCP’ for terminal control.

These special identifiers are placed in bytes 2 through 4. Byte 1 is a


blank (X'40') before the terminal control TCP identifier, and a null value
(X'00') before the others.
059 (TYPE-A, ‘ICPUINCT’, 4 BYTES)
Number of interval control START or INITIATE requests during the user task.
064 (TYPE-A, ‘TASKFLAG’, 4 BYTES)
Task error flags, a string of 32 bits used for signaling unusual conditions
occurring during the user task:
Bit 0 Reserved
Bit 1 Detected an attempt either to start a user clock that was already
running, or to stop one that was not running
Bits 2–31
Reserved
066 (TYPE-A, ‘ICTOTCT’, 4 BYTES)
Total number of Interval Control Start, Cancel, Delay, and Retrieve requests
issued by the user task.
| 082 (TYPE-C, ‘TRNGRPID’, 28 BYTES)
| The transaction group ID is assigned at transaction attach time, and can be
| used to correlate the transactions that CICS executes for the same incoming
| work request (for example, the CWXN and CWBA transactions for Web
| requests). This transaction group ID relationship is useful when applied to the
| requests that originate through the CICS Web, IIOP, or 3270 bridge interface, as
| indicated by the transaction origin in byte 4 of the transaction flags field
| (group name DFHTASK, field ID 164).
097 (TYPE-C, ‘NETUOWPX’, 20 BYTES)
Fully qualified name by which the originating system is known to the VTAM
network. This name is assigned at attach time using either the netname
derived from the TCT (when the task is attached to a local terminal), or the
netname passed as part of an ISC APPC or IRC attach header. At least three
padding bytes (X'00') are present at the right end of the name.

If the originating terminal is VTAM across an ISC APPC or IRC link, the
NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is networkid.generic_applid.

All originating information passed as part of an ISC LUTYPE6.1 attach header
has the same format as the non-VTAM terminal originators above.

When the originator is communicating over an external CICS interface (EXCI)
session, the name is a concatenation of:
'DFHEXCIU | . | MVS Id | Address Space Id (ASID)'
8 bytes | 1 byte | 4 bytes | 4 bytes

derived from the originating system. That is, the name is a 17-byte LU name
consisting of:
v An 8-byte eye-catcher set to ‘DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.
v A 4-byte field containing the address space id (ASID) in which the client
program is running. This field contains the 4-character EBCDIC
representation of the 2-byte hex address space id.
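A sketch of splitting the 17-byte EXCI-derived name into its documented parts. Python is used for illustration, and the example name is invented:

```python
# Split a 17-byte EXCI originator LU name into its documented parts:
# 8-byte eye-catcher, a period, 4-character MVS id, 4-character ASID.
def parse_exci_lu_name(name):
    assert len(name) == 17 and name[8] == "."
    return {"eyecatcher": name[0:8], "mvs_id": name[9:13], "asid": name[13:17]}

parts = parse_exci_lu_name("DFHEXCIU.MVSA0041")
print(parts)  # {'eyecatcher': 'DFHEXCIU', 'mvs_id': 'MVSA', 'asid': '0041'}
```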
098 (TYPE-C, ‘NETUOWSX’, 8 BYTES)
Name by which the network unit of work id is known within the originating
system. This name is assigned at attach time using either an STCK-derived
token (when the task is attached to a local terminal), or the network unit of
work id passed as part of an ISC APPC or IRC attach header.

| The first six bytes of this field are a binary value derived from the system
| clock of the originating system and which can wrap round at intervals of
| several months.

The last two bytes of this field are for the period count. These may change
during the life of the task as a result of syncpoint activity.

Note: When using MRO or ISC, the NETUOWSX field must be combined with
the NETUOWPX field (097) to uniquely identify a task, because
NETUOWSX is unique only to the originating CICS system.
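A sketch of the combination the note describes, with invented values; the `/` separator and the string/bytes representations are arbitrary illustrative choices, not a CICS convention:

```python
# Combine NETUOWPX (097) and NETUOWSX (098) into one key that is unique
# across originating CICS systems, as the note above recommends.
def network_uow_key(netuowpx, netuowsx):
    # Strip the X'00' padding documented for NETUOWPX before combining.
    return netuowpx.rstrip("\x00") + "/" + netuowsx.hex()

key = network_uow_key("NETID.LUNAME\x00\x00\x00",
                      bytes.fromhex("c5a1023f76b10001"))
print(key)  # → NETID.LUNAME/c5a1023f76b10001
```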
102 (TYPE-S, ‘DISPWTT’, 8 BYTES)
Elapsed time for which the user task waited for redispatch. This is the
aggregate of the wait times between each event completion and user-task
redispatch.

Note: This field does not include the elapsed time spent waiting for first
dispatch. This field is a component of the task suspend time, SUSPTIME
(014), field.
109 (TYPE-C, ‘TRANPRI’, 4 BYTES)
Transaction priority when monitoring of the task was initialized (low-order
byte-3).

Chapter 6. The CICS monitoring facility 97


| 123 (TYPE-S, ‘GNQDELAY’, 8 BYTES)
| The elapsed time waiting for a CICS task control global enqueue. For more
| information, see “Clocks and time stamps” on page 73.

| Note: This field is a subset of the task suspend time, SUSPTIME (014), field.
| 124 (TYPE-C, ‘BRDGTRAN’, 4 BYTES)
Bridge listener transaction identifier.
125 (TYPE-S, ‘DSPDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field. For more information, see “Clocks and time stamps” on page 73.
126 (TYPE-S, ‘TCLDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set for this transaction’s transaction class, TCLSNAME (166), being
reached. For more information, see “Clocks and time stamps” on page 73.

Note: This field is a subset of the first dispatch delay, DSPDELAY (125), field.
127 (TYPE-S, ‘MXTDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set by the system parameter, MXT, being reached.

Note: The field is a subset of the first dispatch delay, DSPDELAY (125), field.
128 (TYPE-S, ‘LMDELAY’, 8 BYTES)
The elapsed time that the user task waited to acquire a lock on a resource. A
user task cannot explicitly acquire a lock on a resource, but many CICS
modules lock resources on behalf of user tasks using the CICS lock manager
(LM) domain.

For more information about CICS lock manager, see CICS Problem Determination
Guide.

For information about times, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
129 (TYPE-S, ‘ENQDELAY’, 8 BYTES)
The elapsed time waiting for a CICS task control local enqueue. For more
information, see “Clocks and time stamps” on page 73.

Note: This field is a subset of the task suspend time, SUSPTIME (014), field.
132 (TYPE-C, ‘RMUOWID’, 8 BYTES)
The identifier of the unit of work (unit of recovery) for this task. Unit of
recovery values are used to synchronize recovery operations among CICS and
other resource managers, such as IMS and DB2.
163 (TYPE-C, ‘FCTYNAME’, 4 BYTES)
Transaction facility name. This field is null if the transaction is not associated
with a facility. The transaction facility type (if any) can be identified using byte
0 of the transaction flags, TRANFLAG, (164) field.

164 (TYPE-A, ‘TRANFLAG’, 8 BYTES)
Transaction flags, a string of 64 bits used for signaling transaction definition
and status information:
Byte 0 Transaction facility identification
Bit 0 Transaction facility name = none (x’80’)
Bit 1 Transaction facility name = terminal (x’40’)
If this bit is set, FCTYNAME and TERM contain the same
terminal id.
Bit 2 Transaction facility name = surrogate (x’20’)
Bit 3 Transaction facility name = destination (x’10’)
Bit 4 Transaction facility name = 3270 bridge (x’08’)
Bits 5–7
Reserved
Byte 1 Transaction identification information
Bit 0 System transaction (x’80’)
Bit 1 Mirror transaction (x’40’)
Bit 2 DPL mirror transaction (x’20’)
Bit 3 ONC/RPC Alias transaction (x’10’)
Bit 4 WEB Alias transaction (x’08’)
Bit 5 3270 Bridge transaction (x’04’)
| Bit 6 Reserved (x’02’)
| Bit 7 CICS BTS Run transaction (x’01’)
Byte 2 MVS workload manager request (transaction) completion information
Bit 0 Report the total response time for completed work request
(transaction)
Bit 1 Notify that the entire execution phase of the work request is
complete
Bit 2 Notify that a subset of the execution phase of the work request
is complete
Bits 3-7
Reserved
Byte 3 Transaction definition information
Bit 0 Taskdataloc = below (x’80’)
Bit 1 Taskdatakey = cics (x’40’)
Bit 2 Isolate = no (x’20’)
Bit 3 Dynamic = yes (x’10’)
Bits 4–7
Reserved
Byte 4 Reserved
Byte 5 Reserved
Byte 6 Reserved
Byte 7 Recovery manager information
Bit 0 Indoubt wait = no
Bit 1 Indoubt action = commit
Bit 2 Recovery manager - UOW resolved with indoubt action
Bit 3 Recovery manager - Shunt
Bit 4 Recovery manager - Unshunt
Bit 5 Recovery manager - Indoubt failure
Bit 6 Recovery manager - Resource owner failure
Bit 7 Reserved

Note: Bits 2 through 6 will be reset on a SYNCPOINT request when
the MNSYNC=YES option is specified.
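A sketch of testing the hexadecimal masks given above, assuming the 8-byte TRANFLAG field has been read into a Python bytes value (the sample flags are invented):

```python
# Test transaction-identification bits in byte 1 of TRANFLAG (164)
# using the masks documented above (x'40' = mirror, x'20' = DPL mirror).
def is_mirror(tranflag):
    return bool(tranflag[1] & 0x40)

def is_dpl_mirror(tranflag):
    return bool(tranflag[1] & 0x20)

# Byte 0 = x'40' (facility name = terminal), byte 1 = x'20' (DPL mirror).
flags = bytes([0x40, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
print(is_mirror(flags), is_dpl_mirror(flags))  # → False True
```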
166 (TYPE-C, ‘TCLSNAME’, 8 BYTES)
Transaction class name. This field is null if the transaction is not in a
TRANCLASS.
170 (TYPE-S, ‘RMITIME’, 8 BYTES)
Amount of elapsed time spent in the Resource Manager Interface (RMI). For
more information, see “Clocks and time stamps” on page 73, “A note about
wait (suspend) times” on page 76, and Figure 8 on page 80.
171 (TYPE-S, ‘RMISUSP’, 8 BYTES)
Amount of elapsed time the task was suspended by the dispatcher while in the
Resource Manager Interface (RMI). For more information, see “Clocks and time
stamps” on page 73, “A note about wait (suspend) times” on page 76, and
Figure 8 on page 80.

Note: The field is a subset of the task suspend time, SUSPTIME (014), field
and also the RMITIME (170) field.
181 (TYPE-S, ‘WTEXWAIT’, 8 BYTES)
The elapsed time that the user task waited for one or more ECBs, passed to
CICS by the user task using the EXEC CICS WAIT EXTERNAL ECBLIST
command, to be MVS POSTed. The user task can wait on one or more ECBs. If
it waits on more than one, it is dispatchable as soon as one of the ECBs is
posted. For more information, see “Clocks and time stamps” on page 73, and
“A note about wait (suspend) times” on page 76.

Note: This field is a component of the task suspend time, (SUSPTIME) (014),
field.
182 (TYPE-S, ‘WTCEWAIT’, 8 BYTES)
The elapsed time the user task waited for:
v One or more ECBs, passed to CICS by the user task using the EXEC CICS
WAITCICS ECBLIST command, to be MVS POSTed. The user task can wait
on one or more ECBs. If it waits on more than one, it is dispatchable as soon
as one of the ECBs is posted.
v Completion of an event initiated by the same or by another user task. The
event would normally be the posting, at the expiration time, of a timer-event
control area provided in response to an EXEC CICS POST command. The
EXEC CICS WAIT EVENT command provides a method of directly giving
up control to some other task until the event being waited on is completed.
For more information, see “Clocks and time stamps” on page 73, and “A note
about wait (suspend) times” on page 76.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
183 (TYPE-S, ‘ICDELAY’, 8 BYTES)
The elapsed time the user task waited as a result of issuing either:
v An interval control EXEC CICS DELAY command for a specified time
interval, or
v A specified time of day to expire, or
v An interval control EXEC CICS RETRIEVE command with the WAIT option
specified.

For more information, see “Clocks and time stamps” on page 73,
and “A note about wait (suspend) times” on page 76.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
184 (TYPE-S, ‘GVUPWAIT’, 8 BYTES)
The elapsed time the user task waited as a result of giving up control to
another task. A user task can give up control in many ways. Some
examples are application programs that use one or more of the following
EXEC CICS API or SPI commands:
v Using the EXEC CICS SUSPEND command. This command causes the
issuing task to relinquish control to another task of higher or equal
dispatching priority. Control is returned to this task as soon as no other
task of a higher or equal priority is ready to be dispatched.
v Using the EXEC CICS CHANGE TASK PRIORITY command. This
command immediately changes the priority of the issuing task and
causes the task to give up control in order for it to be dispatched at its
new priority. The task is not redispatched until tasks of higher or equal
priority, and that are also dispatchable, have been dispatched.
v Using the EXEC CICS DELAY command with INTERVAL (0). This
command causes the issuing task to relinquish control to another task of
higher or equal dispatching priority. Control is returned to this task as
soon as no other task of a higher or equal priority is ready to be
dispatched.
v Using the EXEC CICS POST command requesting notification that a
specified time has expired. This command causes the issuing task to
relinquish control to give CICS the opportunity to post the time-event
control area.
v Using the EXEC CICS PERFORM RESETTIME command to synchronize
the CICS date and time with the MVS system date and time of day.
v Using the EXEC CICS START TRANSID command with the ATTACH
option.
For more information, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field.

| 190 (TYPE-C, ‘RRMSURID’, 16 BYTES)
| RRMS/MVS unit-of-recovery ID (URID).
| 191 (TYPE-S, ‘RRMSWAIT’, 8 BYTES)
| The elapsed time in which the user task waited indoubt using resource
| recovery services for EXCI.

| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 195 (TYPE-S, ‘RUNTRWTT’, 8 BYTES)
| The elapsed time in which the user task waited for completion of a
| transaction that executed as a result of the user task issuing a CICS BTS
| run process, or run activity, request to execute a process, or activity,
| synchronously.

| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 248 (TYPE-A, ‘CHMODECT’, 4 BYTES)
| The number of CICS change-TCB modes issued by the user task.
| 249 (TYPE-S, ‘QRMODDLY’, 8 BYTES)
| The elapsed time for which the user task waited for redispatch on the
| CICS QR TCB. This is the aggregate of the wait times between each event
| completion and user-task redispatch.

| Note: This field does not include the elapsed time spent waiting for the
| first dispatch. The QRMODDLY field is a component of the task
| suspend time, SUSPTIME (014), field.
| 250 (TYPE-S, ‘MXTOTDLY’, 8 BYTES)
| The elapsed time in which the user task waited to obtain a CICS open
| TCB, because the region had reached the limit set by the system parameter,
| MAXOPENTCBS.

| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.


| 251 (TYPE-A, ‘TCBATTCT’, 8 BYTES)
| The number of CICS TCBs attached by or on behalf of the user task.
| 253 (TYPE-S, ‘JVMTIME’, 8 BYTES)
| The elapsed time spent in the CICS JVM by the user task.
| 254 (TYPE-S, ‘JVMSUSP’, 8 BYTES)
| The elapsed time the user task was suspended by the CICS dispatcher
| while running in the CICS JVM.

| Note: This field is a subset of the task suspend time, SUSPTIME (014),
| field.

| 255 (TYPE-S, ‘QRDISPT’, 8 BYTES)
| The elapsed time for which the user task was dispatched on the CICS QR
| TCB. For more information, see “Clocks and time stamps” on page 73.
| 256 (TYPE-S, ‘QRCPUT’, 8 BYTES)
| The processor time for which the user task was dispatched on the CICS QR
| TCB. For more information, see “Clocks and time stamps” on page 73.
| 257 (TYPE-S, ‘MSDISPT’, 8 BYTES)
| Elapsed time for which the user task was dispatched on each CICS TCB
| (RO, CO, FO, SZ if FEPI is active, and RP if the ONC/RPC or CICS Web
| Interface feature is installed and active; modes SO and SL are used only if
| TCPIP=YES is specified as a system initialization parameter). For more
| information, see “Clocks and time stamps” on page 73.
| 258 (TYPE-S, ‘MSCPUT’, 8 BYTES)
| The processor time for which the user task was dispatched on each CICS
| TCB (RO, CO, FO, SZ if FEPI is active, and RP if the ONC/RPC or CICS
| Web Interface feature is installed and active; modes SO and SL are used
| only if TCPIP=YES is specified as a system initialization parameter). For
| more information, see “Clocks and time stamps” on page 73.
| 259 (TYPE-S, ‘L8CPUT’, 8 BYTES)
| The processor time for which the user task was dispatched on the CICS L8
| TCB. For more information see “Clocks and time stamps” on page 73.
| 260 (TYPE-S, ‘J8CPUT’, 8 BYTES)
| The processor time for which the user task was dispatched on each CICS J8
| TCB. For more information, see “Clocks and time stamps” on page 73.
| 261 (TYPE-S, ‘S8CPUT’, 8 BYTES)
| The processor time for which the user task was dispatched on the CICS S8
| TCB. For more information, see “Clocks and time stamps” on page 73.
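Fields 255 through 261 break dispatch and processor time down by TCB mode. As an illustration only, and on the assumption that the mode-specific CPU clocks cover all TCB modes the task used, the task's total CPU time can be reassembled like this (the function name and sample values are invented):

```python
def total_cpu(qrcput, mscput, l8cput, j8cput, s8cput):
    """Sum of the per-mode CPU clocks (fields 256 and 258-261).
    Assumption: the task used no TCB modes other than these."""
    return qrcput + mscput + l8cput + j8cput + s8cput

# 40ms on the QR TCB, 5ms on the miscellaneous TCBs, 20ms on L8:
print(round(total_cpu(0.040, 0.005, 0.020, 0.0, 0.0), 3))
```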

Performance data in group DFHTEMP


Group DFHTEMP contains the following performance data:
011 (TYPE-S, ‘TSIOWTT’, 8 BYTES)
Elapsed time for which the user task waited for VSAM temporary storage I/O.
For more information see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 044 (TYPE-A, ‘TSGETCT’, 4 BYTES)
Number of temporary-storage GET requests issued by the user task.
046 (TYPE-A, ‘TSPUTACT’, 4 BYTES)
Number of PUT requests to auxiliary temporary storage issued by the user
task.
047 (TYPE-A, ‘TSPUTMCT’, 4 BYTES)
Number of PUT requests to main temporary storage issued by the user task.
092 (TYPE-A, ‘TSTOTCT’, 4 BYTES)
| Total number of temporary storage requests issued by the user task. This field
| is the sum of the temporary storage READQ (TSGETCT), WRITEQ AUX
| (TSPUTACT), WRITEQ MAIN (TSPUTMCT), and DELETEQ requests issued by
| the user task.
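The summation that defines TSTOTCT can be shown in a short sketch. This is an illustration only; the DELETEQ count is not surfaced as a separate field in this group, so it appears here as a hypothetical input:

```python
def ts_total(tsgetct, tsputact, tsputmct, deleteq_count):
    """TSTOTCT as defined above: READQ (TSGETCT) + WRITEQ AUX
    (TSPUTACT) + WRITEQ MAIN (TSPUTMCT) + DELETEQ requests."""
    return tsgetct + tsputact + tsputmct + deleteq_count

# A task that read a queue 5 times, wrote 3 auxiliary and 2 main
# items, and deleted the queue once:
print(ts_total(5, 3, 2, 1))   # 11
```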

178 (TYPE-S, ‘TSSHWAIT’, 8 BYTES)
Elapsed time that the user task waited for an asynchronous shared temporary
storage request to a temporary storage data server to complete. For more
information, see “Clocks and time stamps” on page 73, and “A note about wait
(suspend) times” on page 76.

Note: This field is a component of the task suspend time, SUSPTIME (014),
field.

Performance data in group DFHTERM


Group DFHTERM contains the following performance data:
002 (TYPE-C, ‘TERM’, 4 BYTES)
Terminal or session identification. This field is null if the task is not associated
with a terminal or session.
009 (TYPE-S, ‘TCIOWTT’, 8 BYTES)
Elapsed time for which the user task waited for input from the terminal
operator, after issuing a RECEIVE request. For more information, see “Clocks
and time stamps” on page 73, and “A note about wait (suspend) times” on
| page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 034 (TYPE-A, ‘TCMSGIN1’, 4 BYTES)
Number of messages received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
035 (TYPE-A, ‘TCMSGOU1’, 4 BYTES)
Number of messages sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
067 (TYPE-A, ‘TCMSGIN2’, 4 BYTES)
Number of messages received from the LUTYPE6.1 alternate terminal facilities
by the user task.
068 (TYPE-A, ‘TCMSGOU2’, 4 BYTES)
Number of messages sent to the LUTYPE6.1 alternate terminal facilities by the
user task.
069 (TYPE-A, ‘TCALLOCT’, 4 BYTES)
Number of TCTTE ALLOCATE requests issued by the user task for LUTYPE6.2
(APPC), LUTYPE6.1, and IRC sessions.
083 (TYPE-A, ‘TCCHRIN1’, 4 BYTES)
Number of characters received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
084 (TYPE-A, ‘TCCHROU1’, 4 BYTES)
Number of characters sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
085 (TYPE-A, ‘TCCHRIN2’, 4 BYTES)
Number of characters received from the LUTYPE6.1 alternate terminal facilities
by the user task. (Not applicable to ISC APPC.)
086 (TYPE-A, ‘TCCHROU2’, 4 BYTES)
Number of characters sent to the LUTYPE6.1 alternate terminal facilities by the
user task. (Not applicable to ISC APPC.)

100 (TYPE-S, ‘IRIOWTT’, 8 BYTES)
Elapsed time for which the user task waited for control at this end of an MRO
link. For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 111 (TYPE-C, ‘LUNAME’, 8 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. If the task is executing in an application-owning or file-owning
region, the LUNAME is the generic applid of the originating connection for
MRO, LUTYPE6.1, and LUTYPE6.2 (APPC). The LUNAME is blank if the
originating connection is an external CICS interface (EXCI).
133 (TYPE-S, ‘LU61WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.1
connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.1 connections, but not the waits incurred due to
LUTYPE6.1 syncpoint flows. For more information see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 134 (TYPE-S, ‘LU62WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.2
(APPC) connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.2 (APPC) connections, but not the waits
incurred due to LUTYPE6.2 (APPC) syncpoint flows. For more information, see
“Clocks and time stamps” on page 73, and “A note about wait (suspend)
| times” on page 76

| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 135 (TYPE-A, ‘TCM62IN2’, 4 BYTES)
Number of messages received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
136 (TYPE-A, ‘TCM62OU2’, 4 BYTES)
Number of messages sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
137 (TYPE-A, ‘TCC62IN2’, 4 BYTES)
Number of characters received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
138 (TYPE-A, ‘TCC62OU2’, 4 BYTES)
Number of characters sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
165 (TYPE-A, ‘TERMINFO’, 4 BYTES)
Terminal or session information for this task’s principal facility as identified in
the ‘TERM’ field id 002. This field is null if the task is not associated with a
terminal or session facility.
Byte 0 Identifies whether this task is associated with a terminal or session.
This field can be set to one of the following values:
X'00' None

X'01' Terminal
X'02' Session
Byte 1 If the principal facility for this task is a session (Byte 0 = x’02’), this
field identifies the session type. This field can be set to one of the
following values:
X'00' None
X'01' IRC
X'02' IRC XM
X'03' IRC XCF
X'04' LU61
X'05' LU62 Single
X'06' LU62 Parallel
Byte 2 Identifies the access method defined for the terminal id or session id in
field TERM. This field can be set to one of the following values:
X'00' None
X'01' VTAM
X'02' BTAM
X'03' BSAM
X'04' TCAM
X'05' TCAMSNA
X'06' BGAM
X'07' CONSOLE
Byte 3 Identifies the terminal or session type for the terminal id or session id
in TERM (see the RDO TYPETERM definition). For a list of the typeterm
definitions, see the CICS Resource Definition Guide.
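The byte layout of TERMINFO lends itself to a small decoding routine. The sketch below is illustrative only: it covers just the values listed in this section, and the function and table names are invented here:

```python
# Value tables transcribed from the TERMINFO (165) description above.
FACILITY = {0x00: "None", 0x01: "Terminal", 0x02: "Session"}
SESSION_TYPE = {0x00: "None", 0x01: "IRC", 0x02: "IRC XM", 0x03: "IRC XCF",
                0x04: "LU61", 0x05: "LU62 Single", 0x06: "LU62 Parallel"}
ACCESS_METHOD = {0x00: "None", 0x01: "VTAM", 0x02: "BTAM", 0x03: "BSAM",
                 0x04: "TCAM", 0x05: "TCAMSNA", 0x06: "BGAM", 0x07: "CONSOLE"}

def decode_terminfo(terminfo):
    """Decode the 4 bytes of the TERMINFO (165) field."""
    b0, b1, b2, b3 = terminfo
    return {
        "facility": FACILITY.get(b0, "?"),
        # Byte 1 is meaningful only when the principal facility is a session
        "session_type": SESSION_TYPE.get(b1, "?") if b0 == 0x02 else None,
        "access_method": ACCESS_METHOD.get(b2, "?"),
        "typeterm_code": b3,   # see the CICS Resource Definition Guide
    }

info = decode_terminfo(bytes([0x02, 0x01, 0x01, 0x00]))
print(info["facility"], info["session_type"], info["access_method"])
# prints: Session IRC VTAM
```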
169 (TYPE-C, ‘TERMCNNM’, 4 BYTES)
Terminal session connection name. If the terminal facility associated with this
transaction is a session, this field is the name of the owning connection (sysid).

A terminal facility can be identified as a session by using byte 0 of the terminal
information, TERMINFO (165), field. If the value is x’02’ the terminal facility is
a session.

| Performance data in group DFHWEBB


| Group DFHWEBB contains the following performance data:
| 231 (TYPE-A, ‘WBRCVCT’, 4 BYTES)
| The number of CICS Web interface RECEIVE requests issued by the user task.
| 232 (TYPE-A, ‘WBCHRIN’, 4 BYTES)
| The number of characters received by the CICS Web interface RECEIVE
| requests issued by the user task.
| 233 (TYPE-A, ‘WBSENDCT’, 4 BYTES)
| The number of CICS Web interface SEND requests issued by the user task.
| 234 (TYPE-A, ‘WBCHROUT’, 4 BYTES)
| The number of characters sent by the CICS Web interface SEND requests
| issued by the user task.
| 235 (TYPE-A, ‘WBTOTWCT’, 4 BYTES)
| The total number of CICS Web interface requests issued by the user task.

| 236 (TYPE-A, ‘WBREPRCT’, 4 BYTES)
| The number of reads from the repository in shared temporary storage issued
| by the user task.
| 237 (TYPE-A, ‘WBREPWCT’, 4 BYTES)
| The number of writes to the repository in shared temporary storage issued by
| the user task.

|
End of Product-sensitive programming interface

Exception class data


Product-sensitive programming interface

Exception records are produced after each of the following conditions encountered
by a transaction has been resolved:
v Wait for storage in the CDSA
v Wait for storage in the UDSA
v Wait for storage in the SDSA
v Wait for storage in the RDSA
v Wait for storage in the ECDSA
v Wait for storage in the EUDSA
v Wait for storage in the ESDSA
v Wait for storage in the ERDSA
v Wait for auxiliary temporary storage
v Wait for auxiliary temporary storage string
v Wait for auxiliary temporary storage buffer
| v Wait for coupling facility data tables locking (request) slot
| v Wait for coupling facility data tables non-locking (request) slot (With coupling
| facility data tables each CICS has a number of slots available for requests in the
| CF data table. When all available slots are in use, any further request must wait.)
v Wait for file buffer
v Wait for file string
| v Wait for LSRPOOL buffer
v Wait for LSRPOOL string

These records are fixed format. The format of these exception records is as follows:
MNEXCDS  DSECT
EXCMNTRN DS    CL4        TRANSACTION IDENTIFICATION
EXCMNTER DS    XL4        TERMINAL IDENTIFICATION
EXCMNUSR DS    CL8        USER IDENTIFICATION
EXCMNTST DS    CL4        TRANSACTION START TYPE
EXCMNSTA DS    XL8        EXCEPTION START TIME
EXCMNSTO DS    XL8        EXCEPTION STOP TIME
EXCMNTNO DS    PL4        TRANSACTION NUMBER
EXCMNTPR DS    XL4        TRANSACTION PRIORITY
         DS    CL4        RESERVED
EXCMNLUN DS    CL8        LUNAME
         DS    CL4        RESERVED
EXCMNEXN DS    XL4        EXCEPTION NUMBER
EXCMNRTY DS    CL8        EXCEPTION RESOURCE TYPE
EXCMNRID DS    CL8        EXCEPTION RESOURCE ID
EXCMNTYP DS    XL2        EXCEPTION TYPE
EXCMNWT  EQU   X'0001'    WAIT
EXCMNBWT EQU   X'0002'    BUFFER WAIT
EXCMNSWT EQU   X'0003'    STRING WAIT
         DS    CL2        RESERVED
EXCMNTCN DS    CL8        TRANSACTION CLASS NAME
EXCMNSRV DS    CL8        SERVICE CLASS NAME
EXCMNRPT DS    CL8        REPORT CLASS NAME
EXCMNNPX DS    CL20       NETWORK UNIT_OF_WORK PREFIX
EXCMNNSX DS    XL8        NETWORK UNIT_OF_WORK SUFFIX
EXCMNTRF DS    XL8        TRANSACTION FLAGS
EXCMNFCN DS    CL4        TRANSACTION FACILITY NAME
EXCMNCPN DS    CL8        CURRENT PROGRAM NAME
EXCMNBTR DS    CL4        BRIDGE TRANSACTION ID
| EXCMNURI DS    XL16       MVS/RRMS UNIT OF RECOVERY ID
| EXCMNRIL DS    F          EXCEPTION RESOURCE ID LENGTH
| EXCMNRIX DS    XL256      EXCEPTION RESOURCE ID (EXTENDED)
*        END OF EXCEPTION RECORD ...
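The fixed layout above can be mapped when post-processing exception records offline. The following sketch is illustrative only: the offsets are computed from the DSECT field lengths, the EBCDIC code page (cp037) is an assumption (installations may use another), and the function name is invented here:

```python
EXC_RECORD_LEN = 436   # total of the DSECT field lengths above

def parse_exception(rec):
    """Extract a few fields from one MNEXCDS exception record."""
    if len(rec) < EXC_RECORD_LEN:
        raise ValueError("record shorter than the MNEXCDS layout")

    def text(off, length):
        # Character fields are EBCDIC, padded with blanks
        return rec[off:off + length].decode("cp037").rstrip("\x00 ")

    ridlen = int.from_bytes(rec[176:180], "big")      # EXCMNRIL (F)
    return {
        "tranid":  text(0, 4),                        # EXCMNTRN
        "userid":  text(8, 8),                        # EXCMNUSR
        "restype": text(64, 8),                       # EXCMNRTY
        "resid":   text(72, 8),                       # EXCMNRID
        "exctype": {1: "WAIT", 2: "BUFFER WAIT",      # EXCMNTYP
                    3: "STRING WAIT"}.get(int.from_bytes(rec[80:82], "big")),
        "resid_ext": text(180, ridlen),               # EXCMNRIX
    }
```

A record parsed this way yields, for example, restype ‘STORAGE’ and resid ‘UDSA’ for a UDSA storage wait, matching Table 8.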

Exception data field descriptions


EXCMNTRN (TYPE-C, 4 BYTES)
Transaction identification.
EXCMNTER (TYPE-C, 4 BYTES)
Terminal identification. This field is null if the task is not associated with a
terminal or session.
EXCMNUSR (TYPE-C, 8 BYTES)
User identification at task creation. This can also be the remote user identifier
for a task created as the result of receiving an ATTACH request across an MRO
or APPC link with attach-time security enabled.
EXCMNTST (TYPE-C, 4 BYTES)
Transaction start type. The high-order bytes (0 and 1) are set to:
"TO" Attached from terminal input
"S" Attached by automatic transaction initiation (ATI) without data
"SD" Attached by automatic transaction initiation (ATI) with data
"QD" Attached by transient data trigger level
"U" Attached by user request
"TP" Attached from terminal TCTTE transaction ID
"SZ" Attached by Front End Programming Interface (FEPI)
EXCMNSTA (TYPE-T, 8 BYTES)
Start time of the exception.
EXCMNSTO (TYPE-T, 8 BYTES)
Finish time of the exception.

Note: The performance class exception wait time field, EXWTTIME (103), is a
calculation based on subtracting the start time of the exception
(EXCMNSTA) from the finish time of the exception (EXCMNSTO).
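The subtraction described in the note can be sketched as follows. The 8-byte times are assumed here to be raw 64-bit TOD-clock (STCK) values, in which bit 51 corresponds to one microsecond; the names are invented for illustration:

```python
TOD_TICKS_PER_MICROSECOND = 4096   # bit 51 of the TOD clock = 1 microsecond

def exception_wait_seconds(excmnsta, excmnsto):
    """EXWTTIME as described above: exception stop time minus start time,
    treating the 8-byte fields as raw big-endian TOD-clock values."""
    start = int.from_bytes(excmnsta, "big")
    stop = int.from_bytes(excmnsto, "big")
    if stop < start:
        raise ValueError("stop time precedes start time")
    return (stop - start) / TOD_TICKS_PER_MICROSECOND / 1_000_000

# Two clock values 250,000 microseconds apart:
start = (500_000 * 4096).to_bytes(8, "big")
stop = (750_000 * 4096).to_bytes(8, "big")
print(exception_wait_seconds(start, stop))   # 0.25
```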
EXCMNTNO (TYPE-P, 4 BYTES)
Transaction identification number.
EXCMNTPR (TYPE-C, 4 BYTES)
Transaction priority when monitoring was initialized for the task (low-order
byte).
EXCMNLUN (TYPE-C, 8 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. This field is nulls if the task is not associated with a terminal.

EXCMNEXN (TYPE-A, 4 BYTES)
Exception sequence number for this task.
EXCMNRTY (TYPE-C, 8 BYTES)
Exception resource type. The possible values for EXCMNRTY are shown in
Table 8 on page 112.
EXCMNRID (TYPE-C, 8 BYTES)
Exception resource identification. The possible values for EXCMNRID are
shown in Table 8 on page 112.
EXCMNTYP (TYPE-A, 2 BYTES)
Exception type. This field can be set to one of the following values:
X'0001'
Exception due to a wait (EXCMNWT)
X'0002'
Exception due to a buffer wait (EXCMNBWT)
X'0003'
Exception due to a string wait (EXCMNSWT)
EXCMNTCN (TYPE-C, 8 BYTES)
Transaction class name. This field is null if the transaction is not in a
transaction class.
EXCMNSRV (TYPE-C, 8 BYTES)
MVS Workload Manager Service Class name for this transaction. This field is
null if the transaction was WLM-classified in another CICS region.
EXCMNRPT (TYPE-C, 8 BYTES)
MVS Workload Manager Report Class name for this transaction. This field is
null if the transaction was WLM-classified in another CICS region.
EXCMNNPX (TYPE-C, 20 BYTES)
Fully qualified name by which the originating system is known to the VTAM
network. This name is assigned at attach time using either the NETNAME
derived from the TCT (when the task is attached to a local terminal), or the
NETNAME passed as part of an ISC APPC or IRC attach header. At least three
passing bytes (X'00') are present at the right end of the name.

If the originating terminal is a VTAM device across an ISC APPC or IRC link,
the NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is networkid.generic_applid.

All originating information passed as part of an ISC LUTYPE6.1 attach header
has the same format as the non-VTAM terminal originators above.

When the originator is communicating over an external CICS interface (EXCI)
session, the name is a concatenation of:
'DFHEXCIU | . | MVS Id | Address space Id (ASID)'
8 bytes | 1 byte | 4 bytes | 4 bytes

derived from the originating system. That is, the name is a 17-byte LU name
consisting of:
v An 8-byte eye-catcher set to ’DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.

v A 4-byte field containing the address space ID (ASID) in which the client
program is running. This field contains the 4-character EBCDIC
representation of the 2-byte hex address space ID.
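The concatenation described above can be split back into its parts. The sketch below is illustrative only; the function name and the sample MVS id are invented:

```python
def parse_exci_luname(name):
    """Split a 17-byte EXCI originator name into its parts:
    'DFHEXCIU' eye-catcher, a period, the 4-character MVS id, and the
    4-character EBCDIC-hex rendering of the 2-byte address space id."""
    if len(name) != 17 or not name.startswith("DFHEXCIU."):
        raise ValueError("not an EXCI-style originating LU name")
    mvsid = name[9:13]
    asid = int(name[13:17], 16)   # convert the hex rendering back to a number
    return mvsid, asid

print(parse_exci_luname("DFHEXCIU.MVSA0041"))   # ('MVSA', 65)
```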
EXCMNNSX (TYPE-C, 8 BYTES)
Name by which the unit of work is known within the originating system. This
last name is assigned at attach time using either an STCK-derived token (when
the task is attached to a local terminal) or the unit of work ID is passed as part
of an ISC APPC or IRC attach header.

The first 6 bytes of this field are a binary value derived from the clock of the
originating system and wrapping round at intervals of several months. The last
two bytes of this field are for the period count. These may change during the
| life of the task as a result of syncpoint activity.

| Note: When using MRO or ISC, the EXCMNNSX field must be combined with
| the EXCMNNPX field to uniquely identify a task, because the
| EXCMNNSX field is unique only to the originating CICS system.
| EXCMNTRF (TYPE-C, 8 BYTES)
Transaction flags—a string of 64 bits used for signaling transaction definition
and status information:
Byte 0 Transaction facility identification
Bit 0 Transaction facility name = none
Bit 1 Transaction facility name = terminal
Bit 2 Transaction facility name = surrogate
Bit 3 Transaction facility name = destination
Bit 4 Transaction facility name = 3270 bridge
Bits 5–7
Reserved
Byte 1 Transaction identification information
Bit 0 System transaction
Bit 1 Mirror transaction
Bit 2 DPL mirror transaction
Bit 3 ONC RCP alias transaction
Bit 4 WEB alias transaction
Bit 5 3270 bridge transaction
| Bit 6 Reserved
| Bit 7 CICS BTS Run transaction
Byte 2 MVS Workload Manager information
Bit 0 Workload Manager report
Bit 1 Workload Manager notify, completion = yes
Bit 2 Workload Manager notify
Bits 3–7
Reserved
Byte 3 Transaction definition information

Bit 0 Taskdataloc = below
Bit 1 Taskdatakey = cics
Bit 2 Isolate = no
Bit 3 Dynamic = yes
Bits 4– 7
Reserved
Byte 4 Reserved
Byte 5 Reserved
Byte 6 Reserved
Byte 7 Recovery manager information
Bit 0 Indoubt wait = no
Bit 1 Indoubt action = commit
Bit 2 Recovery manager - UOW resolved with indoubt action
Bit 3 Recovery manager - shunt
Bit 4 Recovery manager - unshunt
Bit 5 Recovery manager - indoubt failure
Bit 6 Recovery manager - resource owner failure
Bit 7 Reserved

Note: Bits 2 through 6 will be reset on a SYNCPOINT request when the
MNSYNC=YES option is specified.
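The flag bytes above can be examined with simple bit masks. The sketch below decodes only byte 0 (facility type) and byte 3 (transaction definition); bit 0 is taken to be the high-order bit of each byte, and the names are invented for illustration:

```python
# Byte 0 bit names from the EXCMNTRF description above
FACILITY_BITS = ["none", "terminal", "surrogate", "destination", "3270 bridge"]

def decode_trf(excmntrf):
    """Decode selected EXCMNTRF bits; bit 0 = high-order bit (0x80)."""
    b0, b3 = excmntrf[0], excmntrf[3]
    facility = [name for i, name in enumerate(FACILITY_BITS)
                if b0 & (0x80 >> i)]
    definition = {
        "taskdataloc_below": bool(b3 & 0x80),   # byte 3, bit 0
        "taskdatakey_cics":  bool(b3 & 0x40),   # byte 3, bit 1
        "isolate_no":        bool(b3 & 0x20),   # byte 3, bit 2
        "dynamic_yes":       bool(b3 & 0x10),   # byte 3, bit 3
    }
    return facility, definition

# A terminal-attached, dynamic transaction:
facility, defn = decode_trf(bytes([0x40, 0x00, 0x00, 0x10, 0, 0, 0, 0]))
print(facility, defn["dynamic_yes"])
```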
EXCMNFCN (TYPE-C, 4 BYTES)
Transaction facility name. This field is null if the transaction is not associated
with a facility. The transaction facility type (if any) can be identified by using
byte 0 of the transaction flags field, EXCMNTRF.
EXCMNCPN (TYPE-C, 8 BYTES)
The name of the currently running program for this user task when the
exception condition occurred.
EXCMNBTR (TYPE-C, 4 BYTES)
3270 Bridge transaction identification.
| EXCMNURI (TYPE-C, 16 BYTES)
| RRMS/MVS unit-of-recovery ID (URID)
| EXCMNRIL (TYPE-A, 4 BYTES)
| Exception resource ID length.
| EXCMNRIX (TYPE-C, 256 BYTES)
| Exception resource ID (extended).

The following table shows the value and relationships of the fields EXCMNTYP,
EXCMNRTY, and EXCMNRID.

Table 8. Possible values of EXCMNTYP, EXCMNRTY, and EXCMNRID. The relationship between exception type,
resource type, and resource identification.
EXCMNTYP EXCMNRTY EXCMNRID MEANING
Exception type Resource type Resource ID
| EXCMNWT ‘CFDTLRSW’ poolname Wait for CF data tables locking request slot
| EXCMNWT ‘CFDTPOOL’ poolname Wait for CF data tables non-locking request slot
EXCMNWT ‘STORAGE’ ‘UDSA’ Wait for UDSA storage
EXCMNWT ‘STORAGE’ ‘EUDSA’ Wait for EUDSA storage
EXCMNWT ‘STORAGE’ ‘CDSA’ Wait for CDSA storage
EXCMNWT ‘STORAGE’ ‘ECDSA’ Wait for ECDSA storage
EXCMNWT ‘STORAGE’ ‘SDSA’ Wait for SDSA storage
EXCMNWT ‘STORAGE’ ‘ESDSA’ Wait for ESDSA storage
EXCMNWT ‘STORAGE’ ‘RDSA’ Wait for RDSA storage
EXCMNWT ‘STORAGE’ ‘ERDSA’ Wait for ERDSA storage
EXCMNWT ‘TEMPSTOR’ TS Qname Wait for temporary storage
EXCMNSWT ‘FILE’ filename Wait for string associated with file
EXCMNSWT ‘LSRPOOL’ filename Wait for string associated with LSRPOOL
EXCMNSWT ‘TEMPSTOR’ TS Qname Wait for string associated with DFHTEMP
EXCMNBWT ‘LSRPOOL’ LSRPOOL Wait for buffer associated with LSRPOOL
EXCMNBWT ‘TEMPSTOR’ TS Qname Wait for buffer associated with DFHTEMP

End of Product-sensitive programming interface



Chapter 7. Tivoli Performance Reporter for OS/390
Tivoli Performance Reporter for OS/390, Version 1 Release 3, previously known as
Performance Reporter for MVS, supersedes Service Level Reporter (SLR).

Tivoli Performance Reporter for OS/390 is described in the following sections:


v “Overview”
v “Using Tivoli Performance Reporter for OS/390 to report on CICS performance”
on page 115

Overview
Tivoli Performance Reporter for OS/390 is a reporting system which uses DB2. You
can use it to process utilization and throughput statistics written to log data sets by
computer systems. You can use it to analyze and store the data into DB2, and
present it in a variety of forms. Tivoli Performance Reporter consists of a base
product with several optional features that are used in systems management, as
shown in Table 9. Tivoli Performance Reporter for OS/390 uses Reporting Dialog/2
as the OS/2® reporting feature.
Table 9. Tivoli Performance Reporter for OS/390 and optional features

CICS          IMS           Network       System        Workstation   AS/400®       Reporting     Accounting
Performance   Performance   Performance   Performance   Performance   Performance   Dialog/2
----------------------------- Tivoli Performance Reporter for OS/390 Base -----------------------------

The Tivoli Performance Reporter for OS/390 base includes:


v Reporting and administration dialogs that use the Interactive System
Productivity Facility (ISPF)
v A collector function to read log data, with its own language
v Record mapping (definitions) for all data records used by the features

Each feature provides:


v Instructions (in the collector language) to transfer log data to DATABASE 2
(DB2) tables
v DB2 table definitions
v Reports.

The Tivoli Performance Reporter for OS/390 database can contain data from many
sources. For example, data from System Management Facilities (SMF), Resource
Measurement Facility (RMF), CICS, and Information Management System (IMS)
can be consolidated into a single report. In fact, you can define any non-standard
log data to Tivoli Performance Reporter for OS/390 and report on that data
together with data coming from the standard sources.

The Tivoli Performance Reporter for OS/390 CICS performance feature provides
reports for your use when analyzing the performance of CICS Transaction Server
for OS/390, and CICS/ESA, based on data from the CICS monitoring facility
(CMF) and, for CICS Transaction Server for OS/390, CICS statistics. These are
some of the areas that Tivoli Performance Reporter can report on:

© Copyright IBM Corp. 1983, 1999 113


v Response times
v Resource usage
v Processor usage
v Storage usage
v Volumes and throughput
v CICS/DB2 activity
v Exceptions and incidents
v Data from connected regions, using the unit of work as key
v SYSEVENT data
v CICS availability
v CICS resource availability

The Tivoli Performance Reporter for OS/390 CICS performance feature collects
only the data required to meet CICS users’ needs. You can combine that data with
more data (called environment data), and present it in a variety of reports. Tivoli
Performance Reporter for OS/390 provides an administration dialog for
maintaining environment data. Figure 12 illustrates how data is organized for
presentation in Tivoli Performance Reporter for OS/390 reports.

Operating system: system data is written to various logs
        |
        v
Performance Reporter CICS performance feature: Performance Reporter
collects only the relevant data into Performance Reporter records
        |
        v
Performance Reporter tables: combined with user-supplied environment
data maintained in the Performance Reporter database
        |
        v
Reports: the required data presented in report format

Figure 12. Organizing and presenting system performance data

The Tivoli Performance Reporter for OS/390 CICS performance feature processes
these records:

CMF
CICS Transaction Server performance
CICS/ESA performance
CICS/ESA exceptions
CICS/MVS accounting, performance, and exceptions
Statistics
CICS Transaction Server statistics

Using Tivoli Performance Reporter for OS/390 to report on CICS performance
To understand performance data, you must first understand the work CICS
performs at your installation. Analyze the work by its basic building blocks:
transactions. Group the transactions into categories of similar resource or user
requirements and describe each category’s characteristics. Understand the work
that CICS performs for each transaction and the volume of transactions expected
during any given period. Tivoli Performance Reporter for OS/390 can show you
various types of data for the transactions processed by CICS.

A service-level agreement for a CICS user group defines commitments in several
areas of quantifiable CICS-related resources and services. CICS service
commitments can belong to one of these areas:
v Response times
v Transaction rates
v Exceptions and incidents
v Availability.

The following sections describe certain issues and concerns associated with
systems management and how you can use the Tivoli Performance Reporter for
OS/390 CICS performance feature.

Monitoring response time


Use the Tivoli Performance Reporter for OS/390 CICS response-time reports to see
the CICS application internal response times, whose elements are shown in
Figure 13.

START |----------------Response time----------------| FINISH
      |--Suspend time--|-------Dispatch time--------|
                          |-----Service time-----|

Figure 13. CICS internal response-time elements
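Figure 13 implies that response time is suspend time plus dispatch time, with service time contained within dispatch time. A minimal arithmetic sketch (the function name is invented, and times are assumed to be in seconds):

```python
def split_response_time(suspend, dispatch, service):
    """Decompose a response time per Figure 13: response = suspend +
    dispatch, and service time is a part of dispatch time."""
    if service > dispatch:
        raise ValueError("service time cannot exceed dispatch time")
    return {
        "response": suspend + dispatch,
        "dispatch_other": dispatch - service,   # dispatch outside service
    }

# 0.12s suspended, 0.38s dispatched of which 0.30s was service time:
print(split_response_time(0.12, 0.38, 0.30))
```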

As described in Performance Reporter Network Performance Feature Reports, the
Network Performance feature generates reports that show the total, end-to-end
average response time (operator transit time) for VTAM applications (for example,
a CICS region) by logical unit. The operator transit time consists of the host transit
time and the network transit time, which are also shown in the Network
Performance feature reports. Using these reports, you can isolate a response-time
problem either to the network or to CICS and act on it accordingly. Should the

problem be in CICS, you can use the Tivoli Performance Reporter for OS/390 CICS
performance feature reports to identify the application causing the response-time
degradation.

Monitoring processor and storage use


Poor response time usually indicates inefficient use of either the processor or
storage (or both). Tivoli Performance Reporter-supplied reports can help you
isolate a resource as the cause of a CICS performance problem.

If both the Tivoli Performance Reporter for OS/390 CICS performance feature’s
statistics component and the Performance Reporter System Performance feature’s
MVS component are installed and active, these reports are available for analyzing
transaction rates and processor use by CICS region:
v The CICS Transaction Processor Utilization, Monthly report shows monthly
averages for the dates you specify.
v The CICS Transaction Processor Utilization, Daily report shows daily averages
for the dates you specify.

Tivoli Performance Reporter for OS/390 produces several reports that can help
analyze storage usage. For example, the CICS Dynamic Storage (DSA) Usage
report shows pagepool usage.

CICS Dynamic Storage (DSA) Usage


MVS ID ='IPO2' CICS ID ='CSRT5'
Date: '1998-09-21' to '1998-09-22'

                               Free      Free    Largest
Pagepool      DSA     Cushion  storage   storage  free
name        (bytes)   (bytes)  (bytes)   (pct)    area      Getmains Freemains
-------- --------- -------- --------- ------- --------- -------- ---------
CDSA 1048576 65536 802816 76 765952 3695 3620
ECDSA 8388608 262144 7667712 91 7667712 8946 7252
ERDSA 3145728 262144 1302528 41 1290240 204 3
EUDSA 8388608 262144 8388608 100 8388608 1 1
UDSA 4194304 65536 4186112 99 4182016 6 4

Tivoli Performance Reporter Report: CICS809

Figure 14. CICS Dynamic storage (DSA) usage report
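The free-storage percentage column in this report can be derived from the DSA size and free-storage byte counts. The following sketch reproduces that arithmetic from the sample values above; truncation to a whole percentage is an assumption made here because it matches the figures shown.

```python
# Derive the "Free storage (pct)" column of the DSA usage report from the
# DSA size and free-storage byte counts in Figure 14. Truncation to a whole
# percentage is assumed because it matches the sample report values.
rows = {
    # pagepool: (DSA bytes, free storage bytes)
    "CDSA":  (1048576, 802816),
    "ECDSA": (8388608, 7667712),
    "ERDSA": (3145728, 1302528),
    "EUDSA": (8388608, 8388608),
    "UDSA":  (4194304, 4186112),
}

def free_pct(dsa_bytes, free_bytes):
    return free_bytes * 100 // dsa_bytes   # integer (truncated) percentage

for name, (dsa, free) in rows.items():
    print(name, free_pct(dsa, free))   # matches the pct column: 76 91 41 100 99
```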

Monitoring volumes and throughput


Because CICS Transaction Server for OS/390 uses an MVS subtask to page and
because an MVS page-in causes an MVS task to halt execution, the number of
page-ins is a performance concern. Page-outs are not a concern because page-outs
are scheduled to occur during lulls in CICS processing. If you suspect that a
performance problem is related to excessive paging, you can use Tivoli
Performance Reporter for OS/390 to report on page-ins, using RMF data.

The best indicator of a transaction’s performance is its response time. For each
transaction ID, the CICS transaction performance detail report (in Figure 15 on
page 117) shows the total transaction count and the average response time.

116 CICS TS for OS/390: CICS Performance Guide


CICS Transaction Performance, Detail
MVS ID ='IPO2' CICS ID ='CFGTV1 '
Date: '1998-09-19' to '1998-09-20'

                   Avg     Avg   Prog                       Program
                   resp    CPU   load   Prog   FC           storage  Getmains Getmains
Tran      Tran     time    time  reqs   loads  calls Excep- bytes    < 16 MB  > 16 MB
ID        count    (sec)   (sec) (avg)  (avg)  (avg) tions  (max)    (avg)    (avg)
------------ -------- ------- ---- ----- ----- ------ --------- -------- --------
QUIT 7916 0.085 0.017 0 0 18 0 74344 22 0
CRTE 1760 4.847 0.004 0 0 0 0 210176 1 0
AP00 1750 0.184 0.036 0 0 8 0 309800 66 0
PM94 1369 0.086 0.012 0 0 6 0 130096 24 0
VCS1 737 0.073 0.008 2 0 7 0 81200 14 0
PM80 666 1.053 0.155 1 0 62 0 104568 583 0
CESN 618 8.800 0.001 0 0 0 0 41608 0 0
SU01 487 0.441 0.062 4 0 126 0 177536 38 0
...
GC11 1 0.341 0.014 1 0 2 0 37048 10 0
DM08 1 0.028 0.002 0 0 0 0 5040 3 0
======== =========
20359 309800

Tivoli Performance Reporter Report: CICS101

Figure 15. CICS transaction performance, detail report

Use this report to start verifying that you are meeting service-level objectives. First,
verify that the values for average response time are acceptable. Then check that the
transaction rates do not exceed agreed-to limits. If a transaction is not receiving the
appropriate level of service, you must determine the cause of the delay.
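Verification of this kind can be automated against the report data. The following is a minimal sketch; the objective values are invented for illustration and would in practice come from your service-level agreement.

```python
# Check report rows against service-level objectives. The objective values
# here are invented for illustration; take real ones from your SLA.
objectives = {"QUIT": 0.100, "AP00": 0.250, "CRTE": 5.000}   # max avg resp (s)

# (transaction ID, transaction count, average response time) from the report
report = [("QUIT", 7916, 0.085), ("CRTE", 1760, 4.847), ("AP00", 1750, 0.184)]

def violations(report_rows, slo):
    """Return transaction IDs whose average response time exceeds the SLO."""
    return [tran for tran, count, avg_resp in report_rows
            if tran in slo and avg_resp > slo[tran]]

print(violations(report, objectives))  # [] - all three meet their objective
```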

Combining CICS and DB2 performance data


For each CICS task, CICS generates an LU6.2 unit-of-work ID. DB2 also creates an
LU6.2 unit-of-work ID. Figure 16 shows how DB2 data can be correlated with CICS
performance data using the DB2 token (QWHCTOKN) to identify the task.

DB2 accounting record

QWHCTOKN

CICS performance-monitoring record

TRAN USERID NETNAME UOWID TCIOWT

Figure 16. Correlating a CICS performance-monitoring record with a DB2 accounting record

If you match the NETNAME and UOWID fields in a CICS record to the DB2
token, you can create reports that show the DB2 activity caused by a CICS
transaction.
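The matching just described amounts to a simple join. The record layouts below are invented for the example; in the real data, the DB2 correlation token (QWHCTOKN) carries the network name and unit-of-work ID that CICS generated.

```python
# Sketch of correlating CICS performance records with DB2 accounting records.
# Record layouts are invented for this example; in reality the DB2 token
# (QWHCTOKN) contains the LU6.2 netname and unit-of-work ID from CICS.
cics_records = [
    {"tran": "AP00", "netname": "NETA", "uowid": "UOW1"},
    {"tran": "PM94", "netname": "NETB", "uowid": "UOW2"},
]
db2_records = [
    {"token": ("NETA", "UOW1"), "cpu": 0.012},
    {"token": ("NETB", "UOW2"), "cpu": 0.007},
]

def correlate(cics, db2):
    """Match each CICS record to DB2 activity via (netname, uowid)."""
    by_token = {rec["token"]: rec for rec in db2}
    return {c["tran"]: by_token.get((c["netname"], c["uowid"])) for c in cics}

matched = correlate(cics_records, db2_records)
print(matched["AP00"]["cpu"])  # 0.012
```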



Monitoring exception and incident data
An exception is an event that you should monitor. An exception appears in a report
only if it has occurred; reports do not show null counts. A single exception need
not be a cause for alarm. An incident is defined as an exception with severity 1, 2,
or 3.
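That distinction can be expressed directly. A minimal sketch:

```python
# An incident is an exception with severity 1, 2, or 3; anything less severe
# remains an exception only. The sample data is invented for illustration.
def is_incident(severity):
    return severity in (1, 2, 3)

exceptions = [("TRANSACTION_ABEND", 3), ("WAIT_FOR_STORAGE", 5)]
incidents = [name for name, sev in exceptions if is_incident(sev)]
print(incidents)  # ['TRANSACTION_ABEND']
```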

The Tivoli Performance Reporter for OS/390 CICS performance feature creates
exception records for these incidents and exceptions:
v Wait for storage
v Wait for main temporary storage
v Wait for a file string
v Wait for a file buffer
v Wait for an auxiliary temporary storage string
v Wait for an auxiliary temporary storage buffer
v Transaction ABEND
v System ABEND
v Storage violations
v Short-of-storage conditions
v VTAM request rejections
v I/O errors on auxiliary temporary storage
v I/O errors on the intrapartition transient data set
v Autoinstall errors
v MXT reached
v DTB overflow
v Link errors for IRC and ISC
v Log stream buffer-full conditions
v CREAD and CWRITE fails (data space problems)
v Local shared resource (LSR) pool string waits (from A08BKTSW)
v Waits for a buffer in the LSR pool (from A09TBW)
v Errors writing to SMF
v No space on transient-data data set (from A11ANOSP)
v Waits for a transient-data string (from A11STNWT)
v Waits for a transient-data buffer (from A11ATNWT)
v Transaction restarts (from A02ATRCT)
v Maximum number of tasks in a class reached (CMXT) (from A15MXTM)
v Transmission errors (from A06TETE or AUSTETE).

Figure 17 shows an example of an incidents report.

CICS Incidents
DATE: '1998-09-20' to '1998-09-21'

                            Terminal
                            operator User     Exception          Exception
Sev Date       Time         ID       ID       ID                 description
--- ---------- -------- -------- -------- ------------------ ---------------------------
03 1995-09-20 15.42.03 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND AZTS
03 1995-09-21 00.00.00 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND APCT
03 1995-09-21 17.37.28 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL
03 1995-09-21 17.12.03 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL

Tivoli Performance Reporter report: CICS002

Figure 17. Example of a Tivoli Performance Reporter CICS incidents report

Tivoli Performance Reporter for OS/390 can pass the exceptions to an
Information/Management system.



Unit-of-work reporting
In a CICS multiple region operation (MRO) or intersystem communication (ISC)
environment, you can trace a transaction as it migrates from one region (or
processor complex) to another and back. The data lets you determine the total
resource requirements of the combined transaction as a unit of work, without
having to separately analyze the component transactions in each region. The ability
to combine the component transactions of an MRO or ISC series makes possible
precise resource accounting and chargeback, and capacity and performance
analysis.

The CICS UOW Response Times report in Figure 18 shows an example of how
Tivoli Performance Reporter for OS/390 presents CICS unit-of-work response
times.

CICS UOW Response Times


Time: '09.59.00' to '10.00.00'
Date: 1998 09-20

Adjusted
UOW UOW Response
start Tran CICS Program tran time
time ID ID name count (sec)
-------- ---- -------- -------- ----- --------
09.59.25 OP22 CICSPROD DFHAPRT      2    0.436
         OP22 CICSPRDC OEPCPI22

09.59.26 AP63 CICSPRDE APPM00       2    0.045
         AP63 CICSPROD DFHAPRT

09.59.26 ARUS CICSPROD DFHAPRT      3    0.158
         CSM5 CICSPRDB DFHMIR
         ARUS CICSPRDC AR49000

09.59.27 CSM5 CICSPRDB DFHMIR       4    0.639
         CSM5 CICSPRDB DFHMIR
         MQ01 CICSPROD DFHAPRT
         MQ01 CICSPRDD CMQ001

...
Tivoli Performance Reporter report: CICS902

Figure 18. Tivoli Performance Reporter for OS/390 CICS UOW response times report
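Combining the component transactions of a unit of work, as this report does, amounts to grouping records by unit-of-work ID and accumulating their resource use. The following sketch uses invented record layouts to show the idea.

```python
# Group MRO/ISC component transactions by unit-of-work ID so that combined
# resource use can be reported as one unit of work. Record layouts and
# values are invented for this example.
from collections import defaultdict

components = [
    ("UOW1", "OP22", "CICSPROD", 0.2),   # (uowid, tran, region, cpu seconds)
    ("UOW1", "OP22", "CICSPRDC", 0.1),
    ("UOW2", "CSM5", "CICSPRDB", 0.3),
]

def combine(records):
    totals = defaultdict(lambda: {"count": 0, "cpu": 0.0})
    for uowid, tran, region, cpu in records:
        totals[uowid]["count"] += 1
        totals[uowid]["cpu"] += cpu
    return dict(totals)

result = combine(components)
print(result["UOW1"]["count"], round(result["UOW1"]["cpu"], 1))  # 2 0.3
```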

Monitoring availability
Users of CICS applications depend on the availability of several types of resources:
v Central site hardware and the operating system environment in which the CICS
region runs
v Network hardware, such as communication controllers, teleprocessing lines, and
terminals through which users access the CICS region
v CICS region
v Application programs and data. Application programs can be distributed among
several CICS regions.
In some cases, an application depends on the availability of many resources of the
same and of different types, so reporting on availability requires a complex
analysis of data from different sources. Tivoli Performance Reporter for OS/390
can help you, because all the data is in one database.

Monitoring SYSEVENT data


If the SYSEVENT option is used, CICS records at the end of each transaction:
v Transaction ID
v Associated terminal ID
v Elapsed time



This is useful when you require only transaction statistics, rather than the detailed
information that CMF produces. In many cases, it may be sufficient to process only
this data, since RMF records it as part of its SMF type-72 record. Analysis (and
even recording) of SMF records from CMF can then be reserved for those
circumstances when the detailed data is needed. Use the Tivoli Performance
Reporter System Performance feature (MVS performance component) to report on
this data.

When running under goal mode in MVS 5.1.0 and later, CICS performance can be
reported in workload groups, service classes, and periods. These are a few
examples of Tivoli Performance Reporter reports for CICS in this environment.
Figure 20 shows how service classes were served by other service classes. This
report is available only when the MVS system is running in goal mode.

[Chart: hourly trend of average response time in seconds (0.00 to 2.50) against
time of day (8.00 to 18.00), broken down by transaction state: Active, Ready,
Idle, Lock wait, I/O wait, Conv wait, Distr wait, Syspl wait, Timer wait,
Other wait, and Misc wait.]

Figure 19. Example of an MVSPM response time breakdown, hourly trend report

MVSPM Served Service Classes, Overview


Sysplex: 'SYSPLEX1' System: MVS_SYSTEM_ID
Date: '1998-09-22' Period: 'PRIME'

Workload Service  Served   No of times  No of        No of times
group    class    class    served       tx's         served per tx
-------- -------- -------- ------------ ------------ --------------
CICS CICSREGS CICS-1 15227 664 22.9
CICS-2 6405 215 29.8
CICS-3 24992 1251 20.0
CICS-4 87155 1501 58.1
CICSTRX 67769 9314 7.3

Tivoli Performance Reporter report: MVSPM79

Figure 20. Example of an MVSPM served service classes overview report



MVSPM Response Time Breakdown, Overview
Sysplex: 'SYSPLEX1' Subsystem: SUBSYSTEM
Date: '1995-09-22' Period: 'PRIME'

Service MVS Total Activ Ready Idle Lock I/O Conv Distr Local Netw Syspl Timer Other Misc
Workload class sysstate state state state wait wait wait wait wait wait wait wait wait wait
group /Period Ph ID (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%)
-------- ---------- --- --------- ----- ----- ----- ----- ----- -- --- ----- ----- ----- ----- ----- ----- -----
CICS CICS-1 /1 BTE CA0 6.6 0.0 0.0 0.0 0.0 0.0 6.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 29.4 0.0 0.0 0.0 0.0 0.0 14.7 0.0 0.0 0.0 0.0 0.0 14.6 0.0
C90 3.8 0.4 1.3 1.5 0.0 0.2 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 13.3 0.1 0.5 0.5 0.0 0.1 7.2 0.0 0.0 0.0 0.0 0.0 4.9 0.0

/1 EXE CA0 16.0 0.1 0.2 0.1 0.0 15.5 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0
C80 14.9 0.1 0.1 0.1 0.0 3.7 0.0 0.0 0.0 0.0 0.0 0.0 11.0 0.0
C90 14.0 1.6 4.5 4.8 0.0 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 14.9 0.6 1.6 1.7 0.0 7.4 0.0 0.0 0.0 0.0 0.0 0.0 3.7 0.0

IMS IMS-1 /1 EXE CA0 20.7 0.4 0.7 0.0 0.0 0.0 19.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 1.1 0.2 0.1 0.7 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C90 22.2 5.3 11.9 1.2 0.0 0.2 3.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ---- ----- ----- ----- ----- ----- ----- -----
* 14.7 2.0 4.2 0.6 0.0 0.1 7.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Tivoli Performance Reporter report: MVSPM73

Figure 21. Example of an MVSPM response time breakdown overview report

Figure 21 shows how much the various transaction states contribute to the average
response time. This report is available when the MVS system is running in goal
mode and when the subsystem is CICS or IMS.

Figure 19 on page 120 shows the average transaction response time trend and how
the various transaction states contribute to it. (The sum of the different states adds
up to the average execution time. The difference between the response time and
the execution time is mainly made up of switch time, for example, the time the
transactions spend being routed to another region for processing). This report is
available when the MVS system is running in goal mode and when the subsystem
is CICS or IMS.
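The relationship described above can be sketched numerically. All values below are invented for illustration: execution time is the sum of the state times, and switch time is approximately the difference between response time and execution time.

```python
# Sketch of the relationship between response time, execution time, and
# switch time described above. All values are invented for illustration.
state_times = {                 # seconds spent in each transaction state
    "active": 0.10, "ready": 0.05, "idle": 0.02, "io_wait": 0.08,
}
observed_response_time = 0.40   # average response time from the report

execution_time = sum(state_times.values())              # sum of the states
switch_time = observed_response_time - execution_time   # mainly routing time
print(round(execution_time, 2), round(switch_time, 2))  # 0.25 0.15
```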



Chapter 8. Managing Workloads
Workload management in a sysplex is provided by:
v MVS workload manager: see “MVS workload manager”
v CICSPlex SM workload management: see “CICSPlex SM workload
management” on page 133

MVS workload manager


MVS workload manager is particularly significant in a sysplex environment, but it
is also of value to subsystems running in a single MVS image.

| CICS provides dynamic routing and distributed routing user-replaceable programs
| to handle the routing of the work requests to the selected target region. For
| details, see the CICS Customization Guide and CICS Intercommunication Guide.

To help you migrate to goal-oriented workload management, you can run any
MVS image in a sysplex in compatibility mode, using the performance management
tuning methods of releases of MVS before MVS/ESA 5.1.
Notes:
1. If you do not want to use the MVS workload management facility, you should
review your MVS performance definitions to ensure that they are still
appropriate for CICS Transaction Server for OS/390 Release 3. To do this,
review parameters in the IEAICS and IEAIPS members of the MVS PARMLIB
library. For more information about these MVS performance definitions, see the
OS/390 MVS Initialization and Tuning Guide.
| 2. If you use CICSPlex SM to control dynamic routing in a CICSplex or BTS-plex,
| you can base its actions on the CICS response time goals of the CICS
transactions as defined to the MVS workload manager. See “Using
CICSPlex SM workload management” on page 134. For full details, see the
CICSPlex SM Managing Workloads manual.

Benefits of using MVS Workload Manager


The benefits of using MVS workload manager are:
v Improved performance through MVS resource management
The improvement is likely to depend on many factors, for example:
– System hardware configuration
– The way the system is partitioned
– Whether CICS subsystems are single or multi-region
– The spread of types of applications or tasks performed, and the diversity of
their profile of operation
– The extent to which the sysplex workload changes dynamically.
v Improved efficiency of typical MVS sysplexes
– Improved overall capacity
– Increased work throughput.
v Simplified MVS tuning

© Copyright IBM Corp. 1983, 1999 123


Generally, the greater benefit goes to systems whose operating signature makes
optimal tuning difficult or time consuming to attain or maintain by current
means.

The main benefit is that you no longer have to continually monitor and tune CICS
to achieve optimum performance. You can set your workload objectives in the
service definition and let the workload component of MVS manage the resources
and the workload to achieve your objectives.

The MVS workload manager produces performance reports that you can use to
establish reasonable performance goals and for capacity planning.

MVS workload management terms


The following terms are used in the description of MVS workload management:
classification rules
The rules workload management and subsystems use to assign a service
class and, optionally, a reporting class to a work request (transaction). A
classification rule consists of one or more work qualifiers. See “Defining
classification rules” on page 129.
compatibility mode
A workload management mode for an MVS image in a sysplex using the
pre-workload management MVS performance tuning definitions from the
IEAICSxx and IEAIPSxx members of the SYS1.PARMLIB library.
goal mode
A workload management mode for an MVS image in a sysplex using an
MVS workload management service definition to automatically and
dynamically balance its system resources according to the active service
policy for the sysplex.
report class
Work for which reporting information is collected separately. For example,
you can have a report class for information combining two different service
classes, or a report class for information on a single transaction.
service class
A subset of a workload having the same service goals or performance
objectives, resource requirements, or availability requirements. For
workload management, you assign a service goal to a service class. See
“Defining service classes” on page 128.
service definition
An explicit definition of all the workloads and processing capacity in a
sysplex. A service definition includes service policies, workloads, service
classes, resource groups, and classification rules. See “Setting up service
definitions” on page 127.
service policy
A set of performance goals for all MVS images using MVS workload
manager in a sysplex. There can be only one active service policy for a
sysplex, and all subsystems in goal mode within that sysplex process
towards that policy. However, you can create several service policies, and
switch between them to cater for the different needs of different processing
periods.



workload
Work to be tracked, managed and reported as a unit. Also, a group of
service classes.
workload management mode
The mode in which workload management manages system resources in
an MVS image within a sysplex. The mode can be either compatibility
mode or goal mode.

Requirements for MVS workload management


To use MVS workload management you need the following software:
v MVS/ESA System Product (MVS/ESA SP) - JES2 Version 5 Release 1 or a later,
upward-compatible, release
v MVS/ESA System Product (MVS/ESA SP) - JES3 Version 5 Release 1 or a later,
upward-compatible, release

For MVS workload manager operation across the CICS task-related user exit
interface to other subsystems, such as DB2 and DBCTL, you need the appropriate
releases of these products.

For more information about requirements for MVS workload management see the
following manuals: MVS Planning: Workload Management, and MVS Planning:
Sysplex Manager.

Resource usage
The CICS function for MVS workload management incurs negligible impact on
CICS storage.

Span of workload manager operation


MVS workload manager operates across a sysplex. You can run each MVS image in
the sysplex in either goal mode or compatibility mode. However, there can be only
one active service policy for all MVS images running in goal mode in a sysplex.

All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload manager are subject to the effects of workload management.

If the CICS workload involves non-CICS resource managers, such as DB2 and
DBCTL, CICS can pass information through the resource manager interface (RMI¹)
to enable MVS workload manager to relate the part of the workload within the
non-CICS resource managers to the part of the workload within CICS.

CICS does not pass information across ISC links to relate the parts of the task
execution thread on either side of the ISC link. If you use tasks that communicate
across ISC links, you must define separate performance goals, and service classes,
for the parts of the task execution thread on each side of the ISC link. These rules
apply to ISC links that are:
v Within the same MVS image (so called “intrahost ISC”)
v Between MVS images in the same sysplex (perhaps for compatibility reasons)
v Between MVS images in different sysplexes.
If you use tasks that communicate across ISC links between two sysplexes, the
separate performance goals are defined in the active service policy for each
sysplex.

1. The CICS interface modules that handle the communication between a task-related user exit and the resource manager are usually
referred to as the resource manager interface (RMI) or the task-related user exit (TRUE) interface.

Chapter 8. Managing Workloads 125

Defining performance goals


You can define performance goals, such as internal response times, for CICS (and
other MVS subsystems that comprise your workload). As an alternative to defining
your own goals, you can use “discretionary goals”—the workload manager decides
how best to run work for which this type of goal is specified. You can define goals
for:
v Individual CICS regions
v Groups of transactions running under CICS
v Individual transactions running under CICS
v Transactions associated with individual userids
v Transactions associated with individual LU names.

Workload management also collects performance and delay data, which can be
used by reporting and monitoring products, such as the Resource Measurement
Facility (RMF), the TIVOLI Performance Reporter for OS/390, or vendor products.

The service level administrator defines your installation’s performance goals, and
monitoring data, based on business needs and current performance. The complete
definition of workloads and performance goals is called a service definition. You
may already have this kind of information in a service level agreement (SLA).

Determining CICS response times before defining goals


Before you set goals for CICS work, you can determine CICS current response
times by running CICS in compatibility mode with an arbitrary goal. For this
purpose, use the SRVCLASS parameter, provided by MVS 5.1 in the installation
control specification (ICS). This parameter lets you associate a service class with a
report performance group, to be run in compatibility mode. You would then:
1. Define a service policy, with a default service class, or classes, for your CICS
work, and specify an arbitrary response time goal (say 3 seconds)
2. Define classification rules for the service class or classes (see “Defining
classification rules” on page 129)
3. Install the service definition
4. Activate the service policy in compatibility mode.
The average response time for work within the service classes is reported under
the report performance group in the RMF Monitor I workload activity report.

This information helps you to set realistic goals for running your CICS work when
you switch to goal mode. The reporting data produced by RMF reports:
v Is organized by service class
v Contains reasons for any delays that affect the response time for the service class
(for example, because of the actions of a resource manager or an I/O
subsystem).

From the reported information, you may be able to determine configuration
changes to improve performance.



Example of using SRVCLASS parameter of IEAICSxx
To obtain CICS response time information while in compatibility mode, you can set
up the following:
v In your service definition, set up the following:
– A test policy, comprising the following:
Service Policy Name . . . : CICSTEST
Description . . . . . . . : Migration (compatibility) mode
– A workload definition, in which to define the required service class:
Workload Name . . . . . . . : CICSALL
Description . . . . . . . . . CICSTEST migration workload
– A service class for all CICS transactions:
Service Class Name . . . . . : CICSALL
Description . . . . . . . . . All CICS transactions
Workload Name . . . . . . . . CICSALL
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
__ 1 1 Average response time of 00:00:03.000

Note: It does not matter what goal you specify, since it is not used in
compatibility mode, but it cannot be discretionary.
– Specify the name of the service class under the classification rules for the
CICS subsystem:
Subsystem Type . . . . . . : CICS
Default Service Class . . : CICSALL
v In your ICS member in SYS1.PARMLIB (IEAICSxx), specify:
SUBSYS=CICS,
SRVCLASS=CICSALL,RPGN=100
v Install the workload definition in the coupling facility.
v Activate the test service policy, either by using options provided by the WLM
ISPF application, or by issuing the following MVS command:
VARY WLM,POLICY=CICSTEST

You receive response time information about CICS transactions in the RMF
Monitor I Workload Activity Report under report performance group 100. For more
information about defining performance goals and the use of SRVCLASS, see the
MVS Planning: Workload Management manual.

Setting up service definitions


You define one service definition for each sysplex. A service definition consists of:
Service policies
See “Defining service policies” on page 128
Workloads
See “Defining workloads” on page 128
Service classes
See “Defining service classes” on page 128
Classification rules
See “Defining classification rules” on page 129



You should record the details of your planned service definition on worksheets, as
described in the MVS Planning: Workload Management manual. MVS 5.1 provides an
ISPF panel-based application for setting up and adjusting the service definition.

Defining service policies


You can have one or more service policies, which are a named set of performance
goals meant to cover a certain operating period.

If you have varying performance goals, you can define several service policies.

You can activate only one service policy at a time for the whole sysplex, and, when
appropriate, switch to another policy.

Defining workloads
A workload comprises units of work that share some common characteristics that
makes it meaningful for an installation to manage or monitor as a group. For
example, all CICS work, or all CICS order entry work, or all CICS development
work.

A workload is made up of one or more service classes.

Defining service classes


Service classes are categories of work, within a workload, to which you can assign
performance goals. You can create service classes for groups of work with similar:
v Performance goals
You can assign the following performance goals to the service classes:
Response time
You can define an average response time (the amount of time required
to complete the work) or a response time with percentile (a percentage
of work to be completed in the specified amount of time).
Discretionary
You can specify that the goal is discretionary for any work for which
you do not have specific goals.
Velocity
For work not related to transactions, such as batch jobs and started
tasks. For CICS regions started as started tasks, a velocity goal applies
only during start-up.
Notes:
1. For service classes for CICS transactions, you cannot define velocity
performance goals, discretionary goals, or multiple performance periods.
2. For service classes for CICS regions, you cannot define multiple performance
periods.
v Business importance to the installation
You can assign an importance to a service class, so that one service class goal is
recognized as more important than other service class goals. There are five levels
of importance, numbered, from highest to lowest, 1 to 5.
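The two forms of response-time goal differ in how attainment is measured. The following sketch illustrates the difference; the sample response times and goal values are invented for the example.

```python
# Sketch of how the two response-time goal forms are evaluated.
# Sample response times (seconds) and goals are invented for illustration.
times = [0.2, 0.3, 0.4, 0.5, 2.0]

# Average response-time goal: the mean completion time meets the target.
def meets_average_goal(samples, target):
    return sum(samples) / len(samples) <= target

# Percentile goal: at least the given percentage completes within the target.
def meets_percentile_goal(samples, target, percentile):
    within = sum(1 for t in samples if t <= target)
    return 100.0 * within / len(samples) >= percentile

print(meets_average_goal(times, 1.0))         # True  (mean is 0.68 seconds)
print(meets_percentile_goal(times, 0.5, 80))  # True  (4 of 5 within 0.5s)
```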

You can also create service classes for started tasks and JES, and can assign
resource groups to those service classes. You can use such service classes to
manage the workload associated with CICS as it starts up, but before CICS



transaction-related work begins. (Note that when you define CICS in this way, the
address space name is specified as TN, for the task or JES “transaction” name.)

There is a default service class, called SYSOTHER. It is used for CICS transactions
for which MVS workload management cannot find a matching service class in the
classification rules—for example, if the couple data set becomes unavailable.

Defining classification rules


Classification rules determine how to associate incoming work with a service class.
Optionally, the classification rules can assign incoming work to a report class, for
grouping report data.

There is one set of classification rules for each service definition. The classification
rules apply to every service policy in the service definition; so there is one set of
rules for the sysplex.

You should use classification rules for every service class defined in your service
definition.

Classification rules categorize work into service classes and, optionally, report
classes, based on work qualifiers. You set up classification rules for each MVS
subsystem type that uses workload management. The work qualifiers that CICS
can use (and which identify CICS work requests to workload manager) are:
LU LU name
LUG LU name group
SI Subsystem instance (VTAM applid)
SIG Subsystem instance group
TN Transaction identifier
TNG Transaction identifier group
UI Userid
UIG Userid group.
Notes:
1. You should consider defining workloads for terminal-owning regions only.
Work requests do not normally originate in an application-owning region. They
(transactions) are normally routed to an application-owning region from a
terminal-owning region, and the work request is classified in the
terminal-owning region. In this case, the work is not reclassified in the
application-owning region.
If work originates in the application-owning region, it is classified in the
application-owning region; normally there would be no terminal.
2. You can use identifier group qualifiers to specify the name of a group of
qualifiers; for example, GRPACICS could specify a group of CICS tranids,
which you could specify on classification rules by TNG GRPACICS. This is a
useful alternative to specifying classification rules for each transaction
separately.

You can use classification groups to group disparate work under the same work
qualifier—if, for example, you want to assign it to the same service class.

You can set up a hierarchy of classification rules. When CICS receives a
transaction, workload manager searches the classification rules for a matching
qualifier and its service class or report class. Because a piece of work can have
more than one work qualifier associated with it, it may match more than one



classification rule. Therefore, the order in which you specify the classification rules
determines which service classes are assigned.

Note: You are recommended to keep classification rules simple.

Example of using classification rules: As an example, you might want all CICS
work to go into service class CICSB except for the following:
v All work from LU name S218, except the PAYR transaction, is to run in service
class CICSA
v Work for the PAYR transaction (payroll application) entered at LU name S218 is
to run in service class CICSC.
v All work from terminals other than LU name S218, and whose LU name begins
with S2, is to run in service class CICSD.
You could specify this by the following classification rules:
Subsystem Type . . . . . . . CICS

-------Qualifier----------- -------Class--------
Type Name Start Service Report
DEFAULTS: CICSB ________
1 LU S218 CICSA ________
2 TN PAYR CICSC ________
1 LU S2* CICSD ________

Note: In this classification, the PAYR transaction is nested as a sub-rule under the
classification rule for LU name S218, indicated by the number 2, and the
indentation of the type and name columns.

Consider the effect of these rules on the following work requests:


Request 1 Request 2 Request 3 Request 4

LU name ...... S218 A001 S218 S214


Transaction .. PAYR PAYR DEBT ANOT

v For request 1, the work request for the payroll application runs in service class
CICSC. This is because the request is associated with the terminal with LU name
S218, and the TN—PAYR classification rule specifying service class CICSC is
nested under the LU—S218 classification rule qualifier.
v For request 2, the work request for the payroll application runs in service class
CICSB, because it is not associated with LU name S218, nor S2*, and there are
no other classification rules for the PAYR transaction. Likewise, any work
requests associated with LU names that do not start with S2 run in service class
CICSB, as there are classification rules for LU names S218 and S2* only.
v For request 3, the work request for the DEBT transaction runs in service class
CICSA, because it is associated with LU name S218, and there is no DEBT
classification rule nested under the LU—S218 classification rule qualifier.
v For request 4, the work request for the ANOT transaction runs in service class
CICSD, because it is associated with an LU name starting S2, but not S218.
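The matching behavior shown in these four requests can be sketched as a small routine. This is an illustrative simplification only, not the actual MVS workload manager implementation; the rule table is the one from the example above.

```python
# Simplified sketch of WLM classification-rule matching (illustrative only;
# not the actual MVS workload manager implementation).
from fnmatch import fnmatchcase

# Each rule: (qualifier type, pattern, service class, nested sub-rules).
RULES = [
    ("LU", "S218", "CICSA", [("TN", "PAYR", "CICSC", [])]),
    ("LU", "S2*",  "CICSD", []),
]
DEFAULT = "CICSB"

def classify(work, rules=RULES, default=DEFAULT):
    """Return the service class for a work request such as
    {"LU": "S218", "TN": "PAYR"}. Rules are scanned in order; the first
    level-1 match is taken, then its sub-rules are searched for a more
    specific (nested) match, falling back to the matched rule's class."""
    for qtype, pattern, svc, subrules in rules:
        if fnmatchcase(work.get(qtype, ""), pattern):
            # A nested sub-rule can refine the service class.
            return classify(work, subrules, default=svc)
    return default

# The four example requests:
print(classify({"LU": "S218", "TN": "PAYR"}))  # CICSC
print(classify({"LU": "A001", "TN": "PAYR"}))  # CICSB
print(classify({"LU": "S218", "TN": "DEBT"}))  # CICSA
print(classify({"LU": "S214", "TN": "ANOT"}))  # CICSD
```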

However, if the classification rules were specified as:


1 TN PAYR CICSA ________
1 LU S218 CICSA ________
2 TN PAYR CICSC ________
1 LU S2* CICSD ________

130 CICS TS for OS/390: CICS Performance Guide


the PAYR transaction would always run in service class CICSA, even if it were
associated with LU name S218.

Guidelines for classifying CICS transactions


For RMF to provide meaningful Workload Activity Report data it is suggested that
you use the following guidelines when defining the service classes for CICS
transactions. In the same service class:
1. Do not mix CICS-supplied transactions with user transactions
2. Do not mix routed with non-routed transactions
3. Do not mix conversational with pseudo-conversational transactions
4. Do not mix long-running and short-running transactions.

Using a service definition base


To minimize the amount of data you need to enter into the ISPF workload
application, you use a service definition base. When you set up your service
definition, you identify the workloads, the service classes, and their goals, based
on your performance objectives. Then you define classification rules. This
information makes up the service definition base. The base contains workloads,
service classes, resource groups, report classes, and classification rules.

All workloads, service classes, and classification rules defined in a service


definition base apply to every policy that you define. You should use classification
rules for every service class defined in your service definition. If you do not have
any other business requirements to modify a service goal or a resource group from
the service definition base, you can run an installation with one policy.

Using MVS workload manager


To use the MVS workload manager facility, you should:
1. Implement workload management on the MVS images that the CICS workload
is to run on, as outlined in “Implementing MVS workload management”.
2. Ensure that CICS performance parameters correspond to the policies defined
for MVS workload management, as outlined in “Matching CICS performance
parameters to service policies” on page 132.
3. Activate MVS workload manager, as outlined in “Activating CICS support for
MVS workload manager” on page 133.

Implementing MVS workload management


The task of implementing MVS workload management is part of the overall task of
planning for, and installing, MVS 5.1.

Implementing MVS workload management generally involves the following steps:


1. Establish your workloads.
2. Set your business priorities.
3. Understand your performance objectives.
4. Define critical work.
5. Define performance objectives based on current:
v Business needs

v Performance:
– Reporting and monitoring products
– Capacity planning tools
– IEAICS and IEAIPS parameters.
6. Get agreement for your workload performance objectives.
7. Specify a service level agreement or performance objectives.
8. Specify an MVS WLM service definition using the information from step 7.

Note: It is helpful at this stage to record your service definition in a form that
will help you to enter it into the MVS workload manager ISPF
application. You are recommended to use the worksheets provided in
the MVS publication Planning: Workload Management.
9. Install MVS.
10. Set up a sysplex with a single MVS image, and run in workload manager
compatibility mode.
11. Upgrade your existing XCF couple data set.
12. Start the MVS workload manager ISPF application, and use it in the following
steps.
13. Allocate and format a new couple data set for workload management. (You
can do this from the ISPF application.)
14. Define your service definition.
15. Install your service definition on the couple data set for workload
management.
16. Activate a service policy.
17. Switch the MVS image into goal mode.
18. Start up a new MVS image in the sysplex. (That is, attach the new MVS image
to the couple data set for workload management, and link it to the service
policy.)
19. Switch the new MVS image into goal mode.
20. Repeat steps 18 and 19 for each new MVS image in the sysplex.
Notes:
1. CICS Transaction Server for OS/390 support for MVS workload manager is
initialized automatically during CICS startup.
2. All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload management are subject to the effects of workload manager.

Matching CICS performance parameters to service policies


You must ensure that the CICS performance parameters are compatible with the
workload manager service policies used for the CICS workload.

In general, you should define CICS performance objectives to the MVS workload
manager first, and observe the effect on CICS performance. Once the MVS
workload manager definitions are working correctly, you can then consider tuning
the CICS parameters to further enhance CICS performance. However, you should
use CICS performance parameters as little as possible.

Performance attributes that you might consider using are:

v Transaction priority, passed on dynamic transaction routing. (Use prioritization
carefully, if at all.) The priority assigned by the CICS dispatcher must be
compatible with the task priority defined to MVS workload manager.
v Maximum number of concurrent user tasks for the CICS region.
v Maximum number of concurrent tasks in each transaction class.

Activating CICS support for MVS workload manager


CICS Transaction Server for OS/390 Release 3 support for MVS workload manager
is initialized automatically during CICS startup.

Customer-written resource managers and other non-CICS code which is attached to


CICS via the RMI must be modified to provide workload manager support, if
workload manager is to work correctly for CICS-based tasks which cross the RMI
| into such areas.
|
| CICSPlex SM workload management
| CICSPlex SM workload management directs work requests to a target region that
| is selected using one of the following:
| The queue algorithm
| CICSPlex SM routes work requests initiated in the requesting region to the
| most suitable target region within the designated set of target regions.
| The goal algorithm
| CICSPlex SM routes work requests to the target region that is best able to
| meet the goals that have been predefined using MVS workload manager.

| The CICSPlex SM dynamic routing program EYU9XLOP is invoked to route work


| requests to the selected target region. EYU9XLOP supports both workload
| balancing and workload separation. You define to CICSPlex SM which requesting,
| routing, and target regions in the CICSplex or BTS-plex can participate in dynamic
| routing, and any affinities that govern the target regions to which particular work
| requests must be routed. The output from the Transaction Affinities Utility can be
| used directly by CICSPlex SM.

| For more information about CICSPlex SM, see the CICSPlex SM Concepts and
| Planning manual.

| Benefits of using CICSPlex SM workload management


| There are no special requirements to be able to use the dynamic transaction routing
| mechanism, but it offers the user the following:
| v A dynamic routing program that can make more intelligent routing decisions;
| for example, decisions based on workload goals.
| v Improved CICS support for MVS goal-oriented workload management.
| v Easier use of a global temporary storage owning region in the MVS sysplex
| environment, which avoids the intertransaction affinity that can occur with
| the use of local temporary storage queues.
| v Intelligent routing by CICSPlex SM in a CICSplex or a BTS-plex that has at
| least one requesting region linked to multiple target regions.

| Using CICSPlex SM workload management
| For information on setting up and using CICSPlex SM workload management, see
| the CICSPlex SM Concepts and Planning and the CICSPlex SM Managing Workloads
| manuals.



Chapter 9. Understanding RMF workload manager data
This chapter explains, with a number of examples, the CICS-related data in an
RMF workload activity report. It contains the following sections:
v “Explanation of terms used in RMF reports”
v “Interpreting the RMF workload activity data” on page 137

RMF provides data for subsystem work managers that support workload
management. In MVS these are IMS and CICS.

This chapter includes a discussion of some possible data that may be reported for
CICS and IMS, and provides some possible explanations for the data. Based on this
discussion and the explanations, you may decide to alter your service class
definitions. In some cases, there may be some actions that you can take, in which
case you can follow the suggestion. In other cases, the explanations are provided
only to help you better understand the data. For more information about using
RMF, see the RMF User’s Guide.

Explanation of terms used in RMF reports


It might help to relate some of the terms used in an RMF activity report to the
more familiar CICS terms. For example, some of the terms in the RMF report can be
equated with CEMT INQUIRE TASK terms.

These explanations are given for two main sections of the reports:
v The response time breakdown in percentage section
v The state section, covering switched time.

The response time breakdown in percentage section


The “Response time breakdown in percentage” section of the RMF report contains
the following headings:
ACTIVE
The percentage of response time accounted for by tasks currently executing
in the region—tasks shown as Running by the CEMT INQUIRE TASK
command.
READY
The percentage of response time accounted for by tasks that are not
currently executing but are ready to be dispatched—tasks shown as
Dispatchable by the CEMT INQUIRE TASK command.
IDLE The percentage of response time accounted for by a number of instances or
types of CICS tasks:
v Tasks waiting on a principal facility (for example, conversational tasks
waiting for a response from a terminal user)
v The terminal control (TC) task, CSTP, waiting for work
v The interregion controller task, CSNC, waiting for transaction routing
requests
v CICS system tasks, such as CSSY or CSNE waiting for work.

© Copyright IBM Corp. 1983, 1999 135


A CEMT INQUIRE TASK command would show any of these user tasks as
Suspended, as are the CICS system tasks.
WAITING FOR
The percentage of response time accounted for by tasks that are not
currently executing and are not ready to be dispatched—shown as
Suspended by the CEMT INQUIRE TASK command.

The WAITING FOR main heading is further broken down into a number of
subsidiary headings. Where applicable, for waits other than those described for the
IDLE condition described above, CICS interprets the cause of the wait, and records
the ‘waiting for’ reason in the WLM performance block.

The waiting-for terms used in the RMF report equate to the WLM_WAIT_TYPE
parameter on the SUSPEND, WAIT_OLDC, WAIT_OLDW, and WAIT_MVS calls
used by the dispatcher, and the SUSPEND and WAIT_MVS calls used in the CICS
XPI. These are shown as follows (with the CICS WLM_WAIT_TYPE term, where
different from RMF, in parenthesis):
Term Description
LOCK Waiting on a lock. For example, waiting for:
v A lock on CICS resource
v A record lock on a recoverable VSAM file
v Exclusive control of a record in a BDAM file
v An application resource that has been locked by an EXEC CICS ENQ
command.
I/O (IO)
Waiting for an I/O request or I/O related request to complete. For
example:
v File control, transient data, temporary storage, or journal I/O.
v Waiting on I/O buffers or VSAM strings.
CONV
Waiting on a conversation between work manager subsystems. This
information is further analyzed under the SWITCHED TIME heading.
DIST Not used by CICS.
LOCAL (SESS_LOCALMVS)
Waiting on the establishment of a session with another CICS region in the
same MVS image in the sysplex.
SYSPL (SESS_SYSPLEX)
Waiting on establishment of a session with another CICS region in a
different MVS image in the sysplex.
REMOT (SESS_NETWORK)
Waiting on the establishment of an ISC session with another CICS region
(which may, or may not, be in the same MVS image).
TIMER
Waiting for a timer event or an interval control event to complete. For
example, an application has issued an EXEC CICS DELAY or EXEC CICS
WAIT EVENT command which has yet to complete.
PROD (OTHER_PRODUCT)
Waiting on another product to complete its function; for example, when
the work request has been passed to a DB2 or DBCTL subsystem.

MISC Waiting on a resource that does not fall into any of the other categories.

The state section


The state section covers the time that transactions are “switched” to another CICS
region:
SWITCHED TIME
The percentage of response time accounted for by tasks in a TOR that are
waiting on a conversation across an intersystem communication link (MRO
or ISC). This information provides a further breakdown of the response
time shown under the CONV heading.

The SWITCHED TIME heading is further broken down into a number of


subsidiary headings, and covers those transactions that are waiting on a
conversation. These are explained as follows:
LOCAL
The work request has been switched, across an MRO link, to another CICS
region in same MVS image.
SYSPL
The work request has been switched, across an XCF/MRO link, to another
CICS region in another MVS image in the sysplex.
REMOT
The work request has been switched, across an ISC link, to another CICS
region (which may, or may not, be in the same MVS image).

For more information on the MVS workload manager states and resource names
used by CICS Transaction Server for OS/390 Release 3, see the CICS Problem
Determination Guide.

Interpreting the RMF workload activity data


Figure 23 on page 138 shows an example of the CICS state section of an RMF
Monitor I workload activity report. It is based on an example hotel reservations
service class.

The text following the figure explains how to interpret the fields.

RMF reporting intervals



[Figure 22 (not reproduced) illustrates the snapshot principle across two
adjacent RMF reporting intervals. In the first interval, the TOR starts
transaction TxnA and routes it to the AOR; the AOR runs and completes TxnA,
which is included in the EXE total for that interval. The TOR completes TxnA
in the following interval, so TxnA is included in the BTE total for that
later interval.]

Figure 22. Illustration of snapshot principle for RMF reporting intervals

REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSHR RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH

-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT


AVG 0.00 ACTUAL 000.00.00.114
MPL 0.00 QUEUED 000.00.00.036
ENDED 216 EXECUTION 000.00.00.078
END/SEC 0.24 STANDARD DEVIATION 000.00.00.270
#SWAPS 0
EXECUTD 216
--------------------------RESPONSE TIME BREAKDOWN IN PERCENTAGE------------------- ----STATE------
SUB P TOTAL ACTIVE READY IDLE -------------------------WAITING FOR--------------------- SWITCHED TIME (%)
TYPE LOCK I/O CONV DIST LOCAL SYSPL REMOT TIMER PROD MISC LOCAL SYSPL REMOT
CICS BTE 93.4 10.2 0.0 0.0 0.0 0.0 83.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 83.3 0.0 0.0
CICS EXE 67.0 13.2 7.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 46.7 0.0 0.0 0.0 0.0

Figure 23. Hotel Reservations service class

An RMF workload activity report contains “snapshot data” which is data collected
over a relatively short interval. The data for a given work request (CICS
transaction) in an MRO environment is generally collected for more than one CICS
region, which means there can be some apparent inconsistencies between the
execution (EXE) phase and the begin to end (BTE) data in the RMF reports. This is
caused by the end of a reporting interval occurring at a point when work has
completed in one region but not yet completed in an associated region. See
Figure 22.

For example, an AOR can finish processing transactions, whose completions are
included in the current reporting interval, while the TOR may not complete its
processing of the same transactions during the same interval.

The fields in this RMF report describe an example CICS hotel reservations service
class (CICSHR), explained as follows:
CICS This field indicates that the subsystem work manager is CICS.
BTE This field indicates that the data in the row relates to the begin-to-end work
phase.
CICS transactions are analyzed over two phases: a begin-to-end (BTE)
phase, and an execution (EXE) phase.
The begin-to-end phase usually takes place in the terminal owning region
(TOR), which is responsible for starting and ending the transaction.
EXE This field indicates that the data in the row relates to the execution work
phase. The execution phase can take place in an application owning region
(AOR) and a resource-owning region such as an FOR. In our example, the

216 transactions were routed by a TOR to another region for execution,
such as an AOR (and possibly an FOR).
ENDED
This field shows that 216 hotel reservation transactions completed.
EXECUTD
This field shows that the AORs completed 216 transactions in the reporting
interval.

Note: In our example the two phases show the same number of
transactions completed, indicating that during the reporting interval
all the transactions routed by the TORs (ENDED) were completed
by the AORs (EXECUTD) and also completed by the TORs. This will
not normally be the case because of the way data is captured in
RMF reporting intervals. See “RMF reporting intervals” on page 137.
ACTUAL
Shown under TRANSACTION TIME, this field shows the average response
time as 0.114 seconds, for the 216 transactions completed in the BTE phase.
EXECUTION
Shown under TRANSACTION TIME, this field shows that on average it
took 0.078 seconds for the AORs to execute the transactions.

While executing these transactions, CICS records the states the transactions are
experiencing. RMF reports the states in the RESPONSE TIME BREAKDOWN IN
PERCENTAGE section of the report, with one line for the begin-to-end phase, and
another for the execution phase.

| The response time analysis for the BTE phase is described as follows:
| For BTE
| Explanation
| TOTAL
| The CICS BTE total field shows that the TORs have information covering
| 93.4% of the ACTUAL response time, the analysis of which is shown in the
| remainder of the row. This value is the ratio of sampled response times to
| actual response times. The sampled response times are derived by
| calculating the elapsed times to be the number of active performance blocks
| (inflight transactions) multiplied by the sample interval time. The actual
| response times are those reported to RMF by CICS when each transaction
| ends. The proximity of the total value to 100% and a relatively small
| standard deviation value are measures of how accurately the sampled data
| represents the actual system behavior. “Possible explanations” on page 141
| shows how these reports can be distorted.
| ACTIVE
| On average, the work (transactions) was active in the TORs for only about
| 10.2% of the ACTUAL response time
| READY
| In this phase, the TORs did not detect that any part of the average
| response time was accounted for by work that was dispatchable but
| waiting behind other transactions.
| IDLE In this phase, the TORs did not detect that any part of the average
| response time was accounted for by transactions that were waiting for
| work.

| WAITING FOR
| Only one field shows a value in the WAITING FOR section—the CONV
| value (this is typical for a TOR). It indicates that for about 83.3% of the
| time, the transactions were waiting on a conversation. This is further
| explained by the SWITCHED TIME data.
| SWITCHED TIME
| From the SWITCHED TIME % data you can see the reason for the
| ‘waiting-on-a-conversation’. This is 83.3 % LOCAL, which indicates that
| the transactions were routed locally to an AOR on the same MVS image.

| Note: In the analysis of the BTE phase, the values do not exactly add up to the
TOTAL value because of rounding—in our example, 10.2 + 83.3 = 93.5,
against a total shown as 93.4.
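As a numerical sketch of the sampling arithmetic described for the TOTAL field: the sample counts and the 0.25-second sample interval below are hypothetical, chosen only to reproduce the percentages in the figure, and this is not RMF's exact algorithm.

```python
# Hypothetical illustration of how the TOTAL and per-state percentages
# are derived from performance-block samples (not RMF's exact algorithm).

def state_percentages(samples_per_state, sample_interval, actual_total_response):
    """samples_per_state: state name -> number of performance-block samples
    seen in that state during the interval.
    sample_interval: seconds between RMF samples (assumed value).
    actual_total_response: sum of the response times reported to RMF for
    the transactions that ended in the interval."""
    breakdown = {state: 100.0 * n * sample_interval / actual_total_response
                 for state, n in samples_per_state.items()}
    total = sum(breakdown.values())
    return total, breakdown

# 216 ended transactions averaging 0.114 seconds, as in the figure;
# the per-state sample counts are invented.
total, states = state_percentages({"ACTIVE": 10, "CONV": 82}, 0.25, 216 * 0.114)
# total is about 93.4; ACTIVE is about 10.2 and CONV about 83.3, so the
# rounded state values (10.2 + 83.3 = 93.5) slightly exceed the rounded
# TOTAL, which is the rounding effect the Note above describes.
```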

The response time analysis for the EXE phase is described as follows:
For EXE
Explanation
TOTAL
The CICS EXE total field shows that the AORs have information covering
67% of the ACTUAL response time.
ACTIVE
On average, the work is active in the AOR for only about 13.2% of the
average response time.
READY
On average the work is ready, but waiting behind other tasks in the region,
for about 7.1% of the average response time.
PROD On average, 46.7% of the average response time is spent outside the CICS
subsystem, waiting for another product to provide some service to these
transactions.
You can’t tell from this RMF report what the other product is, but the
probability is that the transactions are accessing data through a database
manager such as Database Control (DBCTL) or DB2.

Example: very large percentages in the response time breakdown


Figure 24 on page 141 shows an example of a work manager state section for the
CICSPROD service class. In the RESPONSE TIME BREAKDOWN IN
PERCENTAGE section of the report, both the CICS EXE and the CICS BTE rows
show excessively inflated percentages: 78.8K, 183, 1946 and so on.

REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH

-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT


AVG 0.00 ACTUAL 000.00.00.111
MPL 0.00 QUEUED 000.00.00.000
ENDED 1648 EXECUTION 000.00.00.123
END/SEC 1.83 STANDARD DEVIATION 000.00.00.351
#SWAPS 0
EXECUTD 1009

-------------------------------RESPONSE TIME BREAKDOWN IN PERCENTAGE---------------------- ---STATE--------


SUB P TOTAL ACTIVE READY IDLE ------------------------WAITING FOR-------------------------- SWITCHED TIME (%)
TYPE LOCK I/O CONV DIST LOCAL SYSPL REMOT TIMER PROD MISC LOCAL SYSPL REMOT
CICS BTE 78.8K 183 265 1946 0.0 0.0 235 0.0 0.0 0.0 0.0 0.0 0.0 76.2K 229 0.0 17.9
CICS EXE 140 91.8 3.1 0.0 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 45.4 0.0 19.6K 0.0 0.0

Figure 24. Response Time percentages greater than 100

Possible explanations
There are several possible explanations for the unusual values shown in this sample
report:
v Long-running transactions
v Never-ending transactions
v Conversational transactions
v Dissimilar work in service class

Long-running transactions
| The RMF report in Figure 24 on page 141 shows both very high response time
| percentages and a large standard deviation of reported transaction times.

| The report shows for the recorded 15 minute interval that 1648 transactions
| completed in the TOR. These transactions had an actual average response time of
| 0.111 seconds (note that this has a large standard deviation), giving a total of 182.9
| seconds running time (0.111 seconds multiplied by 1648 transactions). However, if
| there are a large number of long-running transactions also running, these will be
| counted in the sampled data but not included in the actual response time
| values. If the number of long-running transactions is large, the distortion of the
| Total value will also be very large.
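The distortion can be sketched with invented numbers: the sample interval, sample counts, and number of long-running tasks below are all hypothetical.

```python
# Hypothetical illustration of how long-running transactions inflate the
# sampled TOTAL: they contribute state samples in every interval but add
# nothing to the ended-transaction response times.
sample_interval = 0.25                 # seconds between samples (assumed)
actual_total = 1648 * 0.111            # about 182.9 seconds, as in the report

short_samples = 700                    # samples from the ended transactions (invented)
long_runners = 10                      # tasks active for the whole interval (invented)
interval_length = 900                  # a 15-minute RMF interval, in seconds
long_samples = long_runners * (interval_length / sample_interval)

without = 100 * short_samples * sample_interval / actual_total
with_long = 100 * (short_samples + long_samples) * sample_interval / actual_total
# without is roughly 96%; with_long is roughly 5000%, the kind of
# inflated TOTAL value seen in the report above.
```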

The long-running transactions could be either routed or non-routed transactions.


Routed transactions are transactions that are routed from a TOR to one or more
AORs. Long-running routed transactions could result in many samples of waiting
for a conversation (CONV) in the CICS begin-to-end phase, with the AOR’s state
shown in the execution phase.

Non-routed transactions execute completely in a TOR, and have no execution


(CICS EXE) phase data. Non-routed CICS transactions could inflate the ACTIVE or
READY data for the CICS BTE phase.

Never-ending transactions
Never-ending transactions differ from long-running transactions in that they persist
for the life of a region. For CICS, these could include the IBM reserved transactions
such as CSNC and CSSY, or customer defined transactions. Never-ending
transactions are reported in a similar way to long-running transactions, as
explained above. However, for never-ending CICS transactions, RMF might report
large percentages in IDLE, or under TIMER or MISC in the WAITING FOR section.

Conversational transactions
Conversational transactions are considered long-running transactions. CICS marks
the state of a conversational transaction as IDLE when the transaction is waiting
for terminal input. Terminal input often includes long end-user think time, so you
might see very large values in the IDLE state as a percent of response time for
completed transactions.

Dissimilar work in the service class


A service class that mixes:
v Customer and IBM transactions,
v Long-running and short-running transactions
v Routed and non-routed transactions
v Conversational and non-conversational transactions
can expect to have RMF reports showing that the total states sampled account for
more than the average response time. This can be expected if the service class is
the subsystem default service class. The default is defined in the classification rules
as the service class to be assigned to all work in a subsystem not otherwise
assigned a service class.

Possible actions
The following are some actions you could take for reports of this type:

Group similar work into the same service classes: Make sure your service classes
represent groups of similar work. This could require creating additional service
classes. For the sake of simplicity, you may have only a small number of service
classes for CICS work. If there are transactions for which you want the RMF
response time breakdown data, consider including them in their own service class.

Do nothing: For service classes representing dissimilar work such as the subsystem
default service class, recognize that the response time breakdown could include
long-running or never-ending transactions. Accept that RMF data for such service
classes does not make much sense.

Example: response time breakdown data is all zero


Figure 25 on page 143 shows an example of a work manager state section for the
CICSLONG service class. All data shows a 0.0 value.

REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSLONG RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
CICS Long Running Internal Trxs
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.000
MPL 0.00 QUEUED 000.00.00.000
ENDED 0 EXECUTION 000.00.00.000
END/SEC 0.00 STANDARD DEVIATION 000.00.00.000
#SWAPS 0
EXECUTD 0

-------------------------------RESPONSE TIME BREAKDOWN IN PERCENTAGE--------------- ---------STATE---


SUB P TOTAL ACTIVE READY IDLE ----------------------------WAITING FOR---------------- SWITCHED TIME (%)
TYPE LOCK I/O CONV DIST LOCAL SYSPL REMOT TIMER PROD MISC LOCAL SYSPL REMOT
CICS BTE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

Figure 25. Response time breakdown percentages all 0.0

Possible explanations
There are two possible explanations:
1. No transactions completed in the interval
2. RMF did not receive data from all systems in the sysplex.

No transactions completed in the interval


While a long-running or never-ending transaction is being processed, RMF saves
the service class state samples to SMF Type 72 records, (subtype 3). But when no
transactions have completed, (and average response time is 0), the calculations to
apportion these state samples over the response time result in 0%.
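A minimal sketch of this effect, for illustration only:

```python
# Why the breakdown is all zero when nothing ends in the interval: the
# state samples are apportioned over the response time of transactions
# that ended, and with zero completions there is nothing to apportion.
def breakdown_pct(state_samples, sample_interval, ended, avg_response):
    actual_total = ended * avg_response
    if actual_total == 0:
        return 0.0                     # no ended transactions -> 0%
    return 100.0 * state_samples * sample_interval / actual_total

print(breakdown_pct(5000, 0.25, 0, 0.0))      # 0.0, as in Figure 25
print(breakdown_pct(5000, 0.25, 216, 0.114))  # nonzero once work completes
```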

RMF did not receive data from all systems in the sysplex.
The RMF post processor may have been given SMF records from only a subset of
the systems running in the sysplex. For example, the report may represent only a
single MVS image. If that MVS image has no TOR, its AORs receive CICS
transactions routed from another MVS image or from outside the sysplex. Since the
response time for the transactions is reported by the TOR, there is no transaction
response time for the work, nor are there any ended transactions.

Possible actions
The following are some actions you could take for reports of this type:

Do nothing
You may have created this service class especially to prevent the state samples of
long running transactions from distorting data for your production work. In this
case there is no action to take.

Combine all SMF records for the sysplex


The state data is contained in the SMF records. If you combine the data from an
MVS image that doesn’t have a TOR with another MVS image that does, the state
data from the two MVS images is analyzed together by RMF. This ensures that the
response time distribution data is no longer reported as zeros.

Example: execution time greater than response time
Figure 26 shows an example of a work manager state section for the CICSPROD
service class. In the example, there are 1731 ENDED transactions yet the EXECUTD
field shows that only 1086 have been executed. The response time (ACTUAL field)
shows 0.091 seconds as the average of all 1731 transactions, while the AORs can
only describe the execution of the 1086 they participated in, giving an execution
time of 0.113.

REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
CICS Trans not classified singly
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.091
MPL 0.00 QUEUED 000.00.00.020
ENDED 1731 EXECUTION 000.00.00.113
END/SEC 1.92 STANDARD DEVIATION 000.00.00.092
#SWAPS 0
EXECUTD 1086

Figure 26. Execution time greater than response time

Possible explanation
The situation illustrated by this example could be explained by the service class
containing a mixture of routed and non-routed transactions. In this case, the AORs
have recorded states which account for more time than the average response time
of all the transactions. The response time breakdown shown by RMF for the
execution phase of processing can again show percentages exceeding 100% of the
response time.
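A mixture along these lines can be sketched with invented response times. Only the 1731/1086 transaction counts and the 0.091-second average come from the report; the split response times are assumptions.

```python
# Hypothetical mixture showing how the average EXECUTION time (taken over
# routed transactions only) can exceed the average ACTUAL response time
# (taken over all ended transactions).
routed = 1086                               # executed in AORs, as in the report
nonrouted = 1731 - 1086                     # 645 transactions ran entirely in the TOR
routed_resp, nonrouted_resp = 0.130, 0.025  # assumed response times (seconds)

actual = (routed * routed_resp + nonrouted * nonrouted_resp) / (routed + nonrouted)
# actual is about 0.091 seconds, even though the execution-phase average
# reported for the 1086 routed transactions alone (0.113 seconds) is larger.
```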

Possible actions
Define routed and non-routed transactions in different service classes.

Example: large SWITCH LOCAL Time in CICS execution phase


Figure 27 shows a work manager state data section for a CICSPROD service class.
The SWITCH LOCAL time in the response time breakdown section shows a value
of 6645.

REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.150
MPL 0.00 QUEUED 000.00.00.039
ENDED 3599 EXECUTION 000.00.00.134
END/SEC 4.00 STANDARD DEVIATION 000.00.00.446
#SWAPS 0
EXECUTD 2961

-------------------------------RESPONSE TIME BREAKDOWN IN PERCENTAGE----------------- ------STATE------


SUB P TOTAL ACTIVE READY IDLE ---------------------------WAITING FOR---------------------- SWITCHED TIME (%)
TYPE LOCK I/O CONV DIST LOCAL SYSPL REMOT TIMER PROD MISC LOCAL SYSPL REMOT
CICS BTE 26.8K 75.1 98.4 659 0.0 0.3 154 0.0 0.0 0.0 0.0 0.0 0.0 25.8K 149 0.0 7.8
CICS EXE 93.7 38.6 5.6 0.0 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 49.4 0.0 6645 0.0 0.0

Figure 27. High SWITCH time in a CICS execution environment

144 CICS TS for OS/390: CICS Performance Guide


Possible explanations
This situation can be explained by instances of distributed transaction
processing.

If, while executing a transaction, an AOR needs to function ship a request to
another region (for example, to a file-owning or queue-owning region), the
execution time reported in the RMF report for the AOR (the CICS EXE field)
includes the time spent in that other region.

However, if a program initiates distributed transaction processing to multiple
back-end regions, there can be many AORs associated with the original transaction.
Each of the multiple back-end regions can indicate they are switching control back
to the front-end region (SWITCH LOCAL). Thus, with a one-to-many mapping like this,
there are many samples of the execution phase indicating switched requests—long
enough to exceed 100% of the response time of other work completing in the
service class.

Possible actions
None.

Example: fewer ended transactions with increased response times


The RMF workload activity report shows increased response times, and a decrease
in the number of ended transactions.

Possible explanation
This situation could be caused by converting from ISC to MRO between the TOR
and the AOR.

When two CICS regions are connected via VTAM intersystem communication (ISC)
links, they behave differently, from a WLM viewpoint, from when they are
connected via the multiregion operation (MRO) option. One key difference is
that, with ISC, both the TOR and the AOR receive a request from VTAM, so each
believes it is starting and ending a given transaction. So, for a given user
request routed from the TOR via ISC to an AOR, there would be two completed
transactions.

Let us assume they have response times of 1 second and 0.75 seconds respectively,
giving an average of 0.875 seconds. When the TOR routes via MRO, the TOR
describes a single completed transaction taking 1 second (in a begin-to-end
phase), and the AOR reports its 0.75 seconds as execution time. Therefore,
converting from an ISC link to an MRO connection, for the same workload, could
result in half the number of ended transactions and a corresponding increase in
the response time reported by RMF.
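The bookkeeping in this example can be sketched as follows; the 1-second and 0.75-second legs are the assumed times from the paragraph above.

```python
# ISC: TOR and AOR each report a completed transaction for one user request.
tor_leg, aor_leg = 1.00, 0.75            # seconds, as in the example above
isc_ended = 2
isc_avg = (tor_leg + aor_leg) / isc_ended   # 0.875s average response time

# MRO: only the TOR reports a completed (begin-to-end) transaction;
# the AOR's 0.75s appears as execution time within it.
mro_ended = 1
mro_avg = tor_leg

print(isc_ended, f"{isc_avg:.3f}")   # 2 0.875
print(mro_ended, f"{mro_avg:.3f}")   # 1 1.000 - fewer ended, higher average
```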

Possible action
Increase CICS transaction goals prior to your conversion to an MRO connection.

Chapter 9. Understanding RMF workload manager data 145


Part 3. Analyzing the performance of a CICS system
This part gives an overview of performance analysis, identifies performance
constraints, and describes various techniques for performance analysis.
v “Chapter 10. Overview of performance analysis” on page 149
v “Chapter 11. Identifying CICS constraints” on page 155
v “Chapter 12. CICS performance analysis” on page 169
v “Chapter 13. Tuning the system” on page 177.

© Copyright IBM Corp. 1983, 1999 147


Chapter 10. Overview of performance analysis
This chapter discusses performance analysis in the following sections:
v “Establishing a measurement and evaluation plan” on page 150
v “Investigating the overall system” on page 152
v “Other ways to analyze performance” on page 153

There are four main uses for performance analysis:


1. You currently have no performance problems, but you simply want to adjust
the system to give better performance, and you are not sure where to start.
2. You want to characterize and calibrate individual stand-alone transactions as
part of the documentation of those transactions, and for comparison with some
future time when, perhaps, they start behaving differently.
3. A system is departing from previously identified objectives, and you want to
find out precisely where and why this is so. Although an online system may be
operating efficiently when it is installed, the characteristics of the system usage
may change and the system may not run so efficiently. This inefficiency can
usually be corrected by adjusting various controls. At least some small
adjustments usually have to be made to any new system as it goes live.
4. A system may or may not have performance objectives, but it appears to be
suffering severe performance problems.

If you are in one of the first two categories, you can skip this chapter and the next
and go straight to “Chapter 12. CICS performance analysis” on page 169.

If the current performance does not meet your needs, you should consider tuning
the system. The basic rules of tuning are:
1. Identify the major constraints in the system.
2. Understand what changes could reduce the constraints, possibly at the expense
of other resources. (Tuning is usually a trade-off of one resource for another.)
3. Decide which resources could be used more heavily.
4. Adjust the parameters to relieve the constrained resources.
5. Review the performance of the resulting system in the light of:
v Your existing performance objectives
v Progress so far
v Tuning effort so far.
6. Stop if performance is acceptable; otherwise do one of the following:
v Continue tuning
v Add suitable hardware capacity
v Lower your system performance objectives.

The tuning rules can be expressed in flowchart form as follows:



Figure 28. Flowchart to show rules for tuning performance

(The flowchart shows: understand the performance objectives; monitor the system
following a measurement and evaluation plan covering objectives, resource
contention, and predictions; if the performance objectives have been met,
continue monitoring the system as planned; if not, identify major resolvable
resource contention, devise a tuning strategy that will minimize usage of
resource or expand the capacity of the system, identify the variables, predict
the effects, make the change, and return to monitoring.)

Establishing a measurement and evaluation plan


For some installations, a measurement and evaluation plan might be suitable. A
measurement and evaluation plan is a structured way to measure, evaluate, and
monitor the system’s performance. By taking part in setting up this plan, the users,
user management, and your own management will know how the system’s
performance is to be measured. In addition, you will be able to incorporate some
of their ideas and tools, and they will be able to understand and concur with the
plan, support you and feel part of the process, and provide you with feedback.

The implementation steps for this plan are:


1. Devise the plan



2. Review the plan
3. Implement the plan
4. Revise and upgrade the plan as necessary.

Major activities in using the plan are:


v Collect information periodically to determine:
– Whether objectives have been met
– Transaction activity
– Resource utilization.
v Summarize and analyze the information. For this activity:
– Plot volumes and averages on a chart at a specified frequency
– Plot resource utilization on a chart at a specified frequency
– Log unusual conditions on a daily log
– Review the logs and charts weekly.
v Make or recommend changes if objectives have not been met.
v Relate past, current, and projected:
– Transaction activity
– Resource utilization.
to determine:
– If objectives continue to be met
– When resources are being used beyond an efficient capacity.
v Keep interested parties informed by means of informal reports, written reports,
and monthly meetings.

A typical measurement and evaluation plan might include the following items as
objectives, with statements of recording frequency and the measurement tool to be
used:
v Volume and response time for each department
v Network activity:
– Total transactions
– Tasks per second
– Total by transaction type
– Hourly transaction volume (total, and by transaction).
v Resource utilization examples:
– DSA utilization
– Processor utilization with CICS
– Paging rate for CICS and for the system
– Channel utilization
– Device utilization
– Data set utilization
– Line utilization.
v Unusual conditions:
– Network problems
– Application problems
– Operator problems
– Transaction count for entry to transaction classes



– SOS occurrences
– Storage violations
– Device problems (not associated with the communications network)
– System outage
– CICS outage time.
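The "summarize and analyze" activity can start very simply, for example by tabulating hourly volumes and totals by transaction type. A minimal sketch, assuming the monitoring data has already been reduced to (hour, transaction identifier) pairs; the record layout and transaction names are hypothetical.

```python
from collections import Counter

# Hypothetical reduced monitoring data: (hour-of-day, transaction id) pairs.
records = [(9, "ORDR"), (9, "ORDR"), (9, "INQY"),
           (10, "ORDR"), (10, "INQY"), (10, "INQY"), (10, "ORDR")]

hourly_volume = Counter(hour for hour, _ in records)  # hourly transaction volume
by_type = Counter(tid for _, tid in records)          # total by transaction type
# Tasks per second over the observed hours (here, two one-hour intervals).
tasks_per_sec = len(records) / (len(hourly_volume) * 3600.0)

print(dict(hourly_volume))   # {9: 3, 10: 4}
print(dict(by_type))         # {'ORDR': 4, 'INQY': 3}
```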

Investigating the overall system


Always start by looking at the overall system before you decide that you have a
specific CICS problem. The behavior of the system as a whole is usually just as
important. You should check such things as total processor usage, DASD activity,
and paging.

Performance degradation is often due to application growth that has not been
matched by corresponding increases in hardware resources. If this is the case, solve
the hardware resource problem first. You may still need to follow on with a plan
for multiple regions.

Information from at least three levels is required:


1. CICS: Examine the CICS interval or end-of-day statistics for exceptions, queues,
and other symptoms which suggest overloads on specific resources. A shorter
reporting period can isolate a problem. Consider software as well as hardware
resources: for example, utilization of VSAM strings or database threads as well
as files and TP lines. Check run time messages sent to the console and to
transient data destinations, such as CSMT and CSTL, for persistent application
problems and network errors.
Use tools such as CEMT and RMF, to monitor the online system and identify
activity which correlates to periods of bad performance. Collect CICS
monitoring facility history and analyze it, using tools like TIVOLI Performance
Reporter to identify performance and resource usage exceptions and trends. For
example, processor-intensive transactions which do little or no I/O should be
noted. After they get control, they can monopolize the processor. This can cause
erratic response in other transactions with more normally balanced activity
profiles. They may be candidates for isolation in another CICS region.
2. MVS: Use SMF data to discover any relationships between periods of bad CICS
performance and other concurrent activity in the MVS system. Use RMF data to
identify overloaded devices and paths. Monitor CICS region paging rates to
make sure that there is sufficient real storage to support the configuration.
3. Network: The proportion of response time spent in the system is usually small
compared with transmission delays and queuing in the network. Use tools such
as NetView, NPM, and VTAMPARS to identify problems and overloads in the
network. Without automatic tools like these, you are dependent on the
application users’ subjective opinions that performance has deteriorated. This
makes it more difficult to know how much worse performance has become and
to identify the underlying reasons.

Within CICS, the performance problem is either a poor response time or an


unexpected and unexplained high use of resources. In general, you need to look at
the system in some detail to see why tasks are progressing slowly through the
system, or why a given resource is being used heavily. The best way of looking at
detailed CICS behavior is by using CICS auxiliary trace. But note that switching on
auxiliary trace, though the best approach, may actually worsen existing poor
performance while it is in use (see page 332).



The approach is to get a picture of task activity first, listing only the task traces,
and then to focus on particular activities: specific tasks, or a very specific time
interval. For example, for a response time problem, you might want to look at the
detailed traces of one task that is observed to be slow. There may be a number of
possible reasons.

The tasks may simply be trying to do too much work for the system: you are
asking it to do too many things, which clearly takes time, and the users are
simply trying to put too much through a system that cannot do all the work
that they want done.

Another possibility is that the system is real-storage constrained, and therefore the
tasks progress more slowly than expected because of paging interrupts. These
would show as delays between successive requests recorded in the CICS trace.

Yet another possibility is that many of the CICS tasks are waiting because there is
contention for a particular function. There is a wait on strings on a particular data
set, for example, or there is an application enqueue such that all the tasks issue an
enqueue for a particular item, and most of them have to wait while one task
actually does the work. Auxiliary trace enables you to distinguish most of these
cases.

Other ways to analyze performance


Potentially, any performance measurement tool, including statistics and the CICS
monitoring facility, may tell you something about your system that helps in
diagnosing problems. You should regard each performance tool as usable in some
degree for each purpose: monitoring, single-transaction measurement, and problem
determination.

Again, CICS statistics may reveal heavy use of some resource. For example, you
may find a very large allocation of temporary storage in main storage, a very high
number of storage control requests per task (perhaps 50 or 100), or high program
use counts that may imply heavy use of program control LINK.

Both statistics and CICS monitoring may show exceptional conditions arising in the
CICS run. Statistics can show waits on strings, waits for VSAM shared resources,
waits for storage in GETMAIN requests, and so on. These also generate CICS
monitoring facility exception class records.

While these conditions are also evident in CICS auxiliary trace, they may not
appear so obviously, and the other information sources are useful in directing the
investigation of the trace data.

In addition, you may gain useful data from the investigation of CICS outages. If
there is a series of outages, common links between the outages should be
investigated.

The next chapter tells you how to identify the various forms of CICS constraints,
and Chapter 12 gives you more information on performance analysis techniques.



Chapter 11. Identifying CICS constraints
If current performance has been determined to be unacceptable, you need to
identify the performance constraints (that is, the causes of the symptoms) so that
they can be tuned. This chapter discusses these constraints in the following
sections:
v “Major CICS constraints”
v “Response times” on page 156
v “Storage stress” on page 157
v “Effect of program loading on CICS” on page 159
v “What is paging?” on page 159
v “Recovery from storage violation” on page 161
v “Dealing with limit conditions” on page 161
v “Identifying performance constraints” on page 162
v “Resource contention” on page 164
v “Solutions for poor response time” on page 165
v “Symptoms and solutions for resource contention problems” on page 166

Major CICS constraints


Major constraints on a CICS system show themselves in the form of external
symptoms: stress conditions and paging being the chief forms. This chapter
describes these symptoms in some detail so that you can recognize them when
your system has a performance problem, and know the ways in which CICS itself
attempts to resolve various conditions.

The fundamental thing that has to be understood is that practically every symptom
of poor performance arises in a system that is congested. For example, if there is a
slowdown in DASD, transactions doing data set activity pile up: there are waits on
strings; there are more transactions in the system, there is therefore a greater
virtual storage demand; there is a greater real storage demand; there is paging;
and, because there are more transactions in the system, the task dispatcher uses
more processor power scanning the task chains. You then get task constraints, your
MXT or transaction class limit is exceeded and adds to the processor overhead
because of retries, and so on.

The result is that the system shows heavy use of all its resources, and this is the
typical system stress. It does not mean that there is a problem with all of them; it
means that there is a constraint that has yet to be found. To find the constraint,
you have to find what is really affecting task life.



Response times
The basic criterion of performance in a production system is response time, but
what is good response time? In straightforward data-entry systems, good response
time implies subsecond response time. In normal production systems, good
response time is measured in the five to ten second range. In scientific,
compute-bound systems or in print systems, good response time can be one or two
minutes.

Good performance, then, depends on a variety of factors including user


requirements, available capacity, system reliability, and application design. Good
performance for one system can be poor performance for another.

When checking whether the performance of a CICS system is in line with the
system’s expected or required capability, you should base this investigation on the
hardware, software, and applications that are present in the installation.

If, for example, an application requires 100 accesses to a database, a response time
of three to six seconds may be considered to be quite good. If an application
requires only one access, however, a response time of three to six seconds for disk
accesses would need to be investigated. Response times, however, depend on the
speed of the processor, and on the nature of the application being run on the
production system.

You should also observe how consistent the response times are. Sharp variations
indicate erratic system behavior.

The response time in the system typically varies with increasing transaction
rate: it rises gradually at first, then deteriorates rapidly and suddenly. The
typical curve shows a sharp change at which, suddenly, the response time
increases dramatically for a relatively small increase in the transaction rate.

Figure 29. Graph to show the effect of response time against increasing load

(The graph plots response time against increasing load or decreasing resource
availability. Region A is good response time, region B is acceptable response
time, and region C is unacceptable (poor) response time; the curve rises
sharply between B and C.)

For stable performance, it is necessary to keep the system operating below this
point where the response time dramatically increases. In these circumstances, the



user community is less likely to be seriously affected by the tuning activities being
undertaken by the DP department, and these changes can be done in an unhurried
and controlled manner.

Response time can be considered as being made up of queue time and service
time. Service time is generally independent of usage, but queue time is not. For
example, 50% usage implies a queue time approximately equal to service time, and
80% usage implies a queue time approximately four times the service time. If
service time for a particular system is only a small component of the system
response, for example, in the processor, 80% usage may be acceptable. If it is a
greater portion of the system response time, for example, in a communication line,
50% usage may be considered high.
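The 50% and 80% figures quoted above are consistent with the single-server queuing approximation queue time = service time * u / (1 - u), where u is the usage. This is a simplifying model rather than anything CICS-specific.

```python
def queue_time(service_time: float, utilization: float) -> float:
    """Single-server (M/M/1) approximation: s * u / (1 - u)."""
    return service_time * utilization / (1.0 - utilization)

s = 0.100  # 100ms service time
# At 50% usage, queue time approximately equals service time.
print(queue_time(s, 0.5))
# At 80% usage, queue time is approximately four times the service time.
print(queue_time(s, 0.8))
```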

If you are trying to find the response time from a terminal to a terminal, you
should be aware that the most common “response time” obtainable from any aid
or tool that runs in the host is the “internal response time.” Trace can identify only
when the software in the host, that is, CICS and its attendant software, first “sees”
the message on the inbound side, and when it last “sees” the message on the
outbound side.

Internal response time gives no indication of how long a message took to get from
the terminal, through its control unit, across a line of whatever speed, through the
communication controller (whatever it is), through the communication access
method (whatever it is), and any delays before the channel program that initiated
the read is finally posted to CICS. Nor does it account for the time it might take
for CICS to start processing this input message. There may have been lots of work
for CICS to do before terminal control regained control and before terminal control
even found this posted event.

The same is true on the outbound side. CICS auxiliary trace knows when the
application issued its request, but that has little to do with when terminal control
found the request, when the access method ships it out, when the controllers can
get to the device, and so on.

While the outward symptom of poor performance is overall bad response, there
are progressive sets of early warning conditions which, if correctly interpreted, can
ease the problem of locating the constraint and removing it.

In the advice given so far, we have assumed that CICS is the only major program
running in your system. If batch programs or other online programs are running
simultaneously with CICS, you must ensure that CICS receives its fair share of the
system resources and that interference from other regions does not seriously
degrade CICS performance.

Storage stress
Stress is the term used in CICS for a shortage of free space in one of the dynamic
storage areas.

Storage stress can be a symptom of other resource constraints that cause CICS
tasks to occupy storage for longer than is normally necessary, or of a flood of tasks
which simply overwhelms available free storage, or of badly designed applications
that require unreasonably large amounts of storage.



Controlling storage stress
Before CICS/ESA® Version 3, all non-resident, not-in-use programs were removed
when a GETMAIN request could not be satisfied. Since CICS/ESA Version 3,
storage stress has been handled as follows.

Nonresident, not-in-use programs may be deleted progressively with decreasing


free storage availability as CICS determines appropriate, on a least-recently-used
basis. The dispatching of new tasks is also progressively slowed as free storage
approaches a critically small amount. This self-tuned activity tends to spread the
cost of managing storage. There may be more program loading overall, but the
heavy overhead of a full program compression is not incurred at the critical time.

The loading or reloading of programs is handled by CICS with an MVS subtask.


This allows other user tasks to proceed if a processor of the MVS image is
available and even if a page-in is required as part of the program load.

User runtime control of storage usage is achieved through appropriate use of MXT
and transaction class limits. This is necessary to avoid the short-on-storage
condition that can result from unconstrained demand for storage.

Short-on-storage condition
CICS reserves a minimum number of free storage pages for use only when there is
not enough free storage to satisfy an unconditional GETMAIN request even when
all not-in-use, nonresident programs have been deleted.

Whenever a request for storage results in the number of contiguous free pages in
one of the dynamic storage areas falling below its respective cushion size, or
failing to be satisfied even with the storage cushion, a cushion stress condition
exists. Details are given in the storage manager statistics (“Times request
suspended”, “Times cushion released”). CICS attempts to alleviate the storage
stress situation by releasing programs with no current user and slowing the
attachment of new tasks. If these actions fail to alleviate the situation or if the
stress condition is caused by a task that is suspended for SOS, a short-on-storage
condition is signaled. This is accompanied by message DFHSM0131 or
DFHSM0133.
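The escalation just described can be paraphrased as a decision sketch. This is an illustrative restatement of the text, not CICS's actual algorithm; the state names are ours.

```python
def storage_condition(contiguous_free_pages: int, cushion_size: int,
                      needed_cushion: bool, relief_actions_failed: bool) -> str:
    """Illustrative paraphrase of the storage-stress escalation in the text."""
    # Cushion stress: free pages fall below the cushion, or a request could
    # only be satisfied (or not even then) by dipping into the cushion.
    stressed = contiguous_free_pages < cushion_size or needed_cushion
    if not stressed:
        return "normal"
    # CICS first tries to relieve stress by releasing programs with no
    # current user and slowing the attachment of new tasks.
    if relief_actions_failed:
        return "short-on-storage"   # signaled with DFHSM0131 or DFHSM0133
    return "cushion stress"

print(storage_condition(100, 32, False, False))  # normal
print(storage_condition(10, 32, False, True))    # short-on-storage
```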

Removing unwanted data set name blocks


One of the CICS dynamic storage areas, the ECDSA, is also used for data set name
blocks, one of which is created for every data set opened by CICS file control.
These DSN blocks are recovered at a warm or emergency restart. If you have an
application that creates a large number of temporary data sets, all with a unique
name, the number of DSN blocks can increase to such an extent that they can
cause a short-on-storage condition.

If you have application programs that use temporary data sets, with a different
name for every data set created, it is important that your programs remove these
after use. See the CICS System Programming Reference for information about how
you can use the SET DSNAME command to remove unwanted temporary data sets
| from your CICS regions.



| LE run time options for AMODE(24) programs
| The default LE run time options for CICS are (among other things) ALL31(ON)
| and STACK(ANY). This means that all programs must run above the line
| (AMODE(31)) in an LE environment. To allow AMODE(24) programs to run in an
| LE environment, ALL31(OFF) and STACK(BELOW) can be specified. However, if
| you globally change these options so that all programs can use them, a lot of
| storage will be put below the line, which can cause a short-on-storage condition.

Purging of tasks
If a CICS task is suspended for longer than its DTIMOUT value, it may be purged
if SPURGE=YES is specified on the RDO transaction definition. That is, the task is
abended and its resources freed, thus allowing other tasks to use those resources.
In this way, CICS attempts to resolve what is effectively a deadlock on storage.

CICS hang
If purging tasks is not possible or not sufficient to solve the problem, CICS ceases
processing. You must then either cancel and restart the CICS system, or initiate or
allow an XRF takeover.

Effect of program loading on CICS


CICS employs MVS load under an MVS subtask to load programs. This provides
the benefits, relative to versions of CICS prior to CICS Transaction for OS/390
Release 1, of fast loading from DASD and allows the use of the library lookaside
function of MVS to eliminate most DASD I/Os by keeping copies of programs in
an MVS controlled dataspace exploiting expanded storage.

A page-in operation causes the MVS task which requires it to stop until the page
has been retrieved. If the page is to be retrieved from DASD, this has a significant
effect. When the page can be retrieved from expanded storage, the impact is only a
relatively small increase in processor usage.

The loading of a program into CICS storage can be a major cause of page-ins.
Because this is carried out under a subtask separate from CICS main activity, such
page-ins do not halt most other CICS activities.

What is paging?
The virtual storage of a processor may far exceed the size of the central storage
available in the configuration. Any excess must be maintained in auxiliary storage
(DASD), or in expanded storage. This virtual storage occurs in blocks of addresses
called “pages”. Only the most recently referenced pages of virtual storage are
assigned to occupy blocks of physical central storage. When reference is made to a
page of virtual storage that does not appear in central storage, the page is brought
in from DASD or expanded storage to replace a page in central storage that is not
in use and least recently used.

The newly referenced page is said to have been “paged in”. The displaced page
may need to be “paged out” if it has been changed.



Paging problems
It is the page-in rate that is of primary concern, because page-in activity occurs
synchronously (that is, an MVS task stops until the page fault is resolved).
Page-out activity is overlapped with CICS processing, so it does not appreciably
affect CICS throughput.

A page-in from expanded storage incurs only a small processor usage cost, but a
page-in from DASD incurs a time cost for the physical I/O and a more significant
increase in processor usage.

Thus, extra DASD page-in activity slows down the rate at which transactions flow
through the CICS system, that is, transactions take longer to get through CICS, you
get more overlap of transactions in CICS, and so you need more virtual and real
storage.

If you suspect that a performance problem is related to excessive paging, you can
use RMF to obtain the paging rates.

Consider controlling CICS throughput by using MXT and transaction class limits in
CICS on the basis that a smaller number of concurrent transactions requires less
real storage, causes less paging, and may be processed faster than a larger number
of transactions.

When a CICS system is running with transaction isolation active, storage is


allocated to user transactions in multiples of 1MB. This means that the virtual
storage requirement for a CICS system with transaction isolation enabled is very
large. This does not directly affect paging, which affects only those 4KB pages
that have been touched. More real storage is required in ELSQA, however, and for
more information on transaction isolation and real storage see “Transaction
isolation and real storage requirements” on page 301.

What is an ideal CICS paging rate from DASD? Less than one page-in per second
is best to maximize the throughput capacity of the CICS region. Anything less than
five page-ins per second is probably acceptable; up to ten may be tolerable. Ten
per second is marginal, more is probably a major problem. Because CICS
performance can be affected by the waits associated with paging, you should not
allow paging to exceed more than five to ten pages per second.

Note: The degree of sensitivity of CICS systems to paging from DASD depends on
the transaction rate, the processor loading, and the average internal lifetime
of the CICS tasks. An ongoing, hour-on-hour rate of even five page-faults
per second may be excessive for some systems, particularly when you
realize that peak paging rates over periods of ten seconds or so could easily
be four times that figure.
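For monitoring scripts, the rules of thumb above can be collected into a simple classifier. The category names are illustrative, and, as the note above says, the thresholds are system-dependent.

```python
def classify_dasd_pagein_rate(pages_per_second: float) -> str:
    """Classify a CICS region's DASD page-in rate using the rules of
    thumb from the text (category names are illustrative only)."""
    if pages_per_second < 1:
        return "best"
    if pages_per_second < 5:
        return "probably acceptable"
    if pages_per_second <= 10:
        return "tolerable to marginal"
    return "probably a major problem"

print(classify_dasd_pagein_rate(0.5))   # best
print(classify_dasd_pagein_rate(12))    # probably a major problem
```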

What paging rates are excessive on various processors and are these rates
operating-system dependent? Excessive paging rates should be defined as those
which cause excessive delays to applications. The contribution caused by the
high-priority paging supervisor executing instructions and causing applications to
wait for the processor is probably a minor consideration as far as overall delays to
applications are concerned. Waiting on a DASD device is the dominant part of the
overall delays. This means that the penalty of “high” paging rates has almost
nothing to do with the processor type.



CICS systems are usually able to deliver much better response times with
somewhat better processor utilization when the potential of large amounts of
central and expanded storage is exploited by keeping more data and programs in
memory.

Recovery from storage violation


CICS can detect storage violations when:
v The duplicate storage accounting area (SAA) or the initial SAA of a TIOA
storage element has become corrupted.
v The leading storage check zone or the trailing storage check zone of a user task
storage has become corrupted.

A storage violation can occur in two basic situations:


1. When CICS detects an error during its normal processing of a FREEMAIN
request for an individual element of a TIOA storage, and finds that the two
storage check zones of the duplicate SAA and the initial SAA are not identical.
2. CICS also detects user violations involving user task storage by checking the
storage check zones of an element of user task storage following a FREEMAIN
command.

When a storage violation is detected, an exception trace entry is made in the


internal trace table. A message (DFHSM0102) is issued and a CICS system dump
follows if the dump option is switched on.

Storage violations can be reduced considerably if CICS has storage protection
and transaction isolation enabled.

See the CICS Problem Determination Guide for further information about diagnosing
and dealing with storage violations.

Dealing with limit conditions


The main limit conditions or constraints that can occur in a CICS system include
those listed at the beginning of this chapter. Stress conditions generally tell you
that certain limiting conditions have been reached. If these conditions occur,
additional processing is required, and the transactions involved have to wait until
resources are released.

To summarize, limit conditions can be indicated by the following:


v Virtual storage conditions (“short-on-storage”: SOS). This item in the CICS
storage manager statistics shows a deficiency in the allocation of virtual storage
space to the CICS region.
In most circumstances, allocation of more virtual storage does not in itself cause
a degradation of performance. You should determine the reason for the
condition in case it is caused by some form of error. This could include failure of
applications to free storage (including temporary storage), unwanted multiple
copies of programs or maps, storage violations, and high activity of nonresident
exception routines caused by program or hardware errors.
All new applications should be written to run above the 16MB line. The
dynamic storage areas above the 16MB line can be expanded up to the 2GB limit
of 31-bit addressing. The dynamic storage areas below the 16MB line are limited
to less than the region size, which is less than 16MB.
v Number of simultaneous tasks (MXT and transaction class limit) reached (shown
in the transaction manager statistics).
v Maximum number of VTAM receive-any RPLs in use (shown in the VTAM
statistics).
v ‘Wait-on-string’ and associated conditions for VSAM data sets (shown in the file
control statistics).

Check how frequently the limit conditions occur. In general:


v If no limit conditions occur, this implies that too many resources have been
allocated. This is quite acceptable if the resource is inexpensive, but not if the
resource is both overallocated and of more use elsewhere.
v Infrequent occurrence of a limit condition is an indication of good usage of the
particular resource. This usually implies a healthy system.
v Frequent occurrence (greater than 5% of transactions) usually reveals a problem,
either directly or indirectly, that needs action to prevent more obvious signs of
poor performance. If the frequency is greater than about 10%, you may have to
take some action quickly because the actions taken by CICS itself (dynamic
program storage compression, release of storage cushion, and so on) can have a
perceptible effect on performance.
Your own actions should include:
– Checking for errors
– Raising the limit, provided that it does not have a degrading effect on other
areas
– Allocating more resources to remove contention
– Checking recovery usage for contention.
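The frequency guidelines above (under 5% of transactions, 5–10%, and over 10%) can be turned into a trivial check. This sketch is for illustration only; the function name is invented, and the counts would come from your CICS statistics:

```python
def limit_condition_severity(limit_hits, transactions):
    """Classify limit-condition frequency using the guideline above:
    under 5% of transactions is healthy, 5-10% usually needs action,
    over 10% needs action quickly."""
    rate = limit_hits / transactions
    if rate < 0.05:
        return "healthy"
    if rate <= 0.10:
        return "needs action"
    return "needs action quickly"

print(limit_condition_severity(30, 1000))    # healthy
print(limit_condition_severity(80, 1000))    # needs action
print(limit_condition_severity(150, 1000))   # needs action quickly
```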

Identifying performance constraints


When you are dealing with limit conditions, you may find it helpful to check the
various points where performance constraints can exist in a system. These points
are summarized below under hardware and software constraints.

Hardware constraints
1. Processor cycles. It is not uncommon for transactions to execute more than one
million instructions. To execute these instructions, they must contend with
other tasks and jobs in the system. At different times, these tasks must wait for
such activities as file I/O. Transactions give up their use of the processor at
these points and must contend for use of the processor again when the activity
has completed. Dispatching priorities affect which transactions or jobs get use
of the processor, and batch or other online systems may affect response time
through receiving preferential access to the processor. Batch programs accessing
online databases also tie up those databases for longer periods of time if their
dispatching priority is low. At higher usages, the wait time for access to the
processor can be significant.
2. Real storage (working set). Just as transactions must contend for the processor,
they also must be given a certain amount of real storage. A real storage
shortage can be particularly significant in CICS performance because a normal
page fault to acquire real storage results in synchronous I/O. The basic design
of CICS is asynchronous, which means that CICS processes requests from
multiple tasks concurrently to make maximum use of the processor. Most
paging I/O is synchronous and causes the MVS task that CICS is using to wait,
and that part of CICS cannot do any further processing until the page
operation completes. Most, but not all, of CICS processing uses a single MVS
task (called ‘QUASI’ in the dispatcher statistics).
3. Database-associated hardware (I/O) contention. When data is being accessed to
provide information that is required in a transaction, an I/O operation passes
through the processor, the processor channel, a disk control unit, the head of
string on a string of disks, and the actual disk device where the data resides. If
any of these devices are overused, the time taken to access the data can
increase significantly. This overuse can be the result of activity on one data set,
or on a combination of active data sets. Error rates also affect the usage and
performance of the device. In shared DASD environments, contention between
processors also affects performance. This, in turn, increases the time that the
transaction ties up real and virtual storage and other resources.
The use of large amounts of central and expanded storage by using very large
data buffers, and by keeping programs in storage, can significantly reduce DB
I/O contention and somewhat reduce processor utilization while delivering
significant internal response time benefits.
4. Network-associated hardware contention. The input and output messages of a
transaction must pass from the terminal to a control unit, a communications
link, a network controller, a processor channel, and finally the processor. Just as
overuse of devices to access data can affect response time, so excessive use of
network resources can cause performance degradation. Error rates affect
performance as well. In some cases, the delivery of the output message is a
prerequisite to freeing the processor resources that are accessed, and contention
can cause these resources to be tied up for longer periods.

Software constraints
1. Database design. A data set or database needs to be designed to the needs of the
application it is supporting. Such factors as the pattern of access to the data set
(especially whether it is random or sequential), access methods chosen, and the
frequency of access determine the best database design. Such data set
characteristics as physical record size, blocking factors, the use of alternate or
secondary indexes, the hierarchical or relational structure of database segments,
database organization (HDAM, HIDAM, and so on), and pointer arrangements
are all factors in database performance.
The length of time between data set reorganizations can also affect
performance. The efficiency of accesses decreases as the data set becomes more
and more fragmented. This fragmentation can be kept to the minimum by
reducing the length of time between data set reorganizations.
2. Network design. This item can often be a major factor in response time because
the network links are much slower than most components of an online system.
Processor operations are measured in nanoseconds, line transmission times in seconds.
Screen design can also have a significant effect on overall response time. A
1200-byte message takes one second to be transmitted on a relatively
high-speed 9600 bits-per-second link. If 600 bytes of the message are not
needed, half a second of response time is wasted. Besides screen design and
size, such factors as how many terminals are on a line, the protocols used
(SNA, bisynchronous), and full-or half-duplex capabilities can affect
performance.
3. Use of specific software interfaces or serial functions. The operating system, terminal
access method, database manager, data set access method, and CICS must all
communicate in the processing of a transaction. Only a given level of
concurrent processing can occur at these points, and this can also cause a
performance constraint. Examples of this include the VTAM receive-any pool
(RAPOOL), VSAM data set access (strings), CICS temporary storage, CICS
transient data, and CICS intercommunication sessions. Each of these can have a
single or multiserver queueing effect on a transaction’s response time, and can
tie up other resources by slowing task throughput.
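The line-speed arithmetic in the network design item above (a 1200-byte message on a 9600 bits-per-second link) is worth making concrete. This sketch assumes 8 bits per byte and ignores protocol overhead, which would lengthen real times slightly:

```python
def transmission_time(message_bytes, line_bps):
    """Seconds to transmit a message, assuming 8 bits per byte
    and no protocol overhead."""
    return (message_bytes * 8) / line_bps

full = transmission_time(1200, 9600)     # 1.0 second for the full screen
trimmed = transmission_time(600, 9600)   # 0.5 second after trimming 600 bytes
print(f"saved {full - trimmed:.1f}s per message")
```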

One useful technique for isolating a performance constraint in a CICS system with
VTAM is to use the IBMTEST command issued from a user’s terminal. This
terminal must not be in session with CICS, but must be connected to VTAM.

You enter at a VTAM terminal:


IBMTEST (n)(,data)

where n is the number of times you want the data echoed, and data may consist of
any character string. If you enter no data, the alphabet and the numbers zero
through nine are returned to the terminal. This command is responded to by
VTAM.

IBMTEST is an echo test designed to give the user a rough idea of the VTAM
component of terminal response time. If the response time is fast in a
slow-response system, the constraint is not likely to be any component from VTAM
onward. If this response is slow, VTAM or the network may be the reason. This
sort of deductive process in general can be useful in isolating constraints.

To avoid going into session with CICS, you may have to remove APPLID= from
the LU statement or CONNECT=AUTO from the TERMINAL definition.

Resource contention
The major resources used or managed by CICS consist of the following:
v Processor
v Real storage
v Virtual storage
v Software (specification limits)
v Channels
v Control units
v Lines
v Devices
v Sessions to connected CICS systems.

Contention at lower levels prevents full use of higher-level resources. To avoid or
reduce resource contention, you can:
v Minimize or eliminate the use of a resource by:
– Reordering, relocating, or reducing its size
– Redesigning, rewriting, rescheduling, or reducing its processing time
– Educating users, eliminating a function, or controlling its usage.
v Give the resource more capacity
v Exchange one resource with another:
– Processor with virtual storage
– Real storage with paging I/O
– Paging I/O with program library I/O
– Priorities of various end-users with each other
– CICS response times with batch throughput
– Batch throughput with more DP operators.

Two sets of symptoms and solutions are provided in this chapter. The first set
provides suggested solutions for poor response, and the second set provides
suggested solutions for a variety of resource contention problems.

Solutions for poor response time


Table 10 shows four levels of response time, in decreasing order of severity. The
major causes are shown for each level together with a range of suggested
solutions. Your first step is to check the causes by following the advice given in
“Chapter 12. CICS performance analysis” on page 169. When you have identified
the precise causes, the relevant checklist in “Chapter 14. Performance checklists” on
page 181 tells you what solutions are available and where to find information in
Part 4 of this book on how to implement the solutions.
Table 10. CICS response time checklist

Level 1: Poor response at all loads for all transactions
   High level of paging
      Reduce working set, or allocate more real storage
   Very high usage of major resources
      Reconsider system resource requirements and redesign system;
      check for application errors and resource contention

Level 2: Poor response at medium and high loads
   High level of paging
      Reduce working set, or allocate more real storage
   High processor usage
      Reduce pathlength, or increase processor power
   High DB or data set usage
      Reorganize data sets, or reduce data transfer, or increase capacity
   High communication network usage
      Reduce data transfer, or increase capacity
   TP or I/O access-method constraint
      Increase buffer availability
   CICS limit values exceeded
      Change operands, or provide more resources, or check for errors
      in the application

Level 3: Poor response for certain transactions only
   Identify common characteristics
      As for level 2
   Lines or terminal usage
      Increase capacity, or reduce data transfer, or change transaction logic
   Data set usage
      Change data set placement, buffer allocations, enqueue logic, or
      data set design
   High storage usage
      Redesign or tune applications
   Same subprograms used by transactions
      Redesign or tune application subprograms
   Same access method or CICS features used by transactions
      Reallocate resource or change application; reevaluate use of the
      feature in question
   Limit conditions
      Reallocate resource or change application

Level 4: Poor response for certain terminals
   Check network loading as appropriate
      Increase capacity of that part of the network
   Check operator techniques
      Revise terminal procedures
   Check CEDA terminal definitions
      Redefine the CEDA terminal definitions

Symptoms and solutions for resource contention problems


This section presents a general range of solutions for each type of constraint. You
should:
1. Confirm that your diagnosis of the type of constraint is correct, by means of
detailed performance analysis. “Chapter 12. CICS performance analysis” on
page 169 describes various techniques.
2. Read “Chapter 13. Tuning the system” on page 177 for general advice on
performance tuning.
3. See the relevant sections in Part 4 of this book for detailed information on
applying the various solutions.
4. Improve virtual storage exploitation. This requires:
v Large data buffers above the 16MB line or in Hiperspace
v Programs that run above the 16MB line
v Large amounts of central and expanded storage to support the virtual
storage exploitation.
Such a system can deliver better internal response times, while minimizing
DASD I/O constraint and reducing processor utilization.

DASD constraint
Symptoms
v Slow response times (the length of the response time depends on the number of
I/O operations, with a longer response time when batch mode is active)
v High DSA utilization
v High paging rates
v MXT limit frequently reached
v SOS condition often occurs.

Solutions
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.
See “DASD tuning” on page 199 for suggested solutions.

Communications network constraint


Symptoms
v Slow response times
v Good response when few terminals are active on a line, but poor response when
many terminals are active on that line
v Big difference between internal response time and terminal response time.

Solutions
v Reduce the line utilization.
v Reduce delays in data transmission.
v Alter the network.

Remote systems constraints


Symptoms
v SOS or MXT conditions occur when there is a problem with a connected region.
v CICS takes time to recover when the problem is fixed.

Solutions
v Control the amount of queuing which takes place for the use of the connections
to the remote systems.
v Improve the response time of the remote system.

Virtual storage constraint


Symptoms
v Slow response times
v Multiple loads of the same program
v Increased I/O operations against program libraries
v High paging rates
v SOS condition often occurs.

Solutions
v Tune the MVS system to obtain more virtual storage for CICS (increase the
region size).
v Expand or make more efficient use of the dynamic storage area.

See the “Virtual storage above and below 16MB line checklist” on page 182 for a
detailed list of suggested solutions.

Real storage constraint


Symptoms
v High paging rates
v Slow response times
v MXT limit frequently reached
v SOS condition often occurs.

Solutions
v Reduce the demands on real storage
v Tune the MVS system to obtain more real storage for CICS
v Obtain more central and expanded storage.

See the “Real storage checklist” on page 183 for a detailed list of suggested
solutions.

Processor cycles constraint


Symptoms
v Slow response times
v Low-priority transactions respond very slowly
v Low-priority work gets done very slowly.

Solutions
v Increase the dispatching priority of CICS.
v Reevaluate the relative priorities of operating system jobs.
v Reduce the number of MVS regions (batch).
v Reduce the processor utilization for productive work.
v Use only the CICS facilities that you really require.
v Turn off any trace that is not being used.
v Minimize the data being traced by reducing the:
– Scope of the trace
– Frequency of running trace.
v Obtain a faster processor.

See the “Processor cycles checklist” on page 184 for a detailed list of suggested
solutions.

Chapter 12. CICS performance analysis
This chapter describes aspects of CICS performance analysis in the following:
v “Assessing the performance of a DB/DC system”
v “Methods of performance analysis” on page 170
v “Full-load measurement” on page 171
v “Single-transaction measurement” on page 174

Performance analysis, as compared with monitoring, is the use of certain
performance tools described in Part 2 to:
v Investigate a deviation from performance objectives that is resulting in
performance deterioration, and identify performance problems
v Identify where a system can be adjusted to give a required level of performance
v Characterize and calibrate individual stand-alone transactions as part of the
documentation of those transactions, and for comparison with some future time
when, perhaps, they start behaving differently.

Assessing the performance of a DB/DC system


You may find the following performance measurements helpful in determining the
performance of a system:
1. Processor usage: This item reflects how active the processor is. Although the
central processor is of primary concern, 37X5 communications controllers and
terminal control units (these can include an intelligent cluster controller such as
the 3601 and also the 3270 cluster control units) can also increase response time
if they are heavily used.
2. I/O rates: These rates measure the amount of access to a disk device or data set
over a given period of time. Again, acceptable rates vary depending on the
speed of the hardware and response time requirements.
3. Terminal message or data set record block sizes: These factors, when combined with
I/O rates, provide information on the current load on the network or DASD
subsystem.
4. Indications of internal virtual storage limits: These vary by software component,
including storage or buffer expansion counts, system messages, and program
abends because of system stalls. In CICS, program fetches on nonresident
programs and system short-on-storage or stress messages reflect this condition.
5. Paging rates: CICS can be sensitive to a real storage shortage, and paging rates
reflect this shortage. Acceptable paging to DASD rates vary with the speed of
the DASD and response time criteria. Paging rates to expanded storage are only
as important as its effect on processor usage.
6. Error rates: Errors can occur at any point in an online system. If the errors are
recoverable, they can go unnoticed, but they put an additional load on the
resource on which they are occurring.

You should investigate both system conditions and application conditions.

© Copyright IBM Corp. 1983, 1999 169


System conditions
A knowledge of these conditions enables you to evaluate the performance of the
system as a whole:
v System transaction rate (average and peak)
v Internal response time and terminal response time, preferably compared with
transaction rate
v Working set, at average and peak transaction rates
v Average number of disk accesses per unit time (total, per channel, and per
device)
v Processor usage, compared with transaction rate
v Number of page faults per second, compared with transaction rate and real
storage
v Communication line usage (net and actual)
v Average number of active CICS tasks
v Number and duration of outages.

Application conditions
These conditions, measured both for individual transaction types and for the total
system, give you an estimate of the behavior of individual application programs.

You should gather data for each main transaction and average values for the total
system. This data includes:
v Program calls per transaction
v CICS storage GETMAINs and FREEMAINs (number and amount)
v Application program and transaction usage
v File control (data set, type of request)
v Terminal control (terminal, number of inputs and outputs)
v Transaction routing (source, target)
v Function shipping (source, target)
v Other CICS requests.

Methods of performance analysis


You can use two methods for performance analysis:
1. Measuring a system under full production load (full-load measurement), to get
all information that is measurable only under high system-loading.
2. Measuring single-application transactions (single-transaction measurement),
during which the system should not carry out any other activities. This gives
an insight into the behavior of single transactions under optimum system
conditions.

Because a system can have a variety of problems, we cannot recommend which
option you should use to investigate the behavior of a system. When in doubt
about the extent of a problem, you should always use both methods.

Rapid performance degradation often occurs after a threshold is exceeded and the
system approaches its ultimate load. You can see various indications only when the
system is fully loaded (for example, paging, short-on-storage condition in CICS,
and so on), and you should usually plan for a full-load measurement.

Bear in mind that the performance constraints might possibly vary at different
times of the day. You might want to run a particular option that puts a particular
pressure on the system only at a certain time in the afternoon.

If a full-load measurement reveals no serious problems, or if a system is not
reaching its expected performance capability under normal operating conditions,
you can then use single-transaction measurement to reveal how individual system
transactions behave and to identify the areas for possible improvement.

Often, because you have no reliable information at the beginning of an
investigation into the probable causes of performance problems, you have to
examine and analyze the whole system.

Before carrying out this analysis, you must have a clear picture of the functions
and the interactions of the following components:
v Operating system supervisor with the appropriate access methods
v CICS management modules and control tables
v VSAM data sets
v DL/I databases
v DB2
v External security managers
v Performance monitors
v CICS application programs
v Influence of other regions
v Hardware peripherals (disks and tapes).

In addition, you should collect the following information:


v Does performance fluctuate or is it uniformly bad?
v Are performance problems related to a specific hour, day, week, or month?
v Has anything in the system been changed recently?
v Have all such changes been fully documented?

Full-load measurement
A full-load measurement highlights latent problems in the system. It is important
that full-load measurement lives up to its name, that is, you should make the
measurement when, from production experience, the peak load is reached. Many
installations have a peak load for about one hour in the morning and again in the
afternoon. CICS statistics and various performance tools can provide valuable
information for full-load measurement. In addition to the overall results of these
tools, it may be useful to have the CICS auxiliary trace or RMF active for about
one minute.

CICS auxiliary trace


CICS auxiliary trace can be used to find situations that occur under full load. For
example, all ENQUEUEs that cannot immediately be honored in application
programs result in a suspension of the issuing task. If this happens frequently,
attempts to control the system by using the CEMT master transaction are not
effective.

Trace is a very heavy overhead. Use trace selectivity options to minimize this
overhead.

RMF
It is advisable to do the RMF measurement without any batch activity. (See
“Resource measurement facility (RMF)” on page 27 for a detailed description of
this tool. Guidance on how to use RMF with the CICS monitoring facility is given
in “Using CICS monitoring SYSEVENT information with RMF” on page 67.)

For full-load measurement, the system activity report and the DASD activity report
are important.

The most important values for full-load measurement are:


v Processor usage
v Channel and disk usage
v Disk unit usage
v Overlapping of processor with channel and disk activity
v Paging
v Count of start I/O operations and average start I/O time
v Response times
v Transaction rates.

You should expect stagnant throughput and sharply climbing response times as the
processor load approaches 100%.
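A simple single-server queueing approximation (an M/M/1 model, not taken from this book) illustrates why response times climb sharply as utilization nears 100%:

```python
def mm1_response_time(service_time, utilization):
    """Average response time in an M/M/1 queue: the service time
    inflated by a factor of 1/(1 - utilization)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

# A 0.1-second service time degrades quickly as the processor fills up:
for busy in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{busy:.0%} busy -> {mm1_response_time(0.1, busy):.2f}s")
```

At 50% busy the response time is only double the service time; at 99% busy it is a hundred times the service time, which matches the behavior described above.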

It is difficult to forecast the system paging rate that can be achieved without
serious detriment to performance, because too many factors interact. You should
observe the reported paging rates; note that short-duration severe paging leads to a
rapid increase in response times.

In addition to taking note of the count of start I/O operations and their average
length, you should also find out whether the system is waiting on one device only.
With disks, for example, it can happen that several frequently accessed data sets
are on one disk and the accesses interfere with each other. In each case, you should
investigate whether a system wait on a particular unit could be reduced by
reorganizing the data sets.

The RMF DASD activity report includes the following information:


v A summary of all disk information
v Per disk, a breakdown by system number and region
v Per disk, the distribution of the seek arm movements
v Per disk, the distribution of accesses with and without arm movement.

Use the IOQ(DASD) option in RMF Monitor I to show DASD control unit contention.

After checking the relationship of accesses with and without arm movement, for
example, you may want to move to separate disks those data sets that are
periodically very frequently accessed.

Comparison charts
You might wish to consider using a comparison chart to measure key aspects of
your system’s performance before and after tuning changes have been made. A
suggested chart is as follows:
Table 11. Comparison chart

                                          Run:
  DL/I transactions        Number
                           Response
  VSAM transactions        Number
                           Response
  Response times           DL/I
                           VSAM
  Most heavily used        Number
  transaction              Response
  Average-use              Number
  transaction              Response
  Paging rate              System
                           CICS
  DSA virtual storage      Maximum
                           Average
  Tasks                    Peak
                           At MXT
  Most heavily used        Response
  DASD                     Utilization
  Average-use DASD         Response
                           Utilization
  CPU utilization

The use of this type of comparison chart requires the use of TPNS, RMF, and CICS
interval statistics running together for about 20 minutes, at a peak time for your
system. It also requires you to identify the following:
v A representative selection of terminal-oriented DL/I transactions accessing DL/I
databases
v A representative selection of terminal-oriented transactions processing VSAM
files
v The most heavily used transaction
v Two average-use nonterminal-oriented transactions writing data to intrapartition
transient data destinations
v The most heavily used volume in your system
v A representative average-use volume in your system.

To complete the comparison chart for each CICS run before and after a tuning
change, you can obtain the figures from the following sources:

v DL/I transactions: you should first identify a selection of terminal-oriented DL/I
transactions accessing DL/I databases.
v VSAM transactions: similarly, you should first identify a selection of
terminal-oriented transactions processing VSAM files.
v Response times: external response times are available from the TPNS terminal
response time analysis report; internal response times are available from RMF.
The “DL/I” subheading is the average response time calculated at the 99th
percentile for the terminal-oriented DL/I transactions you have previously
selected. The “VSAM” subheading is the average response time calculated at the
99th percentile for the terminal-oriented VSAM transactions you have previously
selected.
v Paging rate (system): this is from the RMF paging activity report, and is the figure
shown for total system non-VIO non-swap page-ins added to the figure shown
for the total system non-VIO non-swap page-outs. This is the total paging rate
per second for the entire system.
v Tasks: this is from the transaction manager statistics (part of the CICS interval,
end-of-day, and requested statistics). The “Peak” subheading is the figure shown
for “Peak Number of Tasks” in the statistics. The “At MXT” subheading is the
figure shown for “Number of Times at Max. Task” in the statistics.
v Most heavily used DASD: this is from the RMF direct access device activity report,
and relates to the most heavily used volume in your system. The “Response”
subheading is the figure shown in the “Avg. Resp. Time” column for the volume
you have selected. The “Utilization” subheading is the figure shown in the “%
Dev. Util.” column for that volume.
v Average-use DASD: this is also from the RMF direct access device activity report,
and relates to a representative average-use volume in your system. The
“Response” subheading is the figure shown in the “Avg. Resp. Time” column for
the volume you have selected. The “Utilization” subheading is the figure shown
in the “% Dev. Util.” column for that volume.
v Processor utilization: this is from the RMF processor activity report.
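The 99th-percentile response times called for above can be computed from raw samples with a nearest-rank calculation, sketched here; the sample values are invented:

```python
def percentile(values, pct):
    """Nearest-rank percentile of a non-empty list of samples."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Response times in milliseconds for ten transactions:
response_ms = [120, 125, 128, 130, 131, 132, 135, 138, 140, 900]
print(percentile(response_ms, 99))   # the one slow response dominates
print(percentile(response_ms, 50))   # the median is unaffected by it
```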

This chart is most useful when comparing before-and-after changes in performance
while you are tuning your CICS system.

Single-transaction measurement
You can use full-load measurement to evaluate the average loading of the system
per transaction. However, this type of measurement cannot provide you with
information on the behavior of a single transaction and its possible excessive
loading of the system. If, for example, nine different transaction types issue five
start I/Os (SIOs) each, but the tenth issues 55 SIOs, this results in an average of
ten SIOs per transaction type. This should not cause concern if they are executed
simultaneously. However, an increase of the transaction rate of the tenth
transaction type could possibly lead to poor performance overall.
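The averaging effect in the example above is easy to reproduce; the transaction names here are invented:

```python
sios = {f"TRN{i}": 5 for i in range(1, 10)}   # nine types, five SIOs each
sios["TRN10"] = 55                            # the tenth issues 55

mean = sum(sios.values()) / len(sios)
print(f"mean SIOs per transaction type: {mean:.0f}")

# The mean hides the outlier; single-transaction measurement finds it:
outliers = {name: n for name, n in sios.items() if n > 2 * mean}
print(outliers)
```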

Sometimes, response times are quite good with existing terminals, but adding a
few more terminals leads to unacceptable degradation of performance. In this case,
the performance problem may be present with the existing terminals, and has
simply been highlighted by the additional load.

To investigate this type of problem, do a full-load measurement as well as a
single-transaction measurement. To be of any use, the single-transaction
measurement must be done when no batch region is running, and there must be
no activity in CICS apart from the test screen. Even the polling of remote terminals
should be halted.

You should measure each existing transaction that is used in a production system
or in a final test system. Test each transaction two or three times with different
data values, to exclude an especially unfavorable combination of data. Document
the sequence of transactions and the values entered for each test as a prerequisite
for subsequent analysis or interpretation.

Between the tests of each single transaction, there should be a pause of several
seconds, to make the trace easier to read. A copy of the production database or
data set should be used for the test, because a test data set containing 100 records
can very often result in completely different behavior when compared with a
production data set containing 100 000 records.

The condition of data sets has often been the main reason for performance
degradation, especially when many segments or records have been added to a
database or data set. Do not do the measurements directly after a reorganization,
because the database or data set is only in this condition for a short time. On the
other hand, if the measurement reveals an unusually large number of disk
accesses, you should reorganize the data and do a further measurement to evaluate
the effect of the data reorganization.

You may feel that single-transaction measurement under these conditions with only
one terminal is not an efficient tool for revealing a performance degradation that
might occur when, perhaps 40 or 50 terminals are in use. Practical experience has
shown, however, that this is usually the only means for revealing and rectifying,
with justifiable expense, performance degradation under full load. The main reason
for this is that it is sometimes a single transaction that throws the system behavior
out of balance. Single-transaction measurement can be used to detect this.

Ideally, single-transaction measurement should be carried out during the final test
phase of the transactions. This gives the following advantages:
v Any errors in the behavior of transactions may be revealed before production
starts, and these can be put right during validation, without loading the
production system unnecessarily.
v The application is documented during the measurement phase. This helps to
identify the effects of later changes.

CICS auxiliary trace


Auxiliary trace is a standard feature of CICS, and gives an overview of transaction
flows so that you can quickly and effectively analyze them.

From this trace, you can find out whether a specified application is running as it is
expected to run. In many cases, it may be necessary for the application
programmer responsible to be called in for the analysis, to explain what the
transaction should actually be doing.

If you have a very large number of transactions to analyze, you can select, in a
first pass, the transactions whose behavior does not comply with what is expected.

Chapter 12. CICS performance analysis 175


If all transactions last much longer than expected, this almost always indicates a
system-wide error in application programming or in system implementation. The
analysis of a few transactions is then sufficient to determine the error.

If, on the other hand, only a few transactions remain in this category, these
transactions should be analyzed next, because it is highly probable that most
performance problems to date arise from these.



Chapter 13. Tuning the system
When you have identified specific constraints, you will have identified the system
resources that need to be tuned. The three major steps in tuning a system are:
1. Determine acceptable tuning trade-offs
2. Make the change to the system
3. Review the results of tuning.

Determining acceptable tuning trade-offs


The art of tuning can be summarized as finding and removing constraints. In most
systems, the performance is limited by a single constraint. However, removing that
constraint, while improving performance, inevitably reveals a different constraint,
and you might often have to remove a series of constraints. Because tuning
generally involves decreasing the load on one resource at the expense of increasing
the load on a different resource, relieving one constraint always creates another.

A system is always constrained. You do not simply remove a constraint; you can
only choose the most satisfactory constraint. Consider which resources can accept
an additional load in the system without themselves becoming worse constraints.

Tuning usually involves a variety of actions that can be taken, each with its own
trade-off. For example, if you have determined virtual storage to be a constraint,
your tuning options may include reducing buffer allocations for data sets, or
reducing terminal scan delay (ICVTSD) to shorten the task life in the processor.

The first option increases data set I/O activity, and the second option increases
processor usage. If one or more of these resources are also constrained, tuning
could actually cause a performance degradation by causing the other resource to
be a greater constraint than the present constraint on virtual storage.

Making the change to the system


The next step in the tuning process is to make the actual system modifications that
are intended to improve performance. You should consider several points when
adjusting the system:
v Tuning is the technique of making small changes to the system’s resource
allocation and availability to achieve relatively large improvements in response
time.
v Tuning is not always effective. If the system response is too long and all the
system resources are lightly used, you see very little change in the CICS
response times. (This is also true if the wrong resources are tuned.) In addition,
if the constraint resource, for example, line capacity, is being fully used, the only
solution is to provide more capacity or redesign the application (to transmit less
data, in the case of line capacity).
v Do not tune just for the sake of tuning. Tune to relieve identified constraints. If
you tune resources that are not the primary cause of performance problems, this
has little or no effect on response time until you have relieved the major
constraints, and it may actually make subsequent tuning work more difficult. If

© Copyright IBM Corp. 1983, 1999 177


there is any significant improvement potential, it lies in improving the
performance of the resources that are major factors in the response time.
v In general, tune major constraints first, particularly those that have a significant
effect on response time. Arrange the tuning actions so that items having the
greatest effect are done first. In many cases, one tuning change can solve the
performance problem if it addresses the cause of the degradation. Other actions
may then be unnecessary. Further, improving performance in a major way can
alleviate many user complaints and allow you to work in a more thorough way.
The 80/20 rule applies here; a small number of system changes normally
improves response time by most of the amount by which it can be improved,
assuming that those changes address the main causes of performance problems.
v Make one tuning change at a time. If two changes are made at the same time,
their effects may work in opposite directions and it may be difficult to tell which
of them had a significant effect.
v Change allocations or definitions gradually. For example, when reducing the
number of resident programs in a system, do not change all programs in a
system from RES=YES to RES=NO at once. This could cause an unexpected
lengthening of response times by increasing storage usage because of
fragmentation, and increasing processor usage because of higher program
loading activity. If you change a few programs at a time, starting with the
lesser-used programs, this can give you a better idea of the overall results.
The same rule holds true for buffer and string settings and other data set
operands, transaction and program operands, and all resources where the
operand can be specified individually for each resource. For the same reason, do
not make large increases or decreases in the values assigned to task limits such
as MXT.
v Continue to monitor constraints during the tuning process. Because each
adjustment changes the constraints in a system, these constraints vary over time.
If the constraint changes, tuning must be done on the new constraint because
the old one is no longer the restricting influence on performance. In addition,
constraints may vary at different times during the day.
v Put fallback procedures in place before starting the tuning process. As noted
earlier, some tuning can cause unexpected performance results. If this leads to
poorer performance, it should be reversed and something else tried. If previous
definitions or path designs were not saved, they have to be redefined to put the
system back the way it was, and the system continues to perform at a poorer
level until these restorations are made. If the former setup is saved in such a
way that it can be recalled, back out of the incorrect change becomes much
simpler.

Reviewing the results of tuning


After each adjustment has been done, review the performance measurements that
have been identified as the performance problem to verify that the desired
performance changes have occurred and to quantify that change. If performance
has improved to the point that service level agreements are being met, no more
tuning is required. If performance is better, but not yet acceptable, investigation is
required to determine the next action to be taken, and to verify that the resource
that was tuned is still a constraint. If it is not still a constraint, new constraints
need to be identified and tuned. This is a return to the first step of the tuning
process, and you should repeat the next steps in that process until an acceptable
performance level is reached.
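The tune-measure-review cycle described above amounts to a simple loop. As an illustrative sketch only (the five callback names are invented placeholders, not CICS facilities):

```python
def tune(system, meets_service_levels, find_constraint, relieve, measure):
    """Sketch of the tune-measure-review cycle described in the text.

    All five arguments are caller-supplied placeholders; only the
    control flow of the process is modeled here.
    """
    history = []
    while not meets_service_levels(system):
        constraint = find_constraint(system)   # identify the current constraint
        relieve(system, constraint)            # make one change at a time
        history.append((constraint, measure(system)))  # review the result
    return history
```

Each iteration relieves a single constraint and re-measures, because the previous constraint may no longer be the restricting influence on performance.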



Part 4. Improving the performance of a CICS system

Important

Always tune DASD, the network, and the overall MVS system before tuning
any individual CICS subsystem through CICS parameters.

Also review your application code before any further tuning.

“Chapter 14. Performance checklists” on page 181 itemizes the actions you can take
to tune the performance of an operational CICS system.

The other chapters in this part contain the relevant performance tuning guidelines
for the following aspects of CICS:
v “Chapter 15. MVS and DASD” on page 187
v “Chapter 16. Networking and VTAM” on page 201
v “Chapter 18. VSAM and file control” on page 225
v “Chapter 21. Database management” on page 263
v “Chapter 22. Logging and journaling” on page 271
v “Chapter 23. Virtual and real storage” on page 283
v “Chapter 24. MRO and ISC” on page 305
v “Chapter 25. Programming considerations” on page 315
v “Chapter 26. CICS facilities” on page 321
v “Chapter 27. Improving CICS startup and normal shutdown time” on page 339.



Chapter 14. Performance checklists
The following checklists provide a quick reference to options that you can adjust to
relieve different constraints. They assume that you have identified the exact cause
of an existing constraint; they should not be used for random tuning exercises.

There are four checklists, corresponding to four of the main contention areas
described in “Chapter 11. Identifying CICS constraints” on page 155.
1. I/O contention (this applies to data set and database subsystems, as well as to
the data communications network)
2. Virtual storage above and below the 16MB line
3. Real storage
4. Processor cycles.

The checklists are in the sequence of low-level to high-level resources, and the
items are ordered from those that probably have the greatest effect on performance
to those that have a lesser effect, from the highest likelihood of being a factor in a
normal system to the lowest, and from the easiest to the most difficult to
implement.

Before taking action on a particular item, you should review the item to:
v Determine whether the item is applicable in your particular environment
v Understand the nature of the change
v Identify the trade-offs involved in the change.

Input/output contention checklist

Note:
Ideally, I/O contention should be reduced by using very large data buffers
and keeping programs in storage. This would require adequate central and
expanded storage, and programs that can be loaded above the 16MB line.

Item Page
VSAM considerations
Review use of LLA 197
Implement Hiperspace buffers 240
Review/increase data set buffer allocations within 235
LSR
Use data tables when appropriate 244

Database considerations
Replace DL/I function shipping with IMS/ESA 263
DBCTL facility
Reduce/replace shared database access to online 263
data sets
Review DB2 threads and buffers 266

Journaling



Increase activity keypoint frequency (AKPFREQ) 279
value

Terminals, VTAM and SNA


Implement terminal output compression exit 215
Increase concurrent VTAM inputs 204
Increase concurrent VTAM logon/logoffs 210
Minimize SNA terminal data flows 208
Reduce SNA chaining 209

Miscellaneous
Reduce DFHRPL library contention 299
Review temporary storage strings 321
Review transient data strings 326

Virtual storage above and below 16MB line checklist

Note:
The lower the number of concurrent transactions in the system, the lower the
usage of virtual storage. Therefore, improving transaction internal response
time decreases virtual storage usage. Keeping programs in storage above the
16MB line, and minimizing physical I/Os makes the largest contribution to
well-designed transaction internal response time improvement.

Item Page
CICS region
Increase CICS region size 192
Reorganize program layout within region 299
Split the CICS region 284

DSA sizes
Specify optimal size of the dynamic storage areas 625
upper limits (DSALIM, EDSALIM)
Adjust maximum tasks (MXT) 287
Control certain tasks by transaction class 288
Put application programs above 16MB line 300

Database considerations
Increase use of DBCTL and reduce use of shared 263
database facility
Replace DL/I function shipping with IMS DBCTL 263
facility
Review use of DB2 threads and buffers 266

Applications
Compile COBOL programs RES, NODYNAM 316
Use PL/I shared library facility 317
Implement VS COBOL II 317

Journaling



Increase activity keypoint frequency (AKPFREQ) 279
value

Terminals, VTAM and SNA


Reduce VTAM input message size 203
Reduce concurrent VTAM inputs 204
Reduce terminal scan delay 211
Discourage use of MSGINTEG and PROTECT 208
Reduce concurrent VTAM logon/logoffs 210
Reduce AIQMAX setting for autoinstall 216

MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MVS cross-memory services with shared 305
database programs

Miscellaneous
Reduce use of aligned maps 298
Prioritize transactions 291
Use only required CICS recovery facilities 334
Recycle job initiators with each CICS startup 193

Real storage checklist

Note:
Adequate central and expanded storage is vital to achieving good
performance with CICS.

Item Page
MVS considerations
Dedicate, or fence, real storage to CICS 190
Make CICS nonswappable 190
Move CICS code to the LPA/ELPA 297

VSAM considerations
Review the use of Hiperspace buffers 240
Use VSAM LSR where possible 240
Review the number of VSAM buffers 235
Review the number of VSAM strings 237

Task control considerations


Adjust maximum tasks (MXT) 287
Control certain tasks by transaction class 288

MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MVS cross-memory services with shared
database programs
Use CICS intercommunication facilities 305

Database considerations



Replace DL/I function shipping with IMS DBCTL 263
facility
Review use of DB2 buffers and threads 266

Temporary storage and transient data


Reduce temporary storage strings or buffers 321
Reduce transient data strings or buffers 326

Journaling

Increase activity keypoint frequency (AKPFREQ) 279
value

Terminal, VTAM and SNA


Reduce terminal scan delay 211
Reduce concurrent VTAM inputs 204
Reduce VTAM input message size 203
Prioritize transactions 291
Reduce concurrent VTAM logon/logoffs 210

Applications
Use PL/I shared library facilities 317
Compile COBOL programs RES, NODYNAM 316

Miscellaneous
Decrease region exit interval 194
Reduce trace table size 332
Use only required CICS recovery facilities 334

Processor cycles checklist

Note:
Minimizing physical I/Os by employing large data buffers and keeping
programs in storage reduces processor use, if adequate central and expanded
storage is available.

Item Page
General
Reduce or turn off CICS trace 332
Increase CICS dispatching level or performance 192
group

Terminal, VTAM and SNA


Implement VTAM high performance option 207
processing
Increase terminal scan delay 211
Minimize SNA terminal data flows 208
Reduce SNA chaining 209

Task control considerations



Adjust maximum tasks (MXT) 287
Control certain tasks by transaction class 288
Define CICS maps with device suffixes 315

MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MRO fastpath facilities 305
Implement MVS cross-memory services with shared 263
database programs
Use CICS intercommunication facilities 305

Database considerations

Journaling
Increase activity keypoint frequency (AKPFREQ) 279
value

Temporary storage and transient data


Increase temporary storage queue pointer allocations 321
Increase use of main temporary storage 321
Review the use of CICS transient data facilities 326

Miscellaneous
Use only required CICS monitoring facilities 331
Review use of required CICS recovery facilities 334
Review use of required CICS security facilities 334
Increase region exit interval 194
Review use of program storage 299
Use NPDELAY for unsolicited input errors on TCAM 214
lines
Prioritize transactions 291

Chapter 14. Performance checklists 185


Chapter 15. MVS and DASD
The information in this chapter appears under the following headings:
v “Tuning CICS and MVS”
v “Splitting online systems: availability” on page 189
v “Making CICS nonswappable” on page 190
v “Isolating (fencing) real storage for CICS (PWSS and PPGRTR)” on page 190
v “Increasing the CICS region size” on page 192
v “Giving CICS a high dispatching priority or performance group” on page 192
v “Using job initiators” on page 193
v “Region exit interval (ICV)” on page 194
v “Use of LLA (MVS library lookaside)” on page 197
v “DASD tuning” on page 199

Tuning CICS and MVS


Tuning CICS for virtual storage under MVS depends on the following main
elements:
v MVS systems tuning
v VTAM tuning
v CICS tuning
v VSAM tuning.

Because tuning is a top-down activity, you should already have made a vigorous
effort to tune MVS before tuning CICS. Your main effort to reduce virtual storage
constraint and to get relief should be concentrated on reducing the life of the
various individual transactions: in other words, shortening task life.

This section describes some of the techniques that can contribute significantly to
shorter task life, and therefore, a reduction in virtual storage constraint.

The installation of a faster processor can cause the current instructions to be executed faster and, therefore, reduce task life (internal response time), because more transactions can be processed in the same period of time. Installing faster DASD can reduce the time spent waiting for I/O completion, and this shorter wait time for paging operations, data set index retrieval, or data set buffer retrieval can also reduce task life in the processor.

Additional real storage, if page-ins are frequently occurring (if there are more than
5 to 10 page-ins per second, CICS performance is affected), can reduce waits for
the paging subsystem.
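That rule of thumb can be expressed as a small check (the 5 and 10 page-in thresholds come from the text; the function and its messages are invented for illustration):

```python
def paging_health(page_ins_per_sec):
    """Classify a CICS address space's page-in rate against the
    5-10 page-ins per second rule of thumb from the text."""
    if page_ins_per_sec <= 5:
        return "ok"
    if page_ins_per_sec <= 10:
        return "borderline: CICS performance may be affected"
    return "constrained: consider adding real storage or fencing CICS"
```

The observed rate would come from RMF paging reports rather than anything CICS itself provides.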

MVS provides storage isolation for an MVS performance group, which allows you
to reserve a specific range of real storage for the CICS address space and to control
the page-rates for that address space based on the task control block (TCB) time
absorbed by the CICS address space during execution.



You can isolate CICS data on DASD drives, strings, and channels to minimize the
I/O contention suffered by CICS from other DASD activity in the system. Few
CICS online systems generate enough I/O activity to affect the performance of
CICS seriously if DASD is isolated in this manner.

So far (except when describing storage isolation and DASD sharing), we have
concentrated on CICS systems that run a stand-alone single CICS address space.
The sizes of all MVS address spaces are defined by the common requirements of
the largest subsystem. If you want to combine the workload from two or more
processors onto an MVS image, you must be aware of the virtual storage
requirements of each of the subsystems that are to execute on the single-image
ESA processor. Review the virtual storage effects of combining the following kinds
of workload on a single-image MVS system:
1. CICS and a large number (100 or more) of TSO users
2. CICS and a large IMS system
3. CICS and 5000 to 7500 VTAM LUs.
By its nature, CICS requires a large private region that may not be available when
the large system’s common requirements of these other subsystems are satisfied. If,
after tuning the operating system, VTAM, VSAM, and CICS, you find that your
address space requirements still exceed that available, you can split CICS using
one of three options:
1. Multiregion option (MRO)
2. Intersystem communication (ISC)
3. Multiple independent address spaces.

Adding large new applications or making major increases in the size of your
VTAM network places large demands on virtual storage, and you must analyze
them before implementing them in a production system. Careful analysis and
system specification can avoid performance problems arising from the addition of
new applications in a virtual-storage-constrained environment.

If you have not made the necessary preparations, you usually become aware of
problems associated with severe stress only after you have attempted to implement
the large application or major change in your production system. Some of these
symptoms are:
v Poor response times
v Short-on-storage
v Program compression
v Heavy paging activity
v Many well-tested applications suddenly abending with new symptoms
v S80A and S40D abends
v S822 abends
v Dramatic increase in I/O activity on DFHRPL program libraries.

Various chapters in the rest of this book deal with specific, individual operands
and techniques to overcome these problems. They tell you how to minimize the
use of virtual storage in the CICS address space, and how to split it into multiple
address spaces if your situation requires it.

For an overall description of ESA virtual storage, see “Appendix F. MVS and CICS
virtual storage” on page 615.



Reducing MVS common system area requirements
This can be the most productive area for tuning. CICS installations that have not
previously tuned their ESA system may be able to recover 1.5 to 2.0 megabytes of
virtual storage. This topic is outside the scope of this book, but you should
investigate it fully before tuning CICS. A manual that gives information about this
is the OS/390 MVS Initialization and Tuning Reference manual.

Splitting online systems: availability


Splitting the CICS system into two or more separate address spaces may lead to
improved availability. If CICS failures are being caused by application program
errors, for example, separating out the failing application can improve overall
availability. This can also give virtual storage gains and, in addition, can allow you
to use multiprocessors and MVS images more efficiently. See “Splitting online
systems: virtual storage” on page 284 for more information. A fuller account can be
found in the System/390 MVS Sysplex Application Migration Guide (GC28-1211).

The availability of the overall system may be improved by splitting the system
because the effects of a failure can be limited or the time to recover from the
failure can be reduced.

The main ways of splitting a system for availability are to have:
v Terminal owning regions. With one or more terminal owning regions (TORs) using
transaction routing, availability can be improved because a TOR is less likely to
fail because it contains no application code. The time taken to restart the failed
part of the system is reduced because the terminal sessions are maintained at
failure if the TOR continues to operate.
v Multiple application owning regions. Using multiple application owning regions
(AORs), you can separate unstable or new applications from the rest of the
system. If these applications cause a failure of that AOR, all other AORs are still
available. If the region susceptible to failure contains no terminals or files and
databases, it also tends to restart quickly.
Applications under test in AORs can use function shipping to access ‘live’ data,
which adds to the realism of the test environment.
v File owning regions. File requests from many CICS regions can be
function-shipped to file owning regions (FORs). The FORs contain no application
code and so are unlikely to fail, so that access to files can be maintained even if
other regions fail. Removing the files and databases from these other regions
speeds up their recovery by removing file allocation and opening time.
Having only one FOR in a system, or logical subset of a system, can reduce the
operational difficulties of restarting a system.

It is possible to split the regions in
different ways to those described so far, by having many regions all of which own
some terminals, some applications, and some files and databases. This type of
splitting is very complex to maintain and operate, and also needs careful
monitoring to ensure that the performance of the overall system is optimal. For
these reasons, a structured approach with each of the regions having a clearly
defined set of one type of resource is recommended.



Limitations
Splitting a CICS system requires increased real storage, increased processor cycles,
and extensive planning. These overheads are described in more detail in “Splitting
online systems: virtual storage” on page 284.

Recommendations
If availability of your system is an important requirement, both splitting systems
and the use of XRF should be considered. The use of XRF can complement the
splitting of systems by automating the recovery of the components.

When splitting your system, you should try to separate the sources of failure so
that as much of the rest of the system as possible is protected against their failure,
and remains available for use. Critical components should be backed up, or
configured so that service can be restored with minimum delay. Since the
advantages of splitting regions for availability can be compromised if the queueing
of requests for remote regions is not controlled, you should also review
“Intersystems session queue management” on page 307.

Making CICS nonswappable


You can take a variety of actions to cause the operating system to give CICS
preferential treatment in allocation of processor resources.

Making CICS nonswappable prevents the address space from being swapped out
in MVS, and reduces the paging overhead. Consider leaving only very lightly used
test systems swappable.

How implemented
You should consider making your CICS region nonswappable by using the
PPTNSWP option in the MVS Program Properties Table (PPT).

Limitations
Using the PPT will make all CICS systems (including test systems) nonswappable.
As an alternative, use the IPS. For more information about defining entries in the
PPT see the OS/390 MVS Programming: Callable Services for HLL manual.

How monitored
The DISPLAY ACTIVE (DA) command on SDSF gives you an indication of the
number of real pages used and the paging rate. Use RMF, the RMFMON command
on TSO to provide additional information. For more information about RMF see
“Resource measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.

Isolating (fencing) real storage for CICS (PWSS and PPGRTR)


Real storage isolation, or “fencing” in MVS, is a way of allocating real storage to
CICS alone, and can reduce paging problems.



Recommendations
Use PWSS=(a,b) and PPGRTR=(c,d) or PPGRT=(c,d) in the IEAIPSxx.

The PWSS=(a,b) parameter specifies the range (minimum,maximum) of page frames needed for the target working set size for an address space.

The target working set size of an XRF alternate CICS system can vary significantly
in different environments.

The PPGRTR=(c,d) or PPGRT=(c,d) parameter specifies the minimum and maximum paging rates to use in adjusting the target working set size specified in PWSS. With PPGRTR, the system resource manager (SRM) calculates the paging rate using the alternate system’s residency time, rather than the execution time used when PPGRT is specified.

For the XRF alternate system that has a low activity while in the surveillance
phase, PPGRTR is a better choice because the target working set size is adjusted on
the basis of page-faults per second, rather than page-faults per execution second.

During catchup and while tracking, the real storage needs of the XRF alternate
CICS system are increased as it changes terminal session states and the contents of
the TCT. At takeover, the real storage needs also increase as the alternate CICS
system begins to switch terminal sessions and implement emergency restart. In
order to ensure good performance and minimize takeover time, the target working
set size should be increased. This can be done in several different ways, two of
which are:
1. Parameter “b” in PWSS=(a,b) can be set to “*” which allows the working set
size to increase without limit, if the maximum paging rate (parameter “d” in
PPGRTR=(c,d)) is exceeded.
2. A command can be put in the CLT to change the alternate CICS system’s
performance group at takeover to one which has different real storage isolation
parameters specified.

If you set PWSS=(*,*) and PPGRTR=(1,2), CICS can use as much storage as it wants when the paging rate is greater than 2 per second. The values depend very much
on the installation and the MVS setup. The values suggested here assume that
CICS is an important address space and therefore needs service to be resumed
quickly.

For the definition and format of the storage isolation parameters in IEAIPSxx, see
the OS/390 MVS Initialization and Tuning Reference manual.
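To illustrate how these parameters interact, the adjustment can be modeled roughly as follows. This is a simplified sketch, not the actual SRM algorithm: the target working set is only allowed to move within the PWSS range, growing when the paging rate exceeds the PPGRTR maximum and shrinking when it falls below the minimum (the step size is invented for the sketch):

```python
def adjust_target(target, paging_rate, pwss, ppgrtr, step=50):
    """Illustrative model of storage-isolation working-set adjustment.

    target      - current target working set size, in page frames
    paging_rate - observed page-faults per second for the address space
    pwss        - (min, max) working set bounds; max of None models '*'
    ppgrtr      - (min, max) acceptable paging rates
    step        - frames to adjust by (invented for this sketch)
    """
    lo, hi = pwss
    rate_lo, rate_hi = ppgrtr
    if paging_rate > rate_hi:          # paging too hard: grow the fence
        target += step
    elif paging_rate < rate_lo:        # plenty of headroom: shrink it
        target -= step
    if hi is not None:
        target = min(target, hi)       # never exceed PWSS maximum
    return max(target, lo)             # never fall below PWSS minimum
```

Setting the PWSS maximum to “*” (modeled here as None) removes the upper bound, which is why it lets the working set grow without limit while the paging rate stays above the PPGRTR maximum.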

How implemented
See the OS/390 MVS Initialization and Tuning Reference manual.

How monitored
Use RMF (through the RMFMON command on TSO) for additional information. The
DISPLAY ACTIVE (DA) command on SDSF will give you an indication of the
number of real pages used and the paging rate.



Increasing the CICS region size
If all other factors in a CICS system are kept constant, increasing the region size
available to CICS allows an increase in the dynamic storage areas.

Changes to MVS and other subsystems over time generally reduce the amount of storage required below the 16MB line. Thus it may be possible to increase the CICS region size when a new release of MVS or of a non-CICS subsystem is installed.

To get any further increase, operating-system functions and storage areas (such as
the local shared queue area, LSQA), or other programs must be reduced. The
LSQA is used by VTAM and other programs, and any increase in the CICS region
size decreases the area available for the LSQA, SWA, and subpools 229 and 230. A
shortage in these subpools can cause S80A, S40D, and S822 abends.

If you specify a larger region, the value of the relevant dsasize system initialization
parameter must be increased or the extra space is not used.

How implemented
The region size is defined in the startup job stream for CICS. Other definitions are
made to the operating system or through operating-system console commands.

To determine the maximum region size, determine the size of your private area
from RMF II or one of the storage monitors available.

To determine the maximum region size you should allocate, use the following
formula:
Max region possible = private area size – system region size – (LSQA + SWA +
subpools 229 and 230)

The remaining storage is available for the CICS region; for safety, use 80% or 90%
of this number. If the system is static or does not change much, use 90% of this
number for the REGION= parameter; if the system is dynamic, or changes
frequently, 80% would be more desirable.

Note: You must maintain a minimum of 200KB of free storage between the top of
the region and the bottom of the ESA high private area (the LSQA, the SWA,
and subpools 229 and 230).
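
As a rough illustration, the formula and the 80%/90% safety margins can be
sketched in Python. The storage figures below are invented examples, not
measurements; take the real values for your system from RMF II or a storage
monitor.

```python
# Illustrative sketch of the region-size calculation described above.
# All sizes are in KB; the figures used are invented examples.

def max_region_kb(private_area, system_region, lsqa, swa, subpools_229_230):
    """Maximum region possible, per the formula in the text."""
    return private_area - system_region - (lsqa + swa + subpools_229_230)

def region_parameter_kb(max_region, system_is_static=True):
    """Apply the 90% (static system) or 80% (dynamic system) safety margin."""
    margin = 0.9 if system_is_static else 0.8
    return int(max_region * margin)

# Example with invented figures:
max_region = max_region_kb(private_area=9216, system_region=256,
                           lsqa=1024, swa=512, subpools_229_230=512)
print(max_region)                                          # 6912
print(region_parameter_kb(max_region))                     # 6220 (static, 90%)
print(region_parameter_kb(max_region, system_is_static=False))  # 5529 (dynamic, 80%)
```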

How monitored
Use RMF, or the RMFMON command on TSO, for additional information. For more
information about RMF see “Resource measurement facility (RMF)” on page 27 or
the MVS RMF User’s Guide.

Giving CICS a high dispatching priority or performance group


Giving CICS a high dispatching priority causes the processor to be accessible more
often when it is needed.

Performance groups in MVS are another way of giving CICS increased access to
the processor. Putting CICS at a high dispatching priority or in a favorable
performance group is most effective when CICS is processor-constrained.

The relative order of priority can be:
v VTAM
v Performance monitor
v Database
v CICS.

How implemented
Set the CICS priority above the automatic priority group (APG). See the OS/390
MVS Initialization and Tuning Reference manual for further information.

There are various ways to assign CICS a dispatching priority. The best is through
the ICS (PARMLIB member IEAICSxx). The ICS assigns performance group
numbers and enforces assignments. The dispatching priorities are specified in
PARMLIB member IEAIPSxx. Use APGRNG to capture the top ten priority sets (6
through 15). Specify a suitably high priority for CICS. There are priority levels that
change dynamically, but we recommend a simple fixed priority for CICS. Use
storage isolation only when necessary.

You cannot specify a response time, and you must give CICS enough resources to
achieve good performance.

See the OS/390 MVS Initialization and Tuning Reference manual for more
information.

How monitored
Use either the DISPLAY ACTIVE (DA) command on SDSF, or RMF (the
RMFMON command on TSO). For more information about RMF see “Resource
measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.

Using job initiators


The management of the MVS high private area can sometimes result in
fragmentation and stranded subpools caused by large imbedded free areas known
as “holes”.

Some fragmentation can also occur in a region when a job initiator starts multiple
jobs without being stopped and then started again. If you define the region as
having the maximum allowable storage size, it is possible to start and stop the job
the first time the initiator is used, but to have an S822 abend (insufficient virtual
storage) the second time the job is started. This is because of the fragmentation
that occurs.

In this situation, either the region has to be decreased, or the job initiator has to be
stopped and restarted.

Two methods of starting the CICS job are available to maximize the virtual storage
available to the region. One is to start and stop the initiator with each initialization
of CICS, executing CICS in a newly started initiator; and the other is to use the
MVS START command.

If CICS is executed as an MVS-started task (using the MVS START command)
instead of being submitted as a batch job, this not only ensures that a clean
address space is used (reducing the possibility of an S822 abend), but also saves a
significant amount of LSQA storage.

Effects
Some installations have had S822 abends after doing I/O generations or after
adding DD statements to large applications. An S822 abend occurs when you
request a REGION=nnnnK size that is larger than the amount available in the
address space.

The maximum region size that is available is difficult to define, and is usually
determined by trial and error. One of the reasons is that the size depends on the
system generation and on DD statements.

At least two techniques can be used to reduce storage fragmentation:
1. Dynamic allocation. You might consider writing a “front-end” program that
dynamically allocates the cataloged data sets for the step and then transfers
control (XCTL) to CICS. The effect of this is that only one eligible device list
(EDL) is used at a time.
2. UNITNAME. You might consider creating a new UNITNAME (via EDT-GEN or
IOGEN). This UNITNAME could be a subset of devices known to contain the
cataloged data set. By using the “unit override” feature of JCL, it could cause
the EDL to be limited to the devices specified in the UNITNAME.

Limitations
Available virtual storage is increased by starting new initiators to run CICS, or by
using MVS START. Startup time may be minimally increased.

How implemented
CICS startup and use of initiators are defined in an installation’s startup
procedures.

How monitored
Part of the job termination message IEF374I 'VIRT=nnnnnK' shows you the virtual
storage below the 16MB line, and another part 'EXT=nnnnnnnK' shows the virtual
storage above the 16MB line.
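
As a minimal sketch, the VIRT= and EXT= figures could be pulled out of the
message text programmatically. The exact IEF374I layout varies between systems,
and the sample message below is invented, so treat the pattern as an assumption
to adapt.

```python
import re

# Rough sketch: extract the below-the-line (VIRT=) and above-the-line (EXT=)
# virtual storage figures from an IEF374I job termination message.
# The message layout here is an assumption; adjust the pattern to your system.

def storage_from_ief374i(message):
    """Return (virt_kb, ext_kb) from an IEF374I message, or None if absent."""
    match = re.search(r"VIRT=\s*(\d+)K.*?EXT=\s*(\d+)K", message)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

# Invented sample message:
sample = "IEF374I JOB/CICSPROD/STOP ... VIRT= 6912K SYS= 300K EXT= 40960K SYS= 10236K"
print(storage_from_ief374i(sample))   # (6912, 40960)
```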

Region exit interval (ICV)


When CICS cannot dispatch a task, either because there are no tasks in the system
at that time, or because all tasks are waiting for data set or terminal I/O to finish,
CICS issues an operating-system WAIT. The ICV system initialization parameter
(see also “Terminal scan delay (ICVTSD)” on page 211) controls the length of this
wait (but bear in mind that any interrupt, for example, data set I/O or terminal
I/O, before any of these expires, causes CICS to be dispatched).

The ICV system initialization parameter specifies the maximum time in
milliseconds that CICS releases control to the operating system when there are no
transactions ready to resume processing. CICS issues a region wait in this case for
the time specified in the ICV system initialization parameter. If activity in the
system causes CICS to be dispatched sooner, this parameter has no effect.

In general, ICV can be used in low-volume systems to keep part of the CICS
management code paged in. Expiration of this interval results in a full terminal
control table (TCT) scan in non-VTAM environments, and controls the dispatching
of terminal control in VTAM systems with low activity. Redispatch of CICS by
MVS after the wait may be delayed because of activity in the supervisor or in
higher-priority regions, for example, VTAM. The ICV delay can affect the
shutdown time if no other activity is taking place.

The value of ICV acts as a backstop for MROBTCH (see “Batching requests
(MROBTCH)” on page 311).

Main effect
The region exit interval determines the maximum period between terminal control
full scans. However, the interval between full scans in very active systems may be
less than this, being controlled by the normally shorter terminal scan delay interval
(see “Terminal scan delay (ICVTSD)” on page 211). In such systems, ICV becomes
largely irrelevant unless ICVTSD has been set to zero.

Secondary effects
Whenever control returns to the task dispatcher from terminal control after a full
scan, ICV is added to the current time of day to give the provisional due time for
the next full scan. In idle systems, CICS then goes into an operating-system wait
state, setting the timer to expire at this time. If there are application tasks to
dispatch, however, CICS passes control to these and, if the due time arrives before
CICS has issued an operating-system WAIT, the scan is done as soon as the task
dispatcher next regains control.

In active systems, after the due time has been calculated by adding ICV, the scan
may be performed at an earlier time by application activity (see “Terminal scan
delay (ICVTSD)” on page 211).

Operating-system waits are not always for the duration of one ICV. They last only
until some event occurs. One possible event is the expiry of a time interval, but
often CICS regains control because of the completion of an I/O operation. Before
issuing the operating-system WAIT macro, CICS sets an operating-system timer,
specifying the interval as the time remaining until the next time-dependent activity
becomes due for processing. This is usually the next terminal control scan,
controlled by either ICV or ICVTSD, but it can be the earliest ICE expiry time, or
even less.
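
The timer-setting logic described above can be sketched as follows. This is a
simplified model for illustration only, not CICS internals; the parameter names
and values are invented.

```python
# Simplified sketch: before issuing the operating-system WAIT, CICS sets the
# timer to the interval until the next time-dependent activity becomes due.
# Times are milliseconds from "now"; names here are illustrative assumptions.

def timer_interval(next_full_scan_due, earliest_ice_expiry=None):
    """Interval to set on the OS timer before the WAIT.

    next_full_scan_due  -- due time of the next terminal control scan
                           (governed by ICV or ICVTSD)
    earliest_ice_expiry -- due time of the earliest interval control element
                           (ICE), if any time-dependent task is pending
    """
    candidates = [next_full_scan_due]
    if earliest_ice_expiry is not None:
        candidates.append(earliest_ice_expiry)
    return min(candidates)

print(timer_interval(1000))        # only the ICV-driven scan is pending -> 1000
print(timer_interval(1000, 250))   # an ICE expires first -> 250
```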

In high-activity systems, where CICS is contending for processor time with very
active higher-priority subsystems (VTAM, TSO, other CICS systems, or DB/DC),
control may be seized from CICS so often that CICS always has work to do and
never issues an operating-system WAIT.

Where useful
The region exit interval is useful in environments where batch or other CICS
systems are running concurrently.

Limitations
Too low a value can impair concurrent batch performance by causing frequent and
unnecessary dispatches of CICS by MVS. Too high a value can lead to an
appreciable delay before the system handles time-dependent events (such as
abends for terminal read or deadlock timeouts) after the due time.

A low ICV value does not prevent all CICS modules from being paged out. When
the ICV time interval expires, the operating system dispatches CICS task control
which, in turn, dispatches terminal control. CICS references only task control,
terminal control, TCT, and the CSA. No other modules in CICS are referenced. If
there is storage constraint they do not stay in real storage.

After the operating-system WAIT, redispatch of CICS may be delayed because of
activity in the supervisor or in higher-priority regions such as VTAM, and so on.

The ICV delay can affect the shutdown time if no other activity is taking place.

Recommendations
The time interval can be any decimal value in the range from 100 through 3600000
milliseconds.

In normal systems, set ICV to 1000-10000 milliseconds, or more.

A low interval value can enable much of the CICS nucleus to be retained, and not
be paged out at times of low terminal activity. This reduces the amount of paging
necessary for CICS to process terminal transactions (thus representing a potential
reduction in response time), sometimes at the expense of concurrent batch region
throughput. Large networks with high terminal activity tend to drive CICS without
a need for this value, except to handle the occasional, but unpredictable, period of
inactivity. These networks can usually function with a large interval (10000 to
30000 milliseconds). After a task has been initiated, the system recognizes its
requests for terminal services and the completion of the services, and overrides this
maximum delay interval.

Small systems or those with low terminal activity are subject to paging introduced
by other jobs running in competition with CICS. If you specify a low interval
value, key portions of the CICS nucleus are referenced more frequently, thus
reducing the probability of these pages being paged-out. However, the execution of
the logic, such as terminal polling activity, without performing productive work
might be considered wasteful.

You must weigh the need to increase the probability of residency by frequent but
unproductive referencing, against the extra overhead and longer response times
incurred by allowing the paging to occur. If you increase the interval size, more
productive work is performed at the expense of performance if paging occurs
during the periods of CICS activity.

Note: If the terminal control negative poll delay feature is used, the ICV value
selected must not exceed the negative poll delay value. If the negative poll
delay used is zero, any ICV value may be used (see “Negative poll delay
(NPDELAY)” on page 214).

How implemented
ICV is specified in the SIT or at startup, and can be changed using either the
CEMT or EXEC CICS SET SYSTEM (time) command. It is defined in units of
milliseconds, rounded down to the nearest multiple of ten. The default is 1000
(that is, one second; usually too low).

How monitored
The region exit interval can be monitored by the frequency of CICS
operating-system WAITs that are counted in “Dispatcher domain” on page 367.

Use of LLA (MVS library lookaside)


Modules loaded by CICS from the DFHRPL libraries may be managed by the MVS
LLA (library lookaside) facility. LLA is designed to minimize disk I/O by keeping
load modules in a VLF (virtual lookaside facility) dataspace and keeping a version
of the library directory in its own address space.

LLA manages modules (system or application) whose library names you have put
in the appropriate CSVLLA member in SYS1.PARMLIB.

There are two optional parameters in this member that affect the management of
specified libraries:
FREEZE
Tells the system always to use the copy of the directory that is maintained
in the LLA address space.
NOFREEZE
Tells the system always to search the directory that resides in DASD
storage.

However, FREEZE and NOFREEZE are only relevant when LLACOPY is not used.
When CICS issues a LOAD and specifies the directory entry (DE), it bypasses the
LLA directory processing, but determines from LLA whether the program is
already in VLF or must be fetched from DASD. For more information about the
FREEZE and NOFREEZE options, see the OS/390 MVS Initialization and Tuning
Guide.

The use of LLA to manage a very busy DFHRPL library can show two distinct
benefits:
1. Improved transaction response time
2. Better DASD utilization.

It is possible, as throughput increases, that DASD utilization actually decreases.
This is due to LLA’s observation of the load activity and its decisions about which
modules to stage (keep) in the VLF dataspace.

LLA does not automatically stage all members that are fetched. LLA attempts to
select those modules whose staging gives the best reductions in response time,
contentions, storage cost, and an optional user-defined quantity.

In addition to any USER-defined CICS DFHRPL libraries, LLA also manages the
system LNKLST. It is likely that staging some modules from the LNKLST could
have more effect than staging modules from the CICS libraries. LLA makes
decisions on what is staged to VLF only after observing the fetch activity in the
system for a certain period. For this reason it is possible to see I/O against a
program library even when it is managed by LLA.

Another contributing factor for continued I/O is the system becoming “MAXVIRT
constrained”, that is, the sum of bytes from the working set of modules is greater
than the MAXVIRT parameter for the LLA class of VLF objects. You can increase
this value by changing it in the COFVLF member in SYS1.PARMLIB. A value too
small can cause excessive movement of that VLF object class; a value too large can
cause excessive paging; both may increase the DASD activity significantly.

See the OS/390 MVS Initialization and Tuning Guide manual for information on LLA
and VLF parameters.

Effects of LLACOPY
CICS can use one of two methods for locating modules in the DFHRPL
concatenation. Either a build link-list (BLDL) macro or a LLACOPY macro is
issued to return the directory information to pass to the load request. Which macro
is issued depends upon the LLACOPY system initialization parameter and the
reason for locating the module.

The LLACOPY macro is used to update the LLA-managed directory entry for a
module or a list of modules. If a module which is LLA managed has an LLACOPY
issued against it, it results in a BLDL with physical I/O against the DCB specified.
If the directory information does not match that which is stored within LLA, the
LLA tables are then updated, keeping both subsystems synchronized. While this
activity takes place an ENQ for the resource SYSZLLA1.update is held. This is then
unavailable to any other LLACOPY request on the same MVS system and therefore
another LLACOPY request is delayed until the ENQ is released.

The BLDL macro also returns the directory information. When a BLDL is issued
against an LLA managed module, the information returned will be from the LLA
copy of the directory, if one exists. It will not necessarily result in physical I/O to
the dataset and may therefore be out of step with the actual dataset. BLDL does
not require the SYSZLLA1.update ENQ and is therefore less prone to being
delayed by BLDLs on the same MVS system. Note that it is not advisable to use the
NOCONNECT option when invoking the BLDL macro, because the DFHRPL
concatenated dataset may contain partitioned data set extended (PDSE) datasets.
A PDSE can contain more function than a PDS, but CICS may not recognise some
of this function. PDSEs also use more virtual storage.

The SIT Parameter LLACOPY


If you code LLACOPY=YES, the default, CICS issues an LLACOPY macro each time
a module is located from the RPL dataset. This is done either on the first
ACQUIRE or on any subsequent NEWCOPY or PHASEIN requests. This ensures
that CICS always obtains the latest copy of any LLA-managed modules. There is a
small chance of delay because of a failure to obtain an ENQ while another
LLACOPY completes and there is some extra pathlength involved in maintaining
the LLA tables.

If you code LLACOPY=NO, CICS never issues an LLACOPY macro. Instead, each
time the RPL dataset is searched for a module, a BLDL is issued.

If you code LLACOPY=NEWCOPY to request a new copy of an LLA-managed
module, a MODIFY LLA,REFRESH or F LLA,REFRESH must be issued before the
NEWCOPY is performed within CICS. (MODIFY LLA,REFRESH rebuilds LLA’s
directory for the entire set of libraries managed by LLA.) When you code
LLACOPY=NEWCOPY, CICS issues the LLACOPY macro when loading a module
as a result of a NEWCOPY or PHASEIN request. A BLDL macro is issued on the
first use of the module, but when it is due to one of these requests, the REFRESH
option results in loading the new module. If an out of date version of a module is
loaded upon its first use, the latest version would be used after a NEWCOPY or
PHASEIN. For more information about the LLACOPY system initialization
parameter, see the CICS System Definition Guide.
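
The macro choice described in this section can be summarized as a small
decision function. This is one reading of the text above, sketched for
illustration; it is not CICS source logic.

```python
# Decision sketch: which macro CICS issues to locate a module in the DFHRPL
# concatenation, as a reading of the LLACOPY descriptions above.

def locate_macro(llacopy_sit, reason):
    """llacopy_sit: 'YES', 'NO', or 'NEWCOPY' (SIT parameter)
    reason: 'FIRST_USE', 'NEWCOPY', or 'PHASEIN' (why the locate happens)"""
    if llacopy_sit == "YES":
        return "LLACOPY"   # LLACOPY each time a module is located
    if llacopy_sit == "NO":
        return "BLDL"      # LLACOPY is never issued
    # LLACOPY=NEWCOPY: BLDL on first use, LLACOPY for explicit refreshes
    return "LLACOPY" if reason in ("NEWCOPY", "PHASEIN") else "BLDL"

print(locate_macro("YES", "FIRST_USE"))    # LLACOPY
print(locate_macro("NO", "PHASEIN"))       # BLDL
print(locate_macro("NEWCOPY", "PHASEIN"))  # LLACOPY
```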

DASD tuning
The main solutions to DASD problems are to:
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.

Reducing the number of I/O operations


The principal ways of reducing the number of I/O operations are to:
v Allocate VSAM Hiperspace buffers
v Allocate additional address space buffers
v Use data tables when appropriate
v Use or increase the use of main temporary storage
v Eliminate or minimize program compression
v Review and improve the design of applications run on CICS
v Make use of a DASD controller cache, but only if data set placement tuning has
been done
v Minimize CI/CA splits by:
– Allocating ample free space (free space can be altered by key range during
load)
– Timely reorganizations of disk storage.

Tuning the I/O operations


This can reduce service time. The principal ways of tuning the I/O operations are
to:
v Specify the correct CI size. This has an effect on:
– The space used on the volume
– Transfer time
– Storage requirements for buffers
– The type of processing (direct or sequential).
v Specify the location of the VTOC correctly.
v Take care over data set placement within the volume.
v Use an appropriately fast device type and, if necessary, use a cache memory (but
only if data set placement tuning has been done and if there are sufficient
channels to handle the device speed).

Balancing I/O operations


This can reduce queue time. The principal ways of balancing I/O operations are to:
v Spread a high-use data set across multiple volumes.
v Minimize the use of shared DASD volumes between multiple processors.
v Place batch files and online files on separate volumes, especially:
– Spool files
– Sort files
– Assembler or compiler work files
– Page data sets.
v Place index and data on separate volumes (for VSAM KSDS files).
v Place concurrently used files on separate volumes. For example, a CICS journal
should be the only data set in use on its volume.

Take the following figures as guidelines for best DASD response times for online
systems:
v Channel busy: less than 30% (with CHPIDs this can be higher)
v Device busy: less than 35% for randomly accessed files
v Average response time: less than 20 milliseconds.
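
As a small sketch, measured values could be checked against these guideline
ceilings. The thresholds are the ones quoted above; the sample measurements
and metric names are invented.

```python
# Check measured DASD figures against the guideline ceilings quoted above
# for online systems. Metric names are illustrative assumptions.

GUIDELINES = {
    "channel_busy_pct": 30,   # can be higher with CHPIDs
    "device_busy_pct": 35,    # randomly accessed files
    "avg_response_ms": 20,
}

def outside_guidelines(measurements):
    """Return the names of any metrics that exceed the guideline values."""
    return [name for name, limit in GUIDELINES.items()
            if measurements.get(name, 0) > limit]

print(outside_guidelines({"channel_busy_pct": 25,
                          "device_busy_pct": 40,
                          "avg_response_ms": 18}))   # ['device_busy_pct']
```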

Aim for multiple paths to disk controllers because this allows dynamic path
selection to work.

Chapter 16. Networking and VTAM
This chapter includes the following topics:
v “Terminal input/output area (TYPETERM IOAREALEN or TCT TIOAL)”
v “Receive-any input areas (RAMAX)” on page 203
v “Receive-any pool (RAPOOL)” on page 204
v “High performance option (HPO) with VTAM” on page 207
v “SNA transaction flows (MSGINTEG, and ONEWTE)” on page 208
v “SNA chaining (TYPETERM RECEIVESIZE, BUILDCHAIN, and SENDSIZE)” on
page 209
v “Number of concurrent logon/logoff requests (OPNDLIM)” on page 210
v “Terminal scan delay (ICVTSD)” on page 211
v “Negative poll delay (NPDELAY)” on page 214
v “Compression of output terminal data streams” on page 215
v “Automatic installation of terminals” on page 216

Terminal input/output area (TYPETERM IOAREALEN or TCT TIOAL)


If you are using VTAM, the CEDA DEFINE TYPETERM IOAREALEN command
determines the initial size of the terminal input/output area (TIOA) to be passed
onto a transaction for each terminal. The syntax for IOAREALEN is
({0|value1},{0|value2}). This operand is used only for the first input message for
all transactions.

For TCAM, the DFHTCT TYPE=TERMINAL TIOAL=value macro, is the only way
to adjust this value.

One value defining the minimum size is used for non-SNA devices, while two
values specifying both the minimum and maximum size are used for SNA devices.

This book does not discuss the performance aspects of the CICS Front End
Programming Interface. See the CICS Front End Programming Interface User’s Guide
for more information.

Effects
When value1,0 is specified for IOAREALEN, value1 is the minimum size of the
terminal input/output area that is passed to an application program when a
RECEIVE command is issued. If the size of the input message exceeds value1, the
area passed to the application program is the size of the input message.

When value1, value2 is specified, value1 is the minimum size of the terminal
input/output area that is passed to an application program when a RECEIVE
command is issued. Whenever the size of the input message exceeds value1, CICS
will use value2. If the input message size exceeds value2, the node abnormal
condition program sends an exception response to the terminal.

If you specify ATI(YES), you must specify an IOAREALEN of at least one byte.

© Copyright IBM Corp. 1983, 1999 201


For TCAM supported devices, if the TIOAL operand is omitted, it defaults to the
INAREAL length value in the TCT TYPE=LINE operand. Do not omit the TIOAL
operand for remote terminals.

Limitations
Real storage can be wasted if the IOAREALEN (value1) or TIOAL value is too
large for most terminal inputs in the network. If IOAREALEN (value1) or TIOAL
is smaller than most initial terminal inputs, excessive GETMAIN requests can
occur, resulting in additional processor requirements, unless IOAREALEN(value1)
or TIOAL is zero.

Recommendations
IOAREALEN(value1) or TIOAL should be set to a value that is slightly larger than
the average input message length for the terminal. The maximum value that may
be specified for IOAREALEN/TIOAL is 32767 bytes.

If a value of nonzero is required, the best size to specify is the most commonly
encountered input message size. A multiple of 64 bytes minus 21 allows for SAA
requirements and ensures good use of operating system pages.
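
The sizing rule above (a multiple of 64 bytes minus 21) can be sketched as
follows; the helper name is invented for illustration.

```python
# Sketch of the sizing rule above: pick the smallest value of the form
# (multiple of 64) - 21 that still covers a typical input message, so the
# TIOA plus its overhead packs neatly into 64-byte multiples.
# (The 21-byte figure is the one quoted in the rule above.)

def suggested_ioarealen(typical_message_len):
    multiples = 1
    while multiples * 64 - 21 < typical_message_len:
        multiples += 1
    return multiples * 64 - 21

print(suggested_ioarealen(100))   # 107  (2*64 - 21)
print(suggested_ioarealen(200))   # 235  (4*64 - 21)
```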

For VTAM, you can specify two values if inbound chaining is used. The first value
should be the length of the normal chain size for the terminal, and the second
value should be the maximum size of the chain. The length of the TIOA presented
to the task depends on the message length and the size specified for the TIOA.
(See the example in Figure 30.)

Without chain assembly:
v If the message is no longer than the TIOA size specified, the TIOA acquired is
of the specified size.
v If the message is longer than the TIOA size specified, the TIOA acquired is the
size of the message.

With chain assembly:
v If the message is no longer than value1, the TIOA acquired is of size value1.
v If the message is longer than value1 (but not longer than value2), the TIOA
acquired is of size value2.

Figure 30. Message length and terminal input/output area length

Avoid specifying too large a value1, for example, by matching it to the size of the
terminal display screen. This area is used only as input. If READ with SET is
specified, the same pointer is used by applications for an output area.

If too small a value is specified for value1, extra processing time is required for
chain assembly, or data is lost if inbound chaining is not used.

In general, a value of zero is best because it causes the optimum use of storage and
eliminates the second GETMAIN request. If automatic transaction initiation (ATI) is
used for that terminal, a minimum size of one byte is required.

The second value for SNA devices is used to prevent terminal streaming, and so
should be slightly larger than the largest possible terminal input in the network. If
a message larger than this second value is encountered, a negative response is
returned to the terminal, and the terminal message is discarded.

How implemented
For VTAM, the TIOA value is specified in the CEDA DEFINE TYPETERM
IOAREALEN attribute.

For TCAM, the TIOAL value can be specified in the terminal control table (TCT)
TYPE=TERMINAL operand. TIOAL defaults to the INAREAL value specified in
the TCT TYPE=LINE operand.

How monitored
RMF and NetView Performance Monitor (NPM) can be used to show storage usage
and message size characteristics in the network.

Receive-any input areas (RAMAX)


The system initialization parameter, RAMAX, specifies the size in bytes of the I/O
area that is to be allocated for each VTAM receive-any operation. These storage
areas are called receive-any input areas (RAIAs), and are used to receive the first
terminal input for a transaction from VTAM. All input from VTAM comes in
request/response units (RUs).

Storage for the RAIAs, which is above the 16MB line, is allocated by the CICS
terminal control program during CICS initialization, and remains allocated for the
entire execution of the CICS job step. The size of this storage is the product of the
RAPOOL and RAMAX system initialization parameters.

Effects
VTAM attempts to put any incoming RU into the initial receive-any input area,
which has the size of RAMAX. If this is not large enough, VTAM indicates that
and also states how many extra bytes are waiting that cannot be accommodated.

RAMAX is the largest size of any RU that CICS can take directly in the receive-any
command, and is a limit against which CICS compares VTAM’s indication of the
overall size of the RU. If there is more, VTAM saves it, and CICS gets the rest in a
second request.

With a small RAMAX, you reduce the virtual storage taken up in RAIAs but risk
more processor usage in VTAM retries to get any data that could not fit into the
RAIA.

For many purposes, the default RAMAX value of 256 bytes is adequate. If you
know that many incoming RUs are larger than this, you can always increase
RAMAX to suit your system.

For individual terminals, there are separate parameters that determine how large
an RU is going to be from that device. It makes sense for RAMAX to be at least as
large as the largest CEDA SENDSIZE for any frequently-used terminals.

Where useful
You can use the RAMAX system initialization parameter in any networks that use
the VTAM access method for terminals.

Limitations
Real storage can be wasted with a high RAMAX value, and additional processor
time can be required with a low RAMAX value. If the RAMAX value is set too
low, extra processor time is needed to acquire additional buffers to receive the
remaining data. Because most inputs are no larger than 256 bytes, the default
value of 256 is normally adequate.

Do not specify a RAMAX value that is less than the RUSIZE (from the CINIT) for
a pipeline terminal because pipelines cannot handle overlength data.

Recommendations
Code RAMAX with the size in bytes of the I/O area allocated for each receive-any
request issued by CICS. The maximum value is 32767.

Set RAMAX to be slightly larger than your CICS system input messages. If you
know the message length distribution for your system, set the value to
accommodate the majority of your input messages.

In any case, the size required for RAMAX need only take into account the first (or
only) RU of a message. Thus, messages sent using SNA chaining do not require
RAMAX based on their overall chain length, but only on the size of the constituent
RUs.

Receive-any input areas are taken from a fixed length subpool of storage. A size of
2048 may appear to be adequate for two such areas to fit on one 4KB page, but
only 4048 bytes are available in each page, so only one area fits on one page. A
size of 2024 should be defined to ensure that two areas, including page headers, fit
on one page.
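
The arithmetic behind this recommendation can be shown directly; the 4048-byte
usable figure is the one quoted above.

```python
# Arithmetic behind the 2048-versus-2024 recommendation above. Receive-any
# input areas come from a fixed-length subpool; of each 4KB page only 4048
# bytes are usable, the rest being page header overhead.

PAGE_SIZE = 4096
USABLE_PER_PAGE = 4048   # figure quoted in the text

def areas_per_page(ramax):
    return USABLE_PER_PAGE // ramax

print(areas_per_page(2048))   # 1 -- looks like half a page, but only one fits
print(areas_per_page(2024))   # 2 -- two areas, including page header, fit
```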

How implemented
RAMAX is a system initialization parameter.

How monitored
The size of RUs or chains in a network can be identified with a VTAM line or
buffer trace. The maximum size RUs are defined in the CEDA SENDSIZE attribute.

Receive-any pool (RAPOOL)


The RAPOOL system initialization parameter specifies the number of concurrent
receive-any requests that CICS is to process from VTAM. RAPOOL determines how
many receive-any buffers there are at any time and, therefore, if VTAM has a lot of
input simultaneously, it enables VTAM to put all the messages directly into CICS
buffers rather than possibly having to store them itself elsewhere. The first operand
(value1) is for non-HPO systems, the second operand (value2) is for HPO systems.

The HPO value for the non-HPO operand is derived according to the formula
shown in the CICS System Definition Guide. The second operand (value2) for HPO
systems is used with minimal adjustment by the formula.

Effects
Initially, task input from a terminal or session is received by the VTAM access
method and is passed to CICS if CICS has a receive-any request outstanding.

For each receive-any request, a VTAM request parameter list (RPL), a receive-any
control element (RACE), and a receive-any input area (RAIA)—the value specified
by RAMAX (see “Receive-any input areas (RAMAX)” on page 203) are set aside.
The total area set aside for VTAM receive-any operations is:

(maximum RAIA size + RACE size + RPL size) * RAPOOL

If HPO=YES, both RACE and RPL are above the 16MB line.

See page 203 for RAIA considerations.
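
As an illustration of the storage formula above, under assumed RACE and RPL
sizes (the actual control-block sizes are release-dependent and are invented
placeholders here):

```python
# Sketch of the receive-any storage formula above:
#   (maximum RAIA size + RACE size + RPL size) * RAPOOL
# RAMAX gives the maximum RAIA size; the RACE and RPL defaults below are
# invented placeholders, not real control-block sizes.

def receive_any_storage(ramax, rapool, race_size=32, rpl_size=112):
    return (ramax + race_size + rpl_size) * rapool

# Example: RAMAX=256 (the default) with RAPOOL=10
print(receive_any_storage(256, 10))   # 4000 bytes with the assumed sizes
```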

In general, input messages up to the value specified in RAPOOL are all processed
in one dispatch of the terminal control task. Because the processing of a
receive-any request is a short operation, at times more messages than the RAPOOL
value may be processed in one dispatch of terminal control. This happens when a
receive-any request completes before the terminal control program has finished
processing and there are additional messages from VTAM.

VTAM receive-any processing is for the first terminal message in a transaction, so
RAPOOL has no effect on further inputs for conversational tasks. Those additional
inputs are processed with VTAM receive-specific requests.

The pool is used only for the first input to start a task; it is not used for output or
conversational input. VTAM posts the event control block (ECB) associated with
the receive any input area. CICS then moves the data to the terminal I/O area
(TIOA) ready for task processing. The RAIA is then available for reuse.

Where useful
Use the RAPOOL operand in networks that use the VTAM access method for
terminals.

Limitations
If the RAPOOL value is set too low, this can result in terminal messages not being
processed in the earliest dispatch of the terminal control program, thereby
inducing transaction delays during high-activity periods. For example, if you use
the default and five terminal entries want to start up tasks, three tasks may be
delayed for at least the time required to complete the VTAM receive-any request
and copy the data and RPL. In general, no more than 5 to 10% of all receive-any
processing should be at the RAPOOL ceiling, with none being at the RAPOOL
ceiling if there is sufficient storage.
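
As a rough way to apply the 5 to 10% guideline, you can compare the number of
dispatches that hit the RAPOOL ceiling with the total number observed. The input
names below are hypothetical stand-ins for counts taken from the CICS VTAM
statistics, not actual statistics field names.

```python
# Rough check of the "no more than 5 to 10% at the RAPOOL ceiling" guideline.
# The arguments are hypothetical names for counts you would read from the
# CICS VTAM statistics, not actual statistics field names.

def at_ceiling_ratio(times_at_ceiling, total_dispatches):
    """Fraction of terminal control dispatches that hit the RAPOOL ceiling."""
    if total_dispatches == 0:
        return 0.0
    return times_at_ceiling / total_dispatches

ratio = at_ceiling_ratio(times_at_ceiling=30, total_dispatches=1000)
print(f"{ratio:.1%}")   # 3.0% - within the 5 to 10% guideline
print(ratio > 0.10)     # False - RAPOOL does not look undersized
```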

Chapter 16. Networking and VTAM 205


If the RAPOOL value is set too high, this can use excessive virtual storage, but
has little effect on real storage because the storage is not page-fixed and can
therefore be paged out.

Recommendations
Whether RAPOOL is significant or not depends on the environment of the CICS
system: whether, for example, HPO is being used.

In some cases, it may sometimes be more economical for VTAM to store the
occasional peak of messages in its own areas rather than for CICS itself to have a
large number of RAIAs, many of which are unused most of the time.

Furthermore, there are situations where CICS reissues a receive-any as soon as it
finds one satisfied. It thereby uses the same element over and over again in order
to bring in any extra messages that are in VTAM.

CICS maintains a VTAM RECEIVE ANY for n of the RPLs, where n is either the
RAPOOL value, or the MXT value minus the number of currently active tasks,
whichever is the smaller. See the CICS System Definition Guide for more information
about these SIT parameters.

A general recommendation is to code RAPOOL with the number of fixed request
parameter lists (RPLs) that you require. When it is not at MXT, CICS maintains a
receive-any request for each of these RPLs. The number of RPLs that you require
depends on the expected activity of the system, the average transaction lifetime,
and the MXT specified.

The RAPOOL value you set depends on the number of sessions, the number of
terminals, and the ICVTSD value (see page 211) in the system initialization table
(SIT). Initially, for non-HPO systems, you should set RAPOOL to 1.5 times your
peak local transaction rate per second (see note 2) plus the autoinstall rate. This can then be
adjusted by analyzing the CICS VTAM statistics and by resetting the value to the
maximum RPLs reached.

For HPO systems, a small value (<= 5) is usually sufficient if specified through the
value2 in the RAPOOL system initialization parameter. Thus RAPOOL=20, for
example, can be specified as either RAPOOL=(20) or RAPOOL=(20,5) to achieve
the same effect.
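
As a sketch of this initial sizing rule, assuming only the 1.5 multiplier and the
two rates described above (the value should then be adjusted from the CICS VTAM
statistics, using the maximum RPLs reached):

```python
# Sketch of the initial (non-HPO) RAPOOL sizing rule described above:
# 1.5 times the peak local transaction rate per second, plus the
# autoinstall rate.  This is only a starting point for tuning.
import math

def initial_rapool(peak_local_tx_per_sec, autoinstall_per_sec):
    # Round up so a fractional result still provides enough RPLs.
    return math.ceil(1.5 * peak_local_tx_per_sec + autoinstall_per_sec)

print(initial_rapool(10, 2))  # 17
```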

How implemented
RAPOOL is a system initialization parameter.

How monitored
The CICS VTAM statistics contain values for the maximum number of RPLs posted
on any one dispatch of the terminal control program, and the number of times the
RPL maximum was reached. This maximum value may be greater than the
RAPOOL value if the terminal control program is able to reuse an RPL during one
dispatch. See “VTAM statistics” on page 51 for more information.

2. The RAPOOL figure does not include MRO sessions, so you should set RAPOOL to a low value in application- or file-owning
regions (AORs or FORs).


High performance option (HPO) with VTAM
The MVS high performance option (HPO) can be used for processing VTAM
requests. The purpose of HPO is to reduce the transaction pathlength through
VTAM.

Effects
HPO bypasses some of the validating functions performed by MVS on I/O
operations, and implements service request block (SRB) scheduling. This shortens
the instruction pathlength and allows some concurrent processing on MVS images
for the VTAM operations because of the SRB scheduling. This makes it useful in a
multiprocessor environment, but not in a single processor environment.

Limitations
HPO requires CICS to be authorized, and some risks with MVS integrity are
involved because a user-written module could be made to replace one of the CICS
system initialization routines and run in authorized mode. This risk can be reduced
by RACF protecting the CICS SDFHAUTH data set.

Use of HPO saves processor time, and does not increase real or virtual storage
requirements or I/O contention. The only expense of HPO is the potential security
exposure that arises because of a deficiency in validation.

Recommendations
The general recommendation is that all production systems with vetted
applications can use HPO. It is totally application-transparent and introduces no
function restrictions while providing a reduced pathlength through VTAM. In the
case of VTAM, the reduced validation does not induce any integrity loss for the
messages.

How implemented
The SVCs and use of HPO are specified in the system initialization table (SIT) and,
if the default SVC numbers are acceptable, no tailoring of the system is required.

How monitored
There is no direct measurement of HPO. One way to tell if it is working is to take
detailed measurements of processor usage with HPO turned on (SIT option) and
with it turned off. Depending on the workload, you may not see much difference.
Another way to check whether it is working is that you may see a small increase
in the SRB scheduling time with HPO turned on.

RMF can give general information on processor usage. An SVC trace can show
how HPO was used.

Note that you should take care when using HPO in a system that is being used
for early testing of a new application or CICS code (a new release or PUT). Much
of the pathlength reduction is achieved by bypassing control block verification
code in VTAM. Untested code might corrupt the control blocks that CICS passes
to VTAM, and unvalidated applications can lead to security exposures.


SNA transaction flows (MSGINTEG, and ONEWTE)
Within CICS, the MSGINTEG option can be used to control the communication
requests and responses that are exchanged between the terminals in a network and
the VTAM and NCP communications programs.

Effects
One of the options in Systems Network Architecture (SNA) is whether the
messages exchanged between CICS and a terminal are to be in definite or
exception response mode. Definite response mode requires both the terminal and
CICS to provide acknowledgment of receipt of messages from each other on a
one-to-one basis.

SNA also ensures message delivery through synchronous data link control (SDLC),
so definite response is not normally required. Specifying message integrity
(MSGINTEG) causes the sessions for which it is specified to operate in definite
response mode.

In other cases, the session between CICS and a terminal operates in exception
response mode, and this is the normal case.

In SNA, transactions are defined within brackets. A begin bracket (BB) command
defines the start of a transaction, and an end bracket (EB) command defines the
end of that transaction. Unless CICS knows ahead of time that a message is the last
of a transaction, it must send an EB separate from the last message if a transaction
terminates. The EB is an SNA command, and can be sent with the message,
eliminating one required transmission to the terminal.

Specifying the ONEWTE option for a transaction implies that only one output
message is to be sent to the terminal by that transaction, and allows CICS to send
the EB along with that message. Only one output message is allowed if ONEWTE
is specified and, if a second message is sent, the transaction is abended.

The second way to allow CICS to send the EB with a terminal message is to code
the LAST option on the last terminal control or basic mapping support SEND
command in a program. Multiple SEND commands can be used, but the LAST
option must be coded for the final SEND in a program.

The third (and most common) way is to issue SEND without WAIT as the final
terminal communication. The message is then sent as part of task termination.

You have the following options:
v Not specifying MSGINTEG
v Specifying MSGINTEG (which simply asks for definite response to be forced)

Where useful
The above options can be used in all CICS systems that use VTAM.

Limitations
The MSGINTEG option causes additional transmissions to the terminal.
Transactions remain in CICS for a longer period, and tie up virtual storage and


access to resources (primarily enqueues). MSGINTEG is required if the transaction
must know that the message was delivered.

When MSGINTEG is specified, the TIOA remains in storage until the response is
received from the terminal. This option can increase the virtual storage
requirements for the CICS region because of the longer duration of the storage
needs.

How implemented
With resource definition online (RDO) using the CEDA transaction, protection can
be specified in the PROFILE definition by means of the MSGINTEG, and ONEWTE
options. The MSGINTEG option is used with SNA LUs only. See the CICS Resource
Definition Guide for more information about defining a PROFILE.

How monitored
You can monitor the use of the above options from a VTAM trace by examining
the exchanges between terminals and CICS and, in particular, by examining the
contents of the request/response header (RH).

SNA chaining (TYPETERM RECEIVESIZE, BUILDCHAIN, and SENDSIZE)
Systems Network Architecture (SNA) allows terminal messages to be chained, and
lets large messages be split into smaller parts while still logically treating the
multiple message as a single message.

Input chain size and characteristics are normally dictated by the hardware
requirements of the terminal in question, and so the CEDA BUILDCHAIN and
RECEIVESIZE attributes have default values which depend on device attributes.
The size of an output chain is specified by the CEDA SENDSIZE attribute.

Effects
Because the network control program (NCP) also segments messages into 256-byte
blocks for normal LU Type 0, 1, 2, and 3 devices, a SENDSIZE value of zero
eliminates the overhead of output chaining. A value of 0 or 1536 is required for
local devices of this type.

If you specify the CEDA SENDSIZE attribute for intersystem communication (ISC)
sessions, this must match the CEDA RECEIVESIZE attribute in the other system.
The CEDA SENDSIZE attribute or TCT BUFFER operand controls the size of the
SNA element that is to be sent, and the CEDA RECEIVESIZEs need to match so
that there is a corresponding buffer of the same size able to receive the element.

If you specify BUILDCHAIN(YES), CICS assembles a complete chain of elements
before passing them to an application. If you do not specify BUILDCHAIN(YES),
each individual RU is passed to an individual receive-any in the application. With
SNA/3270 devices, BMS does not work correctly unless you specify
BUILDCHAIN(YES).


If you are dealing with very large inbound elements that exceed 32KB, you
cannot use the BUILDCHAIN attribute or CHNASSY operand. You
must use multiple individual RUs, and this extends the transaction life in the
system.

Where useful
Chaining can be used in systems that use VTAM and SNA terminals of types that
tolerate chaining.

Limitations
If you specify a low CEDA SENDSIZE value, this causes additional processing and
real and virtual storage to be used to break the single logical message into multiple
parts.

Chaining may be required for some terminal devices. Output chaining can cause
flickering on display screens, which can annoy users. Chaining also causes
additional I/O overhead between VTAM and the NCP by requiring additional
VTAM subtasks and STARTIO operations. This additional overhead is eliminated
with applicable ACF/VTAM releases by making use of the large message
performance enhancement option (LMPEO).

Recommendations
The CEDA RECEIVESIZE value for IBM 3274-connected display terminals should
be 1024; for IBM 3276-connected display terminals it should be 2048. These values
give the best line characteristics while keeping processor usage to a minimum.

How implemented
Chaining characteristics are specified in the CEDA DEFINE TYPETERM statement
with the SENDSIZE, BUILDCHAIN, and RECEIVESIZE attributes.

How monitored
Use of chaining and chain size can be determined by examining a VTAM trace.
You can also use the CICS internal and auxiliary trace facilities, in which the VIO
ZCP trace shows the chain elements. Some of the network monitor tools such as
NetView Performance Monitor (NPM) give this data.

Number of concurrent logon/logoff requests (OPNDLIM)


The OPNDLIM operand defines the number of concurrent VTAM logons and
logoffs that are to be processed by CICS. In systems running ACF/VTAM Release
3.2 and later, this operand is not necessary and will be ignored. In all other
instances this system initialization parameter limits the number of concurrent
logon OPNDST and logoff CLSDST requests. The smaller this value, the smaller
the amount of storage that is required during the open and close process.

Each concurrent logon/logoff requires storage in the CICS dynamic storage areas
for the duration of that processing.


Effects
Particularly when logons are being done automatically with either the CICS
CONNECT=AUTO facility or the VTAM LOGAPPL facility, large numbers of
logons can occur at CICS startup or restart times. In systems running ACF/VTAM
with a release prior to 3.2 this can require significant amounts of storage, which
can be reduced with the OPNDLIM operand. In ACF/VTAM Release 3.2 and later
systems, this operand is not necessary and will be ignored.

If an automatic logon facility is required, the LOGAPPL facility offers two
advantages. It requires approximately 3500 bytes less storage in VTAM than the
CONNECT=AUTO facility, and it logs terminals back on to CICS each time the
device is activated to VTAM, rather than only at CICS initialization.

Where useful
The OPNDLIM system initialization parameter can be used in CICS systems that
use VTAM as the terminal access method.

The OPNDLIM system initialization parameter can also be useful if there are times
when all the user community tends to log on or log off at the same time, for
example, during lunch breaks.

Limitations
If too low a value is specified for OPNDLIM, real and virtual storage requirements
are reduced within CICS and VTAM buffer requirements may be cut back, but
session initializations and terminations take longer.

Recommendations
Use the default value initially and make adjustments if statistics indicate that too
much storage is required in your environment or that the startup time (DEFINE
TYPETERM AUTOCONNECT attribute in CEDA) is excessive.

OPNDLIM should be set to a value not less than the number of LUs connected to
any single VTAM line.

How implemented
OPNDLIM is a system initialization parameter.

How monitored
Logon and logoff activities are not reported directly by CICS or any measurement
tools, but can be analyzed using the information given in a VTAM trace or VTAM
display command.

Terminal scan delay (ICVTSD)


The terminal scan delay (ICVTSD) system initialization parameter determines the
frequency with which CICS attempts to process terminal output requests.


In general, this value defines the time that the terminal control program must wait
to process:
v Non-VTAM terminal I/O requests with WAIT specified
v Non-VTAM output deferred until task termination
v Automatic transaction initiation (ATI) requests
v VTAM terminal management, including output request handling, in busy CICS
systems with significant application task activity.

This last case arises from the way that CICS scans active tasks.

On CICS non-VTAM systems, the delay value specifies how long the terminal
control program must wait after an application terminal request, before it carries
out a TCT scan. The value thus controls batching and delay in the associated
processing of terminal control requests. In a low-activity system, it controls the
dispatching of the terminal control program.

The batching of requests reduces processor time at the expense of longer response
times. On CICS VTAM systems, it influences how quickly the terminal control
program completes VTAM request processing, especially when the MVS high
performance option (HPO) is being used.

Effects
VTAM
In VTAM networks, a low ICVTSD value does not cause full TCT scans because
the input from or output to VTAM terminals is processed from the activate queue
chain, and only those terminal entries are scanned.

With VTAM terminals, CICS uses bracket protocol to indicate that the terminal is
currently connected to a transaction. The bracket is started when the transaction is
initiated, and ended when the transaction is terminated. This means that there
could be two outputs to the terminal per transaction: one for the data sent and one
when the transaction terminates containing the end bracket. In fact, only one
output is sent (except for WRITE/SEND with WAIT and definite response). CICS
holds the output data until the next terminal control request or termination. In this
way it saves processor cycles and line utilization by sending the message and end
bracket or change direction (if the next request was a READ/RECEIVE) together in
the same output message (PIU). When the system gets very busy, terminal control
is dispatched less frequently and becomes more dependent upon the value
specified in ICVTSD. Because CICS may not send the end bracket to VTAM for an
extended period of time, the life of a transaction can be extended. This keeps
storage allocated for that task for longer periods and potentially increases the
amount of virtual storage required for the total CICS dynamic storage areas.

Setting ICVTSD to zero can overcome this effect.

Non-VTAM
ICVTSD is the major control on the frequency of full terminal control table (TCT)
scanning of non-VTAM terminals. In active systems, a full scan is done
approximately once every ICVTSD. The average extra delay before sending an
output message should be about half this period.
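
Because an output request arrives at a random point within the scan interval, the
expected extra delay averages about half of ICVTSD. A trivial sketch of this
arithmetic:

```python
# Back-of-envelope sketch: with a full TCT scan roughly every ICVTSD
# milliseconds, an output request arrives at a random point in the
# interval, so the average added delay is about ICVTSD / 2.

def average_extra_delay_ms(icvtsd_ms):
    return icvtsd_ms / 2

for icvtsd in (500, 1000):
    print(icvtsd, average_extra_delay_ms(icvtsd))  # 250.0 and 500.0 ms
```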


In non-VTAM networks, partial scans occur for other reasons, such as an input
arriving from a terminal, and any outputs for that line are processed at the same
time. For that reason, a value of between 0.5 and one second is normally a
reasonable setting for non-VTAM networks.

CICS scans application tasks first, unless there is an ICVTSD-driven scan. In a
highly utilized system, input and output messages may be unreasonably delayed if
too large an ICVTSD value is specified.

All networks
The ICVTSD parameter can be changed in the system initialization table (SIT) or
through JCL parameter overrides. If you are having virtual storage constraint
problems, it is highly recommended that you reduce the value specified in
ICVTSD. A value of zero causes the terminal control task to be dispatched most
frequently. If you also have a large number of non-VTAM terminals, this may
increase the amount of nonproductive processor cycles. A value of 100 to 300
milliseconds may be more appropriate for that situation. In a pure VTAM
environment, however, the overhead is not significant, unless the average
transaction has a very short pathlength, and ICVTSD should be set to zero for a
better response time and best virtual storage usage.

Where useful
The ICVTSD system initialization parameter can be used in all except very
low-activity CICS systems.

Limitations
In TCAM systems, a low ICVTSD value can cause excessive processor time to be
used in slower processor units, and can delay the dispatch of user tasks because
too many full TCT scans have to be done. A high ICVTSD value can increase
response time by an average of one half of the ICVTSD value, and can tie up
resources owned by the task because the task takes longer to terminate. This
applies to conversational tasks.

In VTAM systems, a low value adds the overhead of scanning the activate queue
TCTTE chain, which is normally a minor consideration. A high value in
high-volume systems can increase task life and tie up resources owned by that task
for a longer period of time; this can be a significant consideration.

A low, nonzero value of ICVTSD can cause CICS to be dispatched more frequently,
which increases the overhead of performance monitoring.

Recommendations
Set ICVTSD to a value less than the region exit time interval (ICV), which is also in
the system initialization table (see page 192). Use the value of zero in an
environment that contains only VTAM terminals and consoles, unless your
workload consists of many short transactions. ICVTSD=0 in a VTAM terminal-only
environment is not recommended for a CICS workload consisting of low terminal
activity but with high task activity. Periods of low terminal activity can lead to
delays in CSTP being dispatched. Setting ICVTSD=100-500 resolves this by causing
CSTP to be dispatched regularly. For non-VTAM systems, specify the value of zero
only for small networks (1 through 30 terminals).


For almost all systems that are not “pure” VTAM, the range should be somewhere
in the region of 100 milliseconds to 1000 milliseconds. ICVTSD can be varied
between, say, 300 and 1000 milliseconds without a very significant effect on the
response time, but increasing the value decreases the processor overhead. An
ICVTSD larger than 1000 milliseconds may not give any further improvement in
processor usage, at a cost of longer response times.

If ICVTSD is reduced and there is ample processor resource, a small reduction
in response time can be achieved. If you go below 250 milliseconds, any
improvement in response time is likely to seem negligible to the end user and
would have an increased effect on processor usage.

The recommended absolute minimum level, for systems that are not “pure”
VTAM, is approximately 250 milliseconds or, in really high-performance,
high-power systems that are “pure” VTAM, 100 milliseconds.

How implemented
The ICVTSD system initialization parameter is defined in units of milliseconds.
Use the commands CEMT or EXEC CICS SET SYSTEM SCANDELAY (nnnn) to
reset the value of ICVTSD.

In reasonably active systems, a nonzero ICVTSD virtually replaces ICV (see page
194) because the time to the next TCT full scan (non-VTAM) or sending of output
requests (VTAM) is the principal influence on operating system wait duration.

How monitored
Use RMF to monitor task duration and processor requirements. The dispatcher
domain statistics reports the value of ICVTSD.

Negative poll delay (NPDELAY)


NPDELAY in the TCT TYPE=LINE macro helps reduce unsolicited-input errors on
TCAM lines.

NPDELAY and unsolicited-input messages in TCAM


Any CICS users who do not want unsolicited-input messages to be discarded
should consider using the optional NPDELAY operand for the DFHTCT
TYPE=LINE macro used to define each of the TCAM queues. This allows you to
define a time interval during which CICS suspends the reading of messages from
the respective TCAM queue following the receipt of unsolicited input. Upon
completion of the preceding transaction associated with the same terminal as the
unsolicited message, within the NPDELAY-defined interval, processing resumes
normally.

Effects
If the preceding transaction fails to terminate during the NPDELAY interval, the
X'87' unsolicited-input error condition is raised.

When NPDELAY is used, it is frequently advisable to define several input queues
to CICS, each defined by a separate LINE entry but with each entry naming the
same corresponding output queue using the OUTQ parameter. An equivalent
number of queues must be defined to TCAM with the TPROCESS macro. This set
of queues can then be processed as a “cascade” list within TCAM.

Where useful
When several queues are defined for TCAM-to-CICS processing, CICS can suspend
the acceptance of input messages from one or more of the queues without
completely stopping the flow of input from TCAM to CICS.

Choosing an appropriate value for NPDELAY is a matter of tuning. Even with the
“cascade” list approach, some messages may be held up behind an unsolicited
message. The objective should be to find the minimum value that can be specified
for NPDELAY which is sufficient to eliminate the unsolicited-input errors.

Compression of output terminal data streams


For output messages, CICS provides user exits with access to the entire output
data stream. User code can be written to remove redundant characters from the
data stream before the data stream is sent to the terminal. This technique can
produce a dramatic improvement in response times if the proportion of characters
not needed is large, because telecommunication links are usually the slowest paths
in the network.

Limitations
Some additional processor cycles are required to process the exit code, and the
coding of the exit logic also requires some effort. Use of a compression exit reduces
the storage requirements of VTAM or TCAM and NCP, and reduces line
transmission time.

Recommendations
The simplest operation is to replace redundant characters, especially blanks, with a
repeat-to-address sequence in the data stream for 3270-type devices.

Note: The repeat-to-address sequence is not handled very quickly on some types
of 3270 cluster controller. In some cases, alternatives may give superior
performance. For example, instead of sending a repeat-to-address sequence
for a series of blanks, you should consider sending an ERASE and then
set-buffer-address sequences to skip over the blank areas. This is satisfactory
if nulls are acceptable in the buffer as an alternative to blanks.
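
As an illustration of the blank-replacement idea, the sketch below compresses runs
of blanks in an outbound buffer. The RA order byte (X'3C') is from the 3270 data
stream, but the two-byte address used here is a deliberate simplification (real 3270
buffer addresses use a 6-bit character encoding), so treat this as the shape of the
exit logic rather than working 3270 output.

```python
# Illustrative sketch of a compression-exit idea: replace runs of EBCDIC
# blanks (X'40') with a 3270 repeat-to-address (RA) sequence.  The RA
# order is X'3C'; the two-byte big-endian "stop address" used here is a
# simplification of the real 3270 6-bit buffer-address encoding.
RA_ORDER = 0x3C
MIN_RUN = 5  # a run must beat the 4-byte RA sequence to be worth replacing

def compress_blanks(data: bytes, blank: int = 0x40) -> bytes:
    out = bytearray()
    i, n = 0, len(data)
    while i < n:
        if data[i] == blank:
            j = i
            while j < n and data[j] == blank:
                j += 1
            if j - i >= MIN_RUN:
                # RA order, simplified 2-byte stop address, repeated character
                out += bytes([RA_ORDER, (j >> 8) & 0xFF, j & 0xFF, blank])
            else:
                out += data[i:j]     # short run: cheaper to send as-is
            i = j
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

# 'A', eight blanks, 'B' (EBCDIC): 10 bytes shrink to 6.
print(compress_blanks(b"\xC1" + b"\x40" * 8 + b"\xC2").hex())
```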

Another technique for reducing the amount of data transmitted is to turn off any
modified data tags on protected fields in an output data stream. This eliminates
the need for those characters to be transmitted back to the processor on the next
input message, but you should review application dependencies on those fields
before you try this.

There may be other opportunities for data compression in individual systems, but
you may need to investigate the design of those systems thoroughly before you
can implement them.


How implemented
The global user exits used to compress terminal messages are the XZCOUT1 exit
for VTAM devices, and the XTCTOUT exit for TCAM-supported devices. See the
CICS Customization Guide for programming information.

How monitored
The contents of output terminal data streams can be examined in either a VTAM or
TCAM trace.

Automatic installation of terminals


During autoinstall processing, CICS obtains storage from the control subpool in the
extended CICS dynamic storage area (ECDSA), to handle each autoinstall request.
The amount of virtual storage obtained is mainly determined by the length of the
CINIT request unit, which varies for different LU types. For a typical autoinstall
request from an LU6.2 terminal, the amount of dynamic virtual storage obtained is
between 120 to 250 bytes.

Overall, the principal consumer of CICS resource in autoinstall processing is the
autoinstall task (CATA) itself. If, for some reason, the autoinstall process is not
proceeding at the rate expected during normal operations, there is a risk that the
system could be filled with CATA transaction storage.

Maximum concurrent autoinstalls (AIQMAX)


This system initialization parameter codes the maximum number of devices that
can be queued concurrently for autoinstall.

The AIQMAX value does not limit the total number of devices that can be
autoinstalled.

The restart delay parameter (AIRDELAY)


This system initialization parameter specifies whether you want autoinstalled
terminal definitions to be retained by CICS across a restart. The value of the restart
delay is specified as “hhmmss” and the default is “000700”, which is seven
minutes. This means that if a terminal does not log on to CICS within seven
minutes after an emergency restart, its terminal entry is scheduled for deletion.
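
A sketch of the interval arithmetic for the “hhmmss” form (this is not CICS
parsing code, just the conversion it implies):

```python
# Sketch: convert an AIRDELAY-style "hhmmss" value to seconds.
def hhmmss_to_seconds(value: str) -> int:
    hh, mm, ss = int(value[0:2]), int(value[2:4]), int(value[4:6])
    return hh * 3600 + mm * 60 + ss

print(hhmmss_to_seconds("000700"))  # 420 seconds = the seven-minute default
```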

Setting the restart delay to zero means that you do not want CICS to re-install the
autoinstalled terminal entries from the global catalog during emergency restart. In
this case, CICS does not write the terminal entries to the catalog while the terminal
is being autoinstalled. This can have positive performance effects on the following
processes:

Autoinstall: By eliminating the I/O activity, autoinstall has a shorter pathlength
and becomes more processor-intensive. So, in general, the time taken to autoinstall
a terminal is reduced. However, the response time of other tasks may increase
slightly because CATA has a high priority and does not have to wait for as much
I/O activity.


Emergency and warm restart: When no autoinstalled terminal entries are cataloged,
CICS has to restore fewer entries from the GCD during emergency restart. Thus, if
you have a large number of autoinstalled terminals, the restart time can be
significantly improved when restart delay is set to zero.

Normal shutdown: CICS deletes AI terminal entries from the GCD during normal
shutdown unless they were not cataloged (AIRDELAY=0) and the terminal has not
been deleted. If the restart delay is set to zero, CICS has not cataloged terminal
entries when they were autoinstalled, so they are not deleted. This can reduce
normal shutdown time.

XRF takeover: The system initialization parameter, AIRDELAY, should not affect
XRF takeover. The tracking process still functions as before regardless of the value
of the restart delay. Thus, after a takeover, the alternate system still has all the
autoinstalled terminal entries. However, if a takeover occurs before the catchup
process completes, some of the autoinstalled terminals have to log on to CICS
again. The alternate CICS system has to rely on the catalog to complete the
catchup process and, if the restart delay is set to zero in the active system, the
alternate system is not able to restore the autoinstalled terminal entries that have
not been tracked. Those terminals have to log on to the new CICS system, rather
than being switched or rebound after takeover.

You have to weigh the risk of having some terminal users log on again because
tracking has not completed, against the benefits introduced by setting the restart
delay to zero. Because catchup takes only a few minutes, the chance of such a
takeover occurring is usually small.

The delete delay parameter (AILDELAY)


The delete delay system initialization parameter lets you control how long an
autoinstalled terminal entry remains available after the terminal has logged off.
The default value of zero means that the terminal entry is scheduled for deletion
as soon as the terminal is logged off. Otherwise, CICS schedules the deletion of the
TCTTE as a timer task.

In general, setting the delete delay to a nonzero value can improve the
performance of CICS when many autoinstalled terminals are logging on and off
during the day. However, this does mean that unused autoinstalled terminal entry
storage is not freed for use by other tasks until the delete delay interval has
expired. This parameter provides an effective way of defining a terminal whose
storage lifetime is somewhere between that of an autoinstalled terminal and a
statically defined terminal.

The effect of setting the delete delay to a nonzero value can have different effects
depending on the value of the restart delay:

Nonzero restart delay When the restart delay is nonzero, CICS catalogs
autoinstalled terminal entries in the global catalog.

If the delete delay is nonzero as well, CICS retains the terminal entry so that it is
re-used when the terminal logs back on. This can eliminate the overhead of:
v Deleting the terminal entry in virtual storage
v An I/O to the catalog and recovery log
v Re-building the terminal entry when the terminal logs on again.

Chapter 16. Networking and VTAM 217


Zero restart delay When the restart delay is zero, CICS does not catalog
autoinstalled terminal entries in the global catalog whatever value is specified for
the delete delay.

If the delete delay is nonzero, CICS retains the terminal entry so that it is re-used
when the terminal logs back on. This can save the overhead of deleting the
terminal entry in virtual storage and the rebuilding of the terminal entry when the
terminal logs on again.

Effects
You can control the use of resource by autoinstall processing in three ways:
1. By using the transaction class limit to restrict the number of autoinstall tasks
that can concurrently exist (see page 288).
2. By using the CATA and CATD transactions to install and delete autoinstall
terminals dynamically. If you have a large number of devices autoinstalled,
shutdown can fail due to the MXT system initialization parameter being
reached or CICS becoming short on storage. To prevent this possible cause of
shutdown failure, you should consider putting the CATD transaction in a class
of its own to limit the number of concurrent CATD transactions.
3. By specifying AIQMAX to limit the number of devices that can be queued for
autoinstall. This protects against abnormal consumption of virtual storage by
the autoinstall process, caused as a result of some other abnormal event.
If this limit is reached, the AIQMAX system initialization parameter affects the
LOGON and BIND processing by CICS. CICS requests VTAM to stop passing
LOGON and BIND requests to CICS. VTAM holds such requests until CICS
indicates that it can accept further LOGONs and BINDs (this occurs when CICS
has processed a queued autoinstall request).

Recommendations
If the autoinstall process is noticeably slowed down by the AIQMAX limit, raise it.
If the CICS system shows signs of running out of storage, reduce the AIQMAX
limit. If possible, set the AIQMAX system initialization parameter to a value higher
than that reached during normal operations.

In a non-XRF environment, settings of (restart delay=0) and (delete
delay=hhmmss>0) are the most efficient for processor and DASD utilization.
However, this efficiency is gained at a cost of virtual storage, because the TCT
entries are not deleted until the delay period expires.

A value of zero for both restart delay and delete delay is the best overall setting
for many systems from an overall performance and virtual-storage usage point of
view.

If restart delay is greater than zero (cataloging active), the performance of
autoinstall is significantly affected by the definition of the global catalog
(DFHGCD). The default buffer specifications used by VSAM may not be sufficient
in a high activity system.

Because a considerable number of messages are sent to transient data during logon
and logoff, the performance of these output destinations should also be taken into
consideration.

218 CICS TS for OS/390: CICS Performance Guide


In an XRF environment, a restart delay value of greater than zero should give
better performance when catchup of a large number of autoinstalled terminals is
necessary.

How monitored
Monitor the autoinstall rate during normal operations by inspecting the autoinstall
| statistics regularly.

|

| Chapter 17. CICS Web support


| This chapter includes the following topics:
| v “CICS Web performance in a sysplex”
| v “CICS Web support performance in a single address space” on page 222
| v “CICS Web use of DOCTEMPLATE resources” on page 222
| v “CICS Web support use of temporary storage” on page 223
| v “CICS Web support of HTTP 1.0 persistent connections” on page 223
| v “CICS Web security” on page 223
| v “CICS Web 3270 support” on page 223
| v “Secure sockets layer support” on page 224

|
| CICS Web performance in a sysplex
| The dynamic routing facility is extended to provide mechanisms for dynamically
| routing program-link requests received from outside CICS. The target program of
| a CICS Web application can be run anywhere in a sysplex by dynamically routing
| the EXEC CICS LINK to the target application. Web bridge transactions have
| major affinities, so they should either not be routed, or always be routed to the
| same region. When CICSPlex SM is used to route the program-link requests, the
| transaction ID becomes much more significant because CICSPlex SM’s routing
| logic is transaction-based. CICSPlex SM routes each DPL request according to the
| rules specified for its associated transaction. This dynamic routing means that
| there is extra pathlength for both routed and nonrouted links, and for routing
| links.

| CICSPlex SM allows you to take advantage of two dynamic routing models:


| The hub model
| A hierarchical system used traditionally with CICS dynamic transaction
| routing. Routing is controlled by one region.
| The distributed model
| Each region may be both a routing region and a target region. A routing
| region runs in each region.

| In addition, you can define your own algorithm.

| Analyzer and converter programs must run in the same region as the instance of
| DFHWBBLI which invokes them, which in the case of CICS Web support, is the
| CICS region on which the HTTP request is received.

| If the Web API is being used by the application program to process the HTTP
| request and build the HTTP response, the application program must also run in
| the same CICS region as the instance of DFHWBBLI which is linking to it.

| In a typical current scenario, a Web-based business transaction might be
| implemented as a pseudoconversational CICS application. The initial request from
| the browser invokes a CICS transaction that does some setup work, returns a page
| of HTML to the browser, and ends. Subsequent requests are handled by other CICS
| transactions (or by further invocations of the same transaction). The CICS
| application is responsible for maintaining state data between requests.

© Copyright IBM Corp. 1983, 1999 221

| Using Business Transaction Services, a Web-based business transaction could be
| implemented as a BTS process. An advantage of this is that state data is
| maintained by BTS.
|
| CICS Web support performance in a single address space
| The additional cost of using the WEB API commands, compared with the cost of
| commarea manipulation as a means of processing the received requests and
| building the HTTP responses, can range from 6% to 12%. Using a very large
| number of bookmarks in the building of CICS documents can add more to this
| figure. However, the ease of programming offered by the WEB API commands
| makes the cost worthwhile.

| The use of HTTP persistent connection operation, which is supported by most
| client Web browsers, is also supported by the CICS Web interface in CICS
| Transaction Server for OS/390 Release 3. Very significant savings in CICS and
| TCP/IP CPU cost, and improvements in response times at the browser, are
| typically achieved by activating this feature in the TCPIPSERVICE definition. For
| more information about using the TCPIPSERVICE definition, see the CICS
| Internet Guide.
|
| CICS Web use of DOCTEMPLATE resources
| In releases of CICS prior to CICS Transaction Server for OS/390 Release 3, CICS
| web applications use CICS HTML templates to facilitate the building of HTTP
| responses. These HTML templates had to reside in one MVS partitioned data set,
| designated by the DFHHTML DD statement in the CICS startup job. In CICS
| Transaction Server for OS/390 Release 3, CICS HTML template support has been
| extended, and each template should now be defined in the CICS CSD as a
| DOCTEMPLATE. When defining the DOCTEMPLATE, systems administrators can
| store their HTML templates in:
| v Extrapartition transient data
| v Temporary Storage
| v CICS loaded programs
| v MVS partitioned data sets or PDSEs
| v Another location, invoking a user-written program to load the template from
| that location (for example, DB2 or another database manager).

| To achieve optimum performance when using templates, you should ensure you
| have defined the template as DOCTEMPLATE and installed the definition before
| using it, especially when using the DFHWBTL program. If the template is not
| preinstalled when this program is used, DFHWBTL attempts to install it for you,
| assuming that it is a member of the partitioned data set referenced by the
| DFHHTML DD statement.

| The fastest results can be achieved by storing your templates as CICS load
| modules. For more information about this, see the CICS Internet Guide. These
| modules are managed like other CICS loaded programs and may be flushed out by
| program compression when storage is constrained.



|
| CICS Web support use of temporary storage
| CICS Web support now uses CICS temporary storage to store the inbound HTTP
| request and any outbound response built using the new Web API. You should
| define the characteristics of the TS queue used by CICS Web support for each
| TCPIPSERVICE by defining a TS model for the TS Q prefix identified on the
| relevant TCPIPSERVICE definition. A sample TS model named DFHWEB is
| provided in group DFHWEB, which defines the characteristics of a TS Queue with
| the prefix DFHWEB. The default definition uses MAIN temporary storage to
| minimize the amount of I/O needed to process CICS Web requests. For those
| HTTP requests and responses which handle small amounts of data, this may be
| acceptable. If CICS Web support is being used to transfer large amounts of data,
| MAIN TS may not be appropriate, so the relevant TCPIPSERVICE should specify a
| TS Q prefix matching a model which uses AUXILIARY temporary storage.

| When the CICS Web Business Logic Interface is used, the TS queue prefix is
| always DFHWEB.
|
| CICS Web support of HTTP 1.0 persistent connections
| In most circumstances CICS Web performance will be improved by enabling
| support of the HTTP 1.0 Keepalive header.

| To enable CICS support of this header, you have to specify NO or a numeric value
| for the SOCKETCLOSE keyword on the relevant TCPIPSERVICE definition; if NO
| or a numeric value is specified, and the incoming HTTP request contains the
| Keepalive header, CICS keeps the socket open in order to allow further HTTP
| requests to be sent by the Web Browser. If a numeric value is specified, the interval
| between receipt of the last HTTP request and arrival of the next must be less than
| the interval specified on the TCPIPSERVICE, else CICS closes the socket. Some
| HTTP proxy servers do not allow the HTTP 1.0 Keepalive header to be passed to
| the end server (in this case, CICS), so Web Browsers which wish to use this header
| may not be able to pass it to CICS if the HTTP request arrives via such an HTTP
| proxy server.
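The socket-close decision described above can be sketched as follows. This is an
illustrative model only, not CICS code: the function name and parameters are
assumptions, with SOCKETCLOSE(NO) represented by None and a numeric setting by a
timeout in seconds.

```python
def cics_closes_socket(socketclose, secs_since_last_request, has_keepalive):
    """Model of the persistent-connection rule for one HTTP request.

    socketclose -- None for SOCKETCLOSE(NO), otherwise the timeout in
                   seconds specified on the TCPIPSERVICE definition.
    """
    if not has_keepalive:
        return True          # no Keepalive header: socket is closed as usual
    if socketclose is None:
        return False         # NO: keep the socket open for further requests
    # A numeric value: the gap between requests must be less than the
    # interval specified, else CICS closes the socket.
    return secs_since_last_request >= socketclose
```

For example, with SOCKETCLOSE(NO) the socket stays open regardless of the gap
between requests, while a numeric setting of 30 closes it after a 40-second gap.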
|
| CICS Web security
| If Secure Sockets Layer is used to make CICS Web transactions more secure, there
| will be a significant increase in pathlength for these transactions. This increase can
| be minimized by use of the HTTP 1.0 Keepalive header. Keeping the socket open
| removes the need to perform a full SSL handshake on the second and any
| subsequent HTTP request. If CICS or the Web Browser closes the socket, the SSL
| handshake has to be executed again.
|
| CICS Web 3270 support
| Use of the HTTP 1.0 Keepalive header can improve the performance of CICS Web
| 3270 support, by removing the need for the Web Browser to open a new sockets
| connection for each leg of the 3270 conversation or pseudoconversation.

Chapter 17. CICS Web support 223


|
| Secure sockets layer support
| Transactions using Secure Sockets Layer for Web security will see an increase in
| pathlength because of the SSL handshake that occurs when the socket connection is
| established. Encryption and decryption impact performance, but degradation can
| be minimized by:
| v Installing the appropriate cryptographic hardware.
| v Making use of the HTTP 1.0 keepalive header.
| v Making the CICS region as large as possible (The SSL support can use large
| amounts of non-CICS storage.)
| v Only using SSL for applications that really need to use encrypted data flows.

| You should also only use client authentication (SSL(CLIENTAUTH) in the
| TCPIPSERVICE definition) when you really need your clients to identify
| themselves with a client certificate. This is because client authentication involves
| more network interchanges during the SSL handshake, and more internal CICS
| processing to handle the received certificate. This includes a search of the external
| security manager’s database to locate a user ID to associate with the certificate.



Chapter 18. VSAM and file control
This chapter discusses performance tuning issues related to VSAM and file control.
v “VSAM considerations: general objectives”
v “VSAM resource usage (LSRPOOL)” on page 234
v “VSAM buffer allocations for NSR (INDEXBUFFERS and DATABUFFERS)” on
page 235
v “VSAM buffer allocations for LSR” on page 236
v “VSAM string settings for NSR (STRINGS)” on page 237
v “VSAM string settings for LSR (STRINGS)” on page 238
v “Maximum keylength for LSR (KEYLENGTH and MAXKEYLENGTH)” on
page 239
v “Resource percentile for LSR (SHARELIMIT)” on page 239
v “VSAM local shared resources (LSR)” on page 240
v “Hiperspace buffers” on page 240
v “Subtasking: VSAM (SUBTSKS=1)” on page 241
v “Data tables” on page 244
v “Coupling facility data tables” on page 245
v “VSAM record-level sharing (RLS)” on page 251

VSAM considerations: general objectives


Tuning consists of providing a satisfactory level of service from a system at an
acceptable cost. A satisfactory service, in the case of VSAM, is likely to be obtained
by providing adequate buffers to minimize physical I/O and, at the same time,
allowing several operations concurrently on the data sets.

The costs of assigning additional buffers and providing for concurrent operations
on data sets are the additional virtual and real storage that is required for the
buffers and control blocks.

Several factors influence the performance of VSAM data sets. The rest of this
section reviews these and the following sections summarize the various related
parameters of file control.

Note that, in this section, a distinction is made between “files” and “data sets”:
v A “file” means a view of a data set as defined by an installed CICS file resource
definition and a VSAM ACB.
v A “data set” means a VSAM “sphere”, including the base cluster with any
associated AIX® paths.

Local shared resources (LSR) or Nonshared resources (NSR)


The first decision to make for each file is whether to use LSR or NSR for its VSAM
buffers and strings. It is possible to use up to eight separate LSR pools for file
control files. There is also a decision to make on how to distribute the data sets
across the LSR pools.



Note that all files opened for access to a particular VSAM data set normally must
use the same resource type: see “Data set name sharing” on page 232.

CICS provides separate LSR buffer pools for data and index records. If only data
buffers are specified, only one set of buffers are built and used for both data and
index records.

LSR files share a common pool of buffers and a common pool of strings (that is,
control blocks supporting the I/O operations). Other control blocks define the file
and are unique to each file or data set. NSR files or data sets have their own set of
buffers and control blocks.

Some important differences exist between NSR and LSR in the way that VSAM
allocates and shares the buffers.

In NSR, the minimum number of data buffers is STRNO + 1, and the minimum
index buffers (for KSDSs and AIX paths) is STRNO. One data and one index buffer
are preallocated to each string, and one data buffer is kept in reserve for CI splits.
If there are extra data buffers, these are assigned to the first sequential operation;
they may also be used to speed VSAM CA splits by permitting chained I/O
operations. If there are extra index buffers, they are shared between the strings and
are used to hold high-level index records, thus providing an opportunity for saving
physical I/O.

In LSR, there is no preallocation of buffers to strings, or to particular files or data
sets. When VSAM needs to reuse a buffer, it picks the buffer that has been
referenced least recently. Strings are always shared across all data sets.

Before issuing a read to disk when using LSR, VSAM first scans the buffers to
check if the control interval it requires is already in storage. If so, it may not have
to issue the read. This buffer “lookaside” can reduce I/O significantly.

Another important difference between LSR and NSR is in concurrent access to
VSAM CIs. NSR allows multiple copies of a CI in storage; you can have one (but
only one) string updating a CI and other strings reading different copies of the
same CI. In LSR, there is only one copy of a CI in storage; the second of the
requests must queue until the first operation completes. LSR permits several read
operations to share access to the same buffer, but updates require exclusive use of
the buffer and must queue until a previous update or previous reads have
completed; reads must wait for any update to finish. It is possible, therefore, that
transactions with concurrent browse and update operations that run successfully
with NSR may, with LSR, hit a deadlock as the second operation waits
unsuccessfully for the first to complete.

Transactions should always be designed and programmed to avoid deadlocks. For
further discussions, see the CICS Application Programming Guide.

LSR has significant advantages, by providing:
v More efficient use of virtual storage because buffers and strings are shared.
v Better performance because of better buffer lookaside, which can reduce I/O
operations.
v Self-tuning because more buffers are allocated to busy files and frequently
referenced index control intervals are kept in its buffers.
v Better read integrity because there is only one copy of a CI in storage.



v Use of synchronous file requests and a UPAD exit. CA and CI splits for LSR files
do not cause either the subtask or main task to wait. VSAM takes the UPAD exit
while waiting for physical I/O, and processing continues for other CICS work
during the CA/CI split.
File control requests for NSR files are done asynchronously, however, and still
cause the CICS main task or subtask to stop during a split.
NSR, on the other hand:
v Allows for specific tuning in favor of a particular data set
v Can provide better performance for sequential operations.

The general recommendation is to use LSR for all VSAM data sets except where
you have one of the following situations:
v A file is very active but there is no opportunity for lookaside because, for
instance, the file is very large.
v High performance is required by the allocation of extra index buffers.
v Fast sequential browse or mass insert is required by the allocation of extra data
buffers.
v Control area (CA) splits are expected for a file, and extra data buffers are to be
allocated to speed up the CA splits.

If you have only one LSR pool, a particular data set cannot be isolated from others
using the same pool when it is competing for strings, and it can only be isolated
when it is competing for buffers by specifying unique CI sizes. In general, you get
more self-tuning effects by running with one large pool, but it is possible to isolate
busy files from the remainder or give additional buffers to a group of high
performance files by using several pools. It is possible that a highly active file has
more successful buffer lookaside and less I/O if it is set up as the only file in an
LSR subpool rather than using NSR. Also the use of multiple pools eases the
restriction of 255 strings for each pool.

Number of strings
The next decision to be made is the number of concurrent accesses to be supported
for each file and for each LSR pool.

This is achieved by specifying VSAM “strings”. A string is a request to a VSAM
data set requiring “positioning” within the data set. Each string specified results in
a number of VSAM control blocks (including a “placeholder”) being built.

VSAM requires one or more strings for each concurrent file operation. For
nonupdate requests (for example, a READ or BROWSE), an access using a base
needs one string, and an access using an AIX needs two strings (one to hold
position on the AIX and one to hold position on the base data set). For update
requests where no upgrade set is involved, a base still needs one string, and a path
two strings. For update requests where an upgrade set is involved, a base needs
1+n strings and a path needs 2+n strings, where n is the number of members in
the upgrade set (VSAM needs one string per upgrade set member to hold
position). Note that, for each concurrent request, VSAM can reuse the n strings
required for upgrade set processing because the upgrade set is updated serially.
See “CICS calculation of LSR pool parameters” on page 231.
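The string requirements stated in the paragraph above can be modelled with a
small function. This is an illustrative sketch only, not part of CICS or VSAM; the
function name and parameters are assumptions that simply encode the stated
rules.

```python
def vsam_strings_needed(update, via_aix_path, upgrade_members=0):
    """Strings VSAM needs for one concurrent request.

    update          -- True for update requests (e.g. read for update)
    via_aix_path    -- True when access is through an AIX path, which
                       needs a string for the AIX and one for the base
    upgrade_members -- number of AIXs in the upgrade set (updates only)
    """
    # Base access holds one position; a path holds position on both the
    # AIX and the base data set.
    strings = 2 if via_aix_path else 1
    if update:
        # One string per upgrade-set member to hold position.
        strings += upgrade_members
    return strings
```

For example, a read through an AIX path needs two strings, and an update
through a path with a three-member upgrade set needs five.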

Chapter 18. VSAM and file control 227


| A simple operation such as read direct frees the string or strings immediately, but a
| read for update, mass insert, or browse retains them until a corresponding update,
| unlock, or end browse is performed.

The interpretation of the STRNO parameter by CICS and by VSAM differs
depending upon the context:
v The equivalent STRINGS parameter of the file definition has the same meaning
as the STRNO in the VSAM ACB for NSR files: that is, the actual number of
concurrent outstanding VSAM requests that can be handled. When AIX paths or
upgrade sets are used, the actual number of strings which VSAM allocates to
support this may be greater than the STRINGS value specified.
v The equivalent STRINGS parameter of the LSR pool definition (LSRPOOL) has
the same meaning as the STRNO in the VSAM BLDVRP macro: that is, the
absolute number of strings to be allocated to the resource pool. Unless an LSR
pool contains only base data sets, the number of concurrent requests that can be
handled is less than the STRINGS value specified.

| Note: There are some special considerations for setting the STRINGS value for an
| ESDS file (see “Number of strings considerations for ESDS files” on
| page 229).

| For LSR, it is possible to specify the precise numbers of strings, or to have CICS
calculate the numbers. The number specified in the LSR pool definition is the
actual number of strings in the pool. If CICS is left to calculate the number of
strings, it derives the pool STRINGS from the RDO file definition and interprets
this, as with NSR, as the actual number of concurrent requests. (For an explanation
of CICS calculation of LSR pool parameters, see “CICS calculation of LSR pool
parameters” on page 231.)

You must decide how many concurrent read, browse, updates, mass inserts, and so
on you need to support.

If access to a file is read only with no browsing, there is no need to have a large
number of strings; just one may be sufficient. Note that, while a read operation
only holds the VSAM string for the duration of the request, it may have to wait for
the completion of an update operation on the same CI.

| In general (but see “Number of strings considerations for ESDS files” on
| page 229), where some browsing or updates are used, STRINGS should be set to 2 or 3
initially and CICS file statistics should be checked regularly to see the proportion
of wait-on-strings encountered. Wait-on-strings of up to 5% of file accesses would
usually be considered quite acceptable. You should not try, with NSR files, to keep
wait-on-strings permanently zero.
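The 5% guideline can be checked against the CICS file statistics with a simple
calculation. The function names here are illustrative assumptions, not part of
CICS.

```python
def string_wait_percentage(file_accesses, string_waits):
    """Percentage of file accesses that had to wait on a VSAM string."""
    if file_accesses == 0:
        return 0.0
    return 100.0 * string_waits / file_accesses

def consider_more_strings(file_accesses, string_waits, threshold=5.0):
    """True when wait-on-strings exceeds the usually acceptable level
    (up to about 5% of file accesses, per the guideline above)."""
    return string_wait_percentage(file_accesses, string_waits) > threshold
```

So 30 waits in 1000 accesses (3%) would usually be considered acceptable, while
80 waits (8%) would suggest reviewing the STRINGS value.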

CICS manages string usage for both files and LSR pools. For each file, whether it
uses LSR or NSR, CICS limits the number of concurrent VSAM requests to the
STRINGS= specified in the file definition. For each LSR pool, CICS also prevents
more requests being concurrently made to VSAM than can be handled by the
strings in the pool. Note that, if additional strings are required for upgrade-set
processing at update time, CICS anticipates this requirement by reserving the
additional strings at read-for-update time. If there are not enough file or LSR pool
strings available, the requesting task waits until they are freed. The CICS statistics
give details of the string waits.



When deciding the number of strings for a particular file, consider the maximum
number of concurrent tasks. Because CICS command level does not allow more
than one request to be outstanding against a particular data set from a particular
task, there is no point in allowing strings for more concurrent requests.

If you want to distribute your strings across tasks of different types, the transaction
classes may also be useful. You can use transaction class limits to control the
transactions issuing the separate types of VSAM request, and for limiting the
number of task types that can use VSAM strings, thereby leaving a subset of
strings available for other uses.

All placeholder control blocks must contain a field long enough for the largest key
associated with any of the data sets sharing the pool. Assigning one inactive file
that has a very large key (primary or alternate) into an LSR pool with many strings
| may use excessive storage.

| Number of strings considerations for ESDS files


| There are some special performance considerations when choosing a STRINGS
| value for an ESDS file.

| If an ESDS is used as an ‘add-only’ file (that is, it is used only in write mode to
| add records to the end of the file), a string number of 1 is strongly recommended.
| Any string number greater than 1 can significantly affect performance, because of
| exclusive control conflicts that occur when more than one task attempts to write to
| the ESDS at the same time.

| If an ESDS is used for both writing and reading, with writing, say, being 80% of
| the activity, it is better to define two file definitions—using one file for writing and
| the other for reading.

Size of control intervals


The size of the data set control intervals is not a parameter specified to CICS; it is
defined through VSAM AMS. However, it can have a significant performance effect
on a CICS system that provides access to the control interval.

In general, direct I/O runs slightly more quickly when data CIs are small, whereas
sequential I/O is quicker when data CIs are large. However, with NSR files, it is
possible to get a good compromise by using small data CIs but also assigning extra
buffers, which leads to chained and overlapped sequential I/O. However, all the
extra data buffers get assigned to the first string doing sequential I/O.

VSAM functions most efficiently when its control areas are the maximum size, and
it is generally best to have data CIs larger than index CIs. Thus, typical CI sizes for
data are 4KB to 12KB and, for index, 1KB to 2KB.

In general, you should specify the size of the data CI for a file, but allow VSAM to
select the appropriate index CI to match. An exception to this is if key compression
turns out to be less efficient than VSAM expects it to be. In this case, VSAM may
select too small an index CI size. You may find an unusually high rate of CA splits
occurring with poor use of DASD space. If this is suspected, specify a larger index
CI.

In the case of LSR, there may be a benefit in standardizing on the CI sizes,
because this allows more sharing of buffers between files and thereby allows a
lower total number of buffers. Conversely, there may be a benefit in giving a file
unique CI sizes to prevent it from competing for buffers with other files using the
same pool.

Try to keep CI sizes at 512, 1KB, 2KB, or any multiple of 4KB. Unusual CI sizes
like 26KB or 30KB should be avoided. A CI size of 26KB does not mean that
physical block size will be 26KB; the physical block size will most likely be 2KB in
this case (it is device-dependent).
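The CI size guideline above can be expressed as a simple check. This helper is an
illustrative assumption, not a CICS or VSAM utility.

```python
def ci_size_recommended(ci_bytes):
    """True when a CI size follows the guideline: 512, 1KB, 2KB,
    or any multiple of 4KB. Sizes such as 26KB or 30KB fail."""
    if ci_bytes in (512, 1024, 2048):
        return True
    return ci_bytes > 0 and ci_bytes % 4096 == 0
```

For example, 8KB (a multiple of 4KB) is recommended, while 26KB is not.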

Number of buffers (NSR)


The next decision is the number of buffers to be provided for each file. Enough
buffers must be provided to support the concurrent accesses specified in the
STRINGS parameter for the file (in fact VSAM enforces this for NSR).

Specify the number of data and index buffers for NSR using the DATABUFFER
and INDEXBUFFER parameters of the file definition. It is important to specify
sufficient index buffers. If a KSDS consists of just one control area (and, therefore,
just one index CI), the minimum index buffers equal to STRINGS is sufficient. But
when a KSDS is larger than this, at least one extra index buffer needs to be
specified so that at least the top level index buffer is shared by all strings. Further
index buffers reduces index I/O to some extent.

DATABUFFERS should generally be the minimum at STRINGS + 1, unless the aim
is to enable overlapped and chained I/O in sequential operations or it is necessary
to provide the extra buffers to speed up CA splits.
to provide the extra buffers to speed up CA splits.

Note that when the file is an AIX path to a base, the same INDEXBUFFERS (if the
base is a KSDS) and DATABUFFERS are used for AIX and base buffers (but see
“Data set name sharing” on page 232).
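The NSR minimums described above (STRINGS + 1 data buffers; STRINGS index
buffers, plus at least one extra when the KSDS has more than one index level so
that the top-level index CI is shared by all strings) can be sketched as follows.
The function is an illustrative assumption, not part of CICS.

```python
def nsr_minimum_buffers(strings, is_ksds, index_levels=1):
    """Return (data_buffers, index_buffers) minimums for an NSR file."""
    data = strings + 1                 # one per string, one reserved for CI splits
    index = 0
    if is_ksds:
        index = strings                # one index buffer preallocated per string
        if index_levels > 1:
            index += 1                 # extra buffer to share the top-level index
    return data, index
```

For a KSDS with STRINGS=2 and a three-level index, this gives 3 data buffers and
3 index buffers as the starting point.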

Number of buffers (LSR)


The set of buffers of one size in an LSR pool is called a “subpool.” The number of
buffers for each subpool is controlled by the DATA and INDEX parameters of the
LSRPOOL definition. It is possible to specify precise numbers or to have CICS
calculate the numbers. (The method used by CICS to calculate the number of
buffers is described below.)

Allowing CICS to calculate the LSR parameters is easy but it requires additional
overhead (when the first file that needs the LSR pool is opened) to build the pool
because CICS must read the VSAM catalog for every file that is specified to use the
pool. Also it cannot be fine-tuned by specifying actual quantities of each buffer
size. When making changes to the size of an LSR pool, refer to the CICS statistics
before and after the change is made. These statistics show whether the proportion
of VSAM reads satisfied by buffer lookaside is significantly changed or not.

In general, you would expect to benefit more by having extra index buffers for
lookaside, and less by having extra data buffers. This is a further reason for
standardizing on LSR data and index CI sizes, so that one subpool does not have a
mix of index and data CIs in it.

Note: Data and index buffers are specified separately with the LSRPOOL
definition. Thus, there is not a requirement to use CI size to differentiate
between data and index values.



Take care to include buffers of the right size. If no buffers of the required size are
present, VSAM uses the next larger buffer size.

CICS calculation of LSR pool parameters


If you have not specified LSR parameters for a pool, CICS calculates for you the
buffers and strings required. To do this, it scans all the installed file resource
definitions for files specified to use the pool. For each, it uses:
v From the CICS file resource definitions:
– The number of strings, as specified on the STRINGS parameter
v From the VSAM catalog:
– The levels of index for each of these files
– The CI sizes
– The keylengths for the base, the path (if it is accessed through an AIX path),
and upgrade set AIXs.

Note: If you have specified only buffers or only strings, CICS performs the
calculation for what you have not specified.

The following information helps you calculate the buffers required. A particular file
may require more than one buffer size. For each file, CICS determines the buffer
sizes required for:
v The data component
v The index component (if a KSDS)
v The data and index components for the AIX (if it is an AIX path)
v The data and index components for each AIX in the upgrade set (if any).

The number of buffers for each is calculated as follows:


v For data components (base and AIX) = (STRINGS= in the file resource definition
entry) + 1
v For index components (base and AIX) = (STRINGS= in the file resource
definition entry) + (the number of levels in the index) – 1
v For data and index components for each AIX in the upgrade set, one buffer
each.

When this has been done for all the files that use the pool, the total number of
buffers for each size is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT in the
LSRPOOL definition. The SHARELIMIT parameter takes precedence.
v If necessary, increased to a minimum of three buffers.
v Rounded up to the nearest 4KB boundary.
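
The calculation above can be sketched in Python as follows. This is an illustrative model only: the helper names are invented, and the final rounding to a 4KB boundary is omitted.

```python
# Illustrative model of the buffer counts CICS calculates when it builds
# an LSR pool itself (rounding to a 4KB boundary omitted).

def data_buffers(strings):
    # data components (base and AIX): STRINGS + 1
    return strings + 1

def index_buffers(strings, index_levels):
    # index components (base and AIX): STRINGS + index levels - 1
    return strings + index_levels - 1

def adjust_pool_total(total, share_limit_pct=50):
    # reduce to SHARELIMIT (50% unless specified), minimum of 3 buffers
    return max(total * share_limit_pct // 100, 3)

# two files of the same buffer size sharing the pool
total = data_buffers(5) + data_buffers(3)   # 6 + 4 = 10
print(adjust_pool_total(total))             # 10 at 50% -> 5
```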

To calculate the number of strings, CICS determines the number of strings to
handle concurrent requests for each file as the sum of:
v STRINGS parameter value for the base
v STRINGS parameter value for the AIX (if it is an AIX path)
v n strings if there is an upgrade set (where n is the number of members in the
upgrade set).

Note: If the LSR pool is calculated by CICS and the data sets have been archived
by HSM, when the first file that needs the LSR pool is opened, the startup
time of a CICS system can be considerably lengthened because the data sets
are needed one by one. CICS obtains the necessary catalog information, but
it does not open the database. Therefore the database is still effectively
archived. This problem recurs when the region is started again, and remains
until the data set has been opened.

When the strings have been accumulated for all files, the total is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT
parameter in the LSR pool definition. The SHARELIMIT parameter takes
precedence.
v Reduced to 255 (the maximum number of strings allowed for a pool by VSAM).
v Increased to the largest specified STRINGS value for a particular file.
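
These adjustments can be sketched as a small Python function (illustrative only; the function name is invented):

```python
# Illustrative model of the LSR pool string calculation.

def pool_strings(per_file_strings, share_limit_pct=50):
    total = sum(per_file_strings)
    total = total * share_limit_pct // 100      # apply SHARELIMIT
    total = min(total, 255)                     # VSAM maximum per pool
    total = max(total, max(per_file_strings))   # at least the largest file value
    return total

# three files: the SHARELIMIT reduction is overridden by the largest
# STRINGS value (20)
print(pool_strings([20, 8, 4]))   # 32 at 50% -> 16, raised to 20
```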

The parameters calculated by CICS are shown in the CICS statistics.

Switching data sets from RLS mode to LSR mode


Although it is not generally recommended, there may be occasions when you need
to switch a data set from RLS mode to non-RLS mode (for example, to read-only
LSR mode during a batch update). In that case, LSR pools that are not
explicitly defined, and which CICS builds using default values, might not have
sufficient resources to support files switched to LSR mode after the pool has been
built.

To avoid files failing to open because of the lack of adequate resources, you can
specify that CICS should include files opened in RLS mode when it is calculating
the size of an LSR pool using default values. To specify the inclusion of files
defined with RLSACCESS(YES) in an LSR pool being built using values that CICS
calculates, use the RLSTOLSR=YES system initialization parameter
(RLSTOLSR=NO is the default).

See the CICS System Definition Guide for more information about the RLSTOLSR
parameter.

Data set name sharing


Data set name (DSN) sharing (MACRF=DSN specified in the VSAM ACB) is the
default for all VSAM data sets. It causes VSAM to create a single control block
structure for the strings and buffers required by all the files that relate to the same
base data set cluster, whether as a path or direct to the base. VSAM makes the
connection at open time of the second and subsequent files. Only if DSN sharing is
specified, does VSAM realize that it is processing the same data set.

This single structure:


v Provides VSAM update integrity for multiple ACBs updating one VSAM data
set
v Allows the use of VSAM share options 1 or 2, while still permitting multiple
update ACBs within the CICS region
v Saves virtual storage.

DSN sharing is the default for files using both NSR and LSR. The only exception
to this default is made when opening a file that has been specified as read-only
(READ=YES or BROWSE=YES) and with DSNSHARING(MODIFYREQS) in the file
resource definition. CICS provides this option so that a file (represented by an
installed file resource definition) can be isolated from other users of that same data
set in a different LSR pool or in NSR by suppressing DSN sharing. CICS ignores
this parameter for files with update, add, or delete options because VSAM would
not then be able to provide update integrity if two file control file entries were
updating the same data set concurrently.

| The NSRGROUP= parameter is associated with DSN sharing. It is used to group
| together file resource definitions that are to refer to the same VSAM base data set.
| NSRGROUP=name has no effect for data sets that use LSR.

When the first member of a group of DSN-sharing NSR files is opened, CICS must
specify to VSAM the total number of strings to be allocated for all file entries in
the group, by means of the BSTRNO value in the ACB. VSAM builds its control
block structure at this time regardless of whether the first data set to be opened is
a path or a base. CICS calculates the value of BSTRNO used at the time of the
open by adding the STRINGS values in all the files that share the same
NSRGROUP= parameter.
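
A minimal sketch of this accumulation, with invented file and group names:

```python
# Illustrative sketch: BSTRNO for an NSR DSN-sharing group is the sum of
# the STRINGS values of all files with the same NSRGROUP= name.
# File and group names here are made up.

files = [
    {"file": "FILEA", "nsrgroup": "GRP1", "strings": 6},
    {"file": "FILEB", "nsrgroup": "GRP1", "strings": 4},
    {"file": "FILEC", "nsrgroup": "GRP2", "strings": 8},
]

def bstrno(group, files):
    return sum(f["strings"] for f in files if f["nsrgroup"] == group)

print(bstrno("GRP1", files))   # 6 + 4 = 10
```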

If you do not provide the NSRGROUP= parameter, the VSAM control block
structure may be built with insufficient strings for later processing. This should be
avoided for performance reasons. In such a case, VSAM invokes the dynamic
string addition feature to provide the extra control blocks for the strings as they
are required, and the extra storage is not released until the end of the CICS run.

AIX considerations
For each AIX defined with the UPGRADE attribute, VSAM upgrades the AIX
automatically when the base cluster is updated.

For NSR, VSAM uses a special set of buffers associated with the base cluster to do
this. This set consists of two data buffers and one index buffer, which are used
serially for each AIX associated with a base cluster. It is not possible to tune this
part of the VSAM operation.

For LSR, VSAM uses buffers from the appropriate subpool.

Care should be taken when specifying to VSAM that an AIX should be in the
upgrade set. Whenever a new record is added, an existing record deleted, or a
record updated with a changed attribute key, VSAM updates the AIXs in the
upgrade set. This involves extra processing and extra I/O operations.

Situations that cause extra physical I/O


Listed below are some situations that can lead to many physical I/O operations,
thus affecting both response times and associated processor pathlengths:
v When a KSDS is defined with SHROPT of 4, all direct reads cause a refresh of
both index and data buffers (to ensure latest copy).
v Any sequence leading to CICS issuing ENDREQ invalidates all data buffers
associated with the operation. This may occur when you end a get-update
(without the following update), a browse (even a start browse with a
no-record-found response), a mass-insert or any get-locate from a program. If the
operation is not explicitly ended by the program, CICS ends the operation at
syncpoint or end of task.

v If there are more data buffers than strings, a start browse causes at least half the
buffers to participate immediately in chained I/O. If the browse is short, the
additional I/O is unnecessary.

Other VSAM definition parameters


Free space parameters need to be selected with care, and can help reduce the
number of CI and CA splits. Where records are inserted all over a VSAM data set,
it is appropriate to include free space in each CI. Where the inserts are clumped,
free space in each CA is required. If all the inserts take place at just a few positions
in the file, VSAM should be allowed to split the CA, and it is not necessary to
specify any free space at all.

Adding records to the end of a VSAM data set does not cause CI/CA splits.
Adding sequential records to anywhere but the end causes splits. An empty file
with a low-value dummy key tends to reduce splits; a high-value key increases the
number of splits.

VSAM resource usage (LSRPOOL)


The default for all VSAM data sets is LSR. If multiple pools are supported, CICS
provides for the use of pools 1 through 8.

Effects
The LSRPOOLID parameter specifies whether a file is to use LSR or NSR and, if
LSR, which pool.

Where useful
The LSRPOOLID parameter can be used in CICS systems with VSAM data sets.

Limitations
All files with the same base data set, except read-only files with
DSNSHARING(MODIFYREQS) specified in the file definition, must use either the
same LSR pool or all use NSR.

SERVREQ=REUSE files cannot use LSR.

Recommendations
See “VSAM considerations: general objectives” on page 225. Consider removing
files from an LSR pool.

How implemented
The resource usage is defined by the LSRPOOL definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.

VSAM buffer allocations for NSR (INDEXBUFFERS and
DATABUFFERS)
For files using nonshared resources (NSR), the INDEXBUFFERS and
DATABUFFERS parameters define VSAM index buffers and data buffers
respectively.

Effects
INDEXBUFFERS and DATABUFFERS specify the number of index and data buffers
for an NSR file.

The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings) and efficient sequential operations and
CA splits. Providing extra buffers for high-level index records can reduce physical
I/O operations.

Buffer allocations above the 16MB line represent a significant part of the virtual
storage requirement of most CICS systems.

INDEXBUFFERS and DATABUFFERS have no effect if they are specified for files
using LSR.

Where useful
The INDEXBUFFERS and DATABUFFERS parameters should be used in CICS
systems that use VSAM NSR files in CICS file control.

Limitations
These parameters can be overridden by VSAM if they are insufficient for the
strings specified for the VSAM data set. The maximum specification is 255. A
specification greater than this will automatically be reduced to 255. Overriding of
VSAM strings and buffers should never be done by specifying the AMP= attribute
on the DD statement.

Recommendations
See “VSAM considerations: general objectives” on page 225.

How implemented
The INDEXBUFFERS and DATABUFFERS parameters are defined in the file
definition on the CSD. They correspond exactly to VSAM ACB parameters:
INDEXBUFFERS is the number of index buffers, DATABUFFERS is the number of
data buffers.

For LSR files, they are ignored.

How monitored
The effects of these parameters can be monitored through transaction response
times and data set and paging I/O rates. The CICS file statistics show data set
activity to VSAM data sets. The VSAM catalog and RMF can show data set
activity, I/O contention, space usage, and CI size.

VSAM buffer allocations for LSR


For files using local shared resources (LSR), the number of buffers to be used is not
specified explicitly by file. The files share the buffers of the appropriate sizes in the
LSR pool. The number of buffers in the pool may either be specified explicitly
using the BUFFERS parameter in the file definition on the CSD, or be left to CICS
to calculate. For more information about the CSD, see the CICS Resource Definition
Guide.

Effects
The BUFFERS parameter allows for exact definition of specific buffers for the LSR
pool.

The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings). It can also increase the chance of
successful buffer lookaside with the resulting reduction in physical I/O operations.

The number of buffers should achieve an optimum between increasing the I/O
saving due to lookaside and increasing the real storage requirement. This optimum
is different for buffers used for indexes and buffers used for data. Note that the
optimum buffer allocation for LSR is likely to be significantly less than the buffer
allocation for the same files using NSR.

Where useful
The BUFFERS parameter should be used in CICS systems that use VSAM LSR files
in CICS file control.

Recommendations
See “VSAM considerations: general objectives” on page 225.

How implemented
The BUFFERS parameter is defined in the file definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.

How monitored
The effects of these parameters can be monitored through transaction response
times and data set and paging I/O rates. The effectiveness is reflected in both the
file and LSRpool statistics. The CICS file statistics show data set activity to VSAM data sets.
The VSAM catalog and RMF can show data set activity, I/O contention, space
usage, and CI size.

VSAM string settings for NSR (STRINGS)
STRINGS is used to determine the number of concurrent operations possible
against the file and against the VSAM base cluster to which the file relates.

Effects
The STRINGS parameter for files using NSR has the following effects:
v It specifies the number of concurrent asynchronous requests that can be made
against that specific file.
v It is used as the STRINGS in the VSAM ACB.
v It is used, in conjunction with the BASE parameter, to calculate the VSAM
BSTRNO.
| v A number greater than 1 can adversely affect performance for ESDS files used
| exclusively in write mode. With a string number greater than 1, the cost of
| invalidating the buffers for each of the strings is greater than waiting for the
| string, and there can be a significant increase in the number of VSAM EXCP
| requests.

Strings represent a significant part of the virtual storage requirement of most CICS
systems. With CICS, this storage is above the 16MB line.

Where useful
The STRINGS parameter should be used in CICS systems that use VSAM NSR files
in CICS file control.

Limitations
A maximum of 255 strings can be used as the STRNO or BSTRNO in the ACB.

Recommendations
See “Number of strings considerations for ESDS files” on page 229 and “VSAM
considerations: general objectives” on page 225.

How implemented
| The number of strings is defined by the STRINGS parameter in the CICS file
definition on the CSD. It corresponds to the VSAM parameter in the ACB except
where a base file is opened as the first for a VSAM data set; in this case, the
CICS-accumulated BSTRNO value is used as the STRNO for the ACB.

How monitored
The effects of the STRINGS parameter can be seen in increased response times and
monitored by the string queueing statistics for each file definition. RMF can show
I/O contention in the DASD subsystem.

VSAM string settings for LSR (STRINGS)
STRINGS is used to determine the number of strings and thereby the number of
concurrent operations possible against the LSR pool (assuming that there are
buffers available).

Effects
The STRINGS parameter relating to files using LSR has the following effects:
v It specifies the number of concurrent requests that can be made against that
specific file.
v It is used by CICS to calculate the number of strings and buffers for the LSR
pool.
v It is used as the STRINGS for the VSAM LSR pool.
v It is used by CICS to limit requests to the pools to prevent a VSAM
short-on-strings condition (note that CICS calculates the number of strings
required per request).
| v A number greater than 1 can adversely affect performance for ESDS files used
| exclusively in write mode. With a string number greater than 1, the cost of
| resolving exclusive control conflicts is greater than waiting for a string. Each
| time exclusive control is returned, a GETMAIN is issued for a message area,
| followed by a second call to VSAM to obtain the owner of the control interval.

Where useful
The STRINGS parameter can be used in CICS systems with VSAM data sets.

Limitations
A maximum of 255 strings is allowed per pool.

Recommendations
| See “Number of strings considerations for ESDS files” on page 229 and “VSAM
| considerations: general objectives” on page 225.

How implemented
The number of strings is defined by the STRINGS parameter in the file definition on
the CSD, which limits the concurrent activity for that particular file.

How monitored
The effects of the STRINGS parameter can be seen in increased response times for
each file entry. The CICS LSRPOOL statistics give information on the number of
data set accesses and the highest number of requests for a string.

Examination of the string numbers in the CICS statistics shows that there is a
two-level check on string numbers available: one at the data set level (see “File
control” on page 385), and one at the shared resource pool level (see “LSRpool” on
page 416).

RMF can show I/O contention in the DASD subsystem.

Maximum keylength for LSR (KEYLENGTH and MAXKEYLENGTH)


| The KEYLENGTH parameter in the file definition in the CSD, or the
| MAXKEYLENGTH parameter in the LSR pool definition, specifies the size of the
| largest key to be used in an LSR pool.

The maximum keylength may be specified explicitly using the KEYLENGTH
parameter in the file definition on the CSD, or it may be left to CICS to determine
from the VSAM catalog. For more information about the CSD, see the CICS
Resource Definition Guide.

Effects
The KEYLENGTH parameter causes the “placeholder” control blocks to be built
with space for the largest key that can be used with the LSR pool. If the
KEYLENGTH specified is too small, it prevents requests for files that have a longer
key length.

Where useful
The KEYLENGTH parameter can be used in CICS systems with VSAM data sets.

Recommendations
See “VSAM considerations: general objectives” on page 225.

The key length should always be as large as, or larger than, the largest key for files
using the LSR pool.

How implemented
The size of the maximum keylength is defined in the KEYLENGTH parameter in the
file definition on the CSD. For more information about the CSD, see the CICS
Resource Definition Guide.

Resource percentile for LSR (SHARELIMIT)


The SHARELIMIT parameter in the LSR pool definition specifies the percentage of
the buffers and strings that CICS should apply to the value that it calculates.

Effects
The method used by CICS to calculate LSR pool parameters and the use of the
SHARELIMIT value is described in “VSAM considerations: general objectives” on
page 225.

This parameter has no effect if both the BUFFERS and the STRINGS parameters
are specified for the pool.

Where useful
The SHARELIMIT parameter can be used in CICS systems with VSAM data sets.

Recommendations
See “VSAM considerations: general objectives” on page 225.

Because SHARELIMIT can be applied only to files that are allocated at
initialization of the LSR pool (when the first file in the pool is opened), it is always
wise to specify explicit STRINGS and BUFFERS values for an LSR pool.

How implemented
The SHARELIMIT parameter is specified in the LSR pool definition. For more
information, see the CICS Resource Definition Guide.

VSAM local shared resources (LSR)

Effects
CICS always builds a control block for LSR pool 1. CICS builds control blocks for
other pools if either an LSR pool definition is installed, or a file definition at CICS
initialization time has LSRPOOL= defined with the number of the pool.

Where useful
VSAM local shared resources can be used in CICS systems that use VSAM.

Recommendations
See “VSAM considerations: general objectives” on page 225.

How implemented
CICS uses the parameters provided in the LSR pool definition to build the LSR
pool.

How monitored
VSAM LSR can be monitored by means of response times, paging rates, and CICS
LSRPOOL statistics. The CICS LSRPOOL statistics show string usage, data set
activity, and buffer lookasides (see “LSRpool” on page 416).

Hiperspace buffers
VSAM Hiperspace buffers reside in MVS expanded storage. These buffers are
backed only by expanded storage. If the system determines that a particular page
of this expanded storage is to be used for another purpose, the current page’s
contents are discarded rather than paged-out. If VSAM subsequently requires this
page, it retrieves the data from DASD. VSAM manages the transfer of data
between its Hiperspace buffers and its CICS address space buffers. CICS file
control can only work with VSAM data when it is in a CICS address space buffer.
Data is transferred between Hiperspace buffers and address space buffers in blocks
of pages using CREAD and CWRITE commands. See “Hiperspace: data buffer
statistics” on page 419 for more information.

Effects
The use of a very large number of Hiperspace buffers can reduce both physical
I/O and pathlength when accessing your CICS files because the chance of finding
the required records already in storage is relatively high.

Limitations
Because the amount of expanded storage is limited, it is possible that the
installation will overcommit its use and VSAM may be unable to allocate all of the
Hiperspace buffers requested. MVS may use expanded storage pages for purposes
other than those allocated to VSAM Hiperspace buffers. In this case CICS
continues processing using whatever buffers are available.

If address space buffers are similarly overallocated then the system would have to
page. This overallocation of address space buffers is likely to seriously degrade
CICS performance whereas overallocation of Hiperspace buffers is not.

Hiperspace buffer contents are lost when an address space is swapped out. This
causes increased I/O activity when the address is swapped in again. If you use
Hiperspace buffers, you should consider making the CICS address space
nonswappable.

Recommendations
Keeping data in memory is usually very effective in reducing the CPU costs
provided adequate central and expanded storage is available. Using mostly
Hiperspace rather than all address space buffers can be the most effective option
especially in environments where there are more pressing demands for central
storage than VSAM data.

How implemented
CICS never requests Hiperspace buffers as a result of its own resource calculations.
You have to specify the size and number of virtual buffers and Hiperspace buffers
that you need.

You can use the RDO parameters of HSDATA and HSINDEX, which are added to
the LSRPOOL definition to specify Hiperspace buffers. Using this method you can
adjust the balance between Hiperspace buffers and virtual buffers for your system.

For further details of the CEDA transaction, see the CICS Resource Definition Guide.

Subtasking: VSAM (SUBTSKS=1)


Modes of TCB are as follows:

QR mode
There is always one quasi-reentrant mode TCB. It is used to run
quasi-reentrant CICS code and non-threadsafe application code.
FO mode
There is always one file-owning TCB. It is used for opening and closing
user datasets.
RO mode
There is always one resource-owning TCB. It is used for opening and
closing CICS datasets, loading programs, issuing RACF calls, etc.
CO mode
The optional concurrent mode TCB is used for processes which can safely
run in parallel with other CICS activity such as VSAM requests. The SIT
keyword SUBTSKS has been defined to have numeric values (0 and 1) to
specify whether there is to be a CO TCB.
SZ mode
The single optional SZ mode TCB is used by the FEPI interface.
RP mode
The single optional RP mode TCB is used to make ONC/RPC calls.
J8 mode
A task has a J8 mode TCB for its sole use if it needs to run a JVM.
L8 mode
L8 mode TCBs are not in use for CICS Transaction Server for OS/390
Release 3.
SO mode
The SO mode TCB is used to make calls to the sockets interface of TCP/IP.
SL mode
The SL mode TCB is used to wait for activity on a set of listening sockets.
S8 mode
A task has an S8 TCB for its sole use if it needs to use the system Secure
Sockets Layer.

Effects
The objective of subtasks is to increase the maximum throughput of a single CICS
system on multiprocessors. However, the intertask communication increases total
processor utilization.

When I/O is done on subtasks, any extended response time which would cause
the CICS region to stop, such as CI/CA splitting in NSR pools, causes only the
additional TCB to stop. This may allow more throughput in a region that has very
many CA splits in its file, but has to be assessed cautiously with regard to the
extra overhead associated with using the subtask.

When the SUBTSKS=1 system initialization parameter has been specified:


| v All Non-RLS VSAM file control WRITE requests to KSDS are subtasked.
| v All other file control requests are never subtasked.
| v Auxiliary temporary storage or intrapartition transient data requests are
| subtasked.
| v Resource security checking requests are subtasked when the CICS main TCB
| (quasi-reentrant mode) exceeds approximately 70% activity.

Where useful
Subtasking can be useful with CICS systems that use VSAM.

Subtasking should only be used in a multiprocessing system in a region that is
limited by a single processor but has spare capacity on other processors in the
MVS image. If used in other circumstances, it can cause throughput degradation
because of the dispatching of multiple tasks.

Limitations
Subtasking can improve throughput only in multiprocessor MVS images, because
additional processor cycles are required to run the extra subtask. For that reason,
we do not recommend the use of this facility on uniprocessors (UPs). It should be
used only for a region that reaches the maximum capacity of one processor in a
complex that has spare processor capacity or has NSR files that undergo frequent
CI/CA splitting.

Regions that do not contain significant amounts of VSAM data set activity
(particularly update activity) do not gain from VSAM subtasking.

Application task elapsed time may increase or decrease because of conflict between
subtasking overheads and better use of multiprocessors. Task-related DSA
occupancy increases or decreases proportionately.

Recommendations
SUBTSKS=1 should normally be specified only when the CICS system is run on a
MVS image with two or more processors and the peak processor utilization due to
the CICS main TCB in a region exceeds, say, about 70% of one processor, and a
significant amount of I/O activity within the CICS address space is eligible for
subtasking.

In this environment, the capacity of a second processor can be utilized to perform
the I/O scheduling activity for VSAM data sets, auxiliary temporary storage, and
intrapartition transient data.

The maximum system throughput of this sort of CICS region can be increased by
using the I/O subtask, but at the expense of some additional processing for
communication between the subtask and the MVS task under which the
transaction processing is performed. This additional processing is seldom justified
unless the CICS region has reached or is approaching its throughput limit.

A TOR that is largely or exclusively routing transactions to one or more AORs has
very little I/O that is eligible for subtasking. It is not, therefore, a good candidate
for subtasking.

An AOR is a good candidate only if a significant amount of VSAM I/O is
performed within the AOR rather than being function-shipped to an FOR.

Subtasking should be considered for a busy FOR that often has a significant
amount of VSAM I/O (but remember that DL/I processing of VSAM data sets is
not subtasked).

How implemented
The system initialization parameter, SUBTSKS=1, defines that subtasking is to be
| used.

| How monitored
| CICS dispatcher domain statistics include information about the modes of TCB
| listed in “Subtasking: VSAM (SUBTSKS=1)” on page 241.

| Note: CMF data and CICS trace are fully available.

|
Data tables
Data tables enable you to build, maintain and have rapid access to data records
contained in tables held in virtual storage above the 16MB line. Therefore, they can
provide a substantial performance benefit by reducing DASD I/O and pathlength
resources. The pathlength to retrieve a record from a data table is significantly
shorter than that to retrieve a record already in a VSAM buffer.

Effects
v After the initial data table load operation, DASD I/O can be eliminated for all
user-maintained and for read-only CICS-maintained data tables.
v Reductions in DASD I/O for CICS-maintained data tables are dependent on the
READ/WRITE ratio. This is a ratio of the number of READs to WRITEs that
was experienced on the source data set, prior to the data table implementation.
They also depend on the data table READ-hit ratio, that is, the number of
READs that are satisfied by the table, compared with the number of requests
that go against the source data set.
v CICS file control processor consumption can be reduced by up to 70%. This is
dependent on the file design and activity, and is given here as a general
guideline only. Actual results vary from installation to installation.
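
As a rough model (an assumption for illustration, not a CICS formula), the potential DASD read saving for a CICS-maintained data table can be estimated from these two ratios:

```python
# Rough model (an assumption, not a CICS formula): writes still reach the
# source data set, so only table read hits avoid physical access to it.

def reads_avoided(source_reads, read_hit_ratio):
    return source_reads * read_hit_ratio

reads, writes = 90_000, 10_000
print(f"READ/WRITE ratio: {reads / writes:.0f}:1")
print(f"reads avoided at a 95% hit ratio: {reads_avoided(reads, 0.95):,.0f}")
```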

For CICS-maintained data tables, CICS ensures the synchronization of source data
set and data table changes. When a file is recoverable, the necessary
synchronization is already effected by the existing record locking. When the file is
nonrecoverable, there is no CICS record locking and the note string position (NSP)
mechanism is used instead for all update requests. This may have a small
performance impact of additional VSAM ENDREQ requests in some instances.

Recommendations
v Remember that data tables are defined by two RDO parameters, TABLE and
MAXNUMRECS of the file definition. No other changes are required.
v Start off gradually by selecting only one or two candidates. You may want to
start with a CICS-maintained data table because this simplifies recovery
considerations.
v Select a CICS-maintained data table with a high READ to WRITE ratio. This
information can be found in the CICS LSRPOOL statistics (see page 416) or by
running a VSAM LISTCAT job.
v READ INTO is recommended, because READ SET incurs slightly more internal
overhead.

v Monitor your real storage consumption. If your system is already real-storage
constrained, having large data tables could increase your page-in rates. This in
turn could adversely affect CICS system performance. Use your normal
performance tools such as RMF to look at real storage and paging rates.
v Remember to select files that have a high proportion of full keyed direct reads as
CICS-maintained data table candidates.
v Files that have a large proportion of update activity that does not require to be
recovered across a restart would be better suited for user-maintained data tables.
v User-maintained data tables can use the global user exit XDTRD to modify as
well as select records. This could allow the user-maintained data table to contain
only the information relevant to the application.
v If storage isolation is specified, allow for the extra storage needed by the data
tables to prevent CICS incurring increased paging.

How implemented
Data tables can be defined using either the DEFINE FILE command of the CEDx
transaction or the DFHCSDUP utility program. See the CICS Resource Definition
Guide for more information.
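For example, a CICS-maintained data table could be defined to DFHCSDUP with a control statement of the following form (the file, group, and data set names here are illustrative only):

```
DEFINE FILE(CUSTFILE) GROUP(DTGROUP)
       DSNAME(CICSTS.SAMPLE.CUSTFILE)
       TABLE(CICS) MAXNUMRECS(50000)
```

TABLE(USER) would be specified instead of TABLE(CICS) for a user-maintained data table.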

How monitored
Performance statistics are gathered to assess the effectiveness of the data table.
They are in addition to those available through the standard CICS file statistics.

The following information is recorded:


v The number of attempts to read from the table
v The number of unsuccessful read attempts
v The number of bytes allocated to the data table
v The number of records loaded into the data table
v The number of attempts to add to the table
v The number of records rejected by a user exit when being added to the table
either during loading or via the API
v The number of attempts to add a record which failed due to the table being full
(already at its maximum number of records)
v The number of attempts to update table records via rewrite requests.
v The number of attempts to delete records from the table
v The highest value which the number of records in the table has reached since it
was last opened.

There are circumstances in which apparent discrepancies in the statistics may be
| seen, caused, for example, by the existence of inflight updates.
|
| Coupling facility data tables
| For a description of how to define a coupling facility data table (CFDT), and start a
| coupling facility data table server, see the CICS System Definition Guide.

| A CFDT is similar in many ways to a shared user-maintained data table, and the
| API used to store and retrieve the data is based on the file control API used for
| user-maintained data tables. The data, unlike a UMT, is not kept in a dataspace in

Chapter 18. VSAM and file control 245


| an MVS image and controlled by a CICS region, but kept in a coupling facility list
| structure, and control is shared between CFDT server regions. A CICS region
| requesting access to a CFDT communicates with a CFDT server region running in
| the same MVS image, using the MVS authorised cross-memory (AXM) server
| environment. This is the same technique used by CICS temporary storage servers.

| CFDTs are particularly useful for informal shared data. Uses could include a
| sysplex-wide shared scratchpad, look-up tables of telephone numbers, and creating
| a subset of customers from a customer list. Compared with existing methods of
| sharing data of this kind, such as shared data tables, shared temporary storage or
| RLS files, CFDTs offer some distinct advantages:
| v If the data is frequently accessed for modification, CFDT provides superior
| performance compared with function-shipped UMT requests, or using an RLS
| file.
| v CFDT-held data can be recoverable within a CICS transaction. Recovery of the
| structure is not supported, but the CFDT server can recover from a unit of work
| failure, and in the event of a CICS region failure, a CFDT server failure, and an
| MVS failure (that is, updates made by units of work that were in-flight at the
| time of the failure are backed out). Such recoverability is not provided by shared
| temporary storage.

| There are two models of coupling facility data table, a contention model or locking
| model.

| Using the contention model, an exception condition (CHANGED) notifies an
| application that a rewrite following a read for update, or a delete following a read
| for update, needs to be retried because the copy of the record in the table has been
| updated by another task before the rewrite or delete could be performed. The
| contention model does not lock a record, but uses the version number of the table
| entry for the record to check that it has not been altered. If the version of this
| record on rewrite or delete is not the same as when the original read for update
| was performed, the CHANGED condition is returned.

| The locking model causes records to be locked following a read for update request
| so that multiple updates cannot occur.

| A contention model CFDT is non-recoverable. A locking model CFDT may be
| recoverable or non-recoverable. For a non-recoverable locking model, CFDT locks
| are held until a read for update sequence is completed by a rewrite or delete, but
| not until the next syncpoint. Changes are not backed out if a unit of work fails. In
| the recoverable case, locks are held until syncpoint, and the CFDT record is
| recoverable in the event of a unit of work failure or CICS region failure.

| The relative cost of using update models and recovery is related to the number of
| coupling facility accesses needed to support a request. Contention requires the least
| number of accesses, but if the data is changed, additional programming and
| coupling facility accesses would be needed to handle this condition. Locking
| requires more coupling facility accesses, but does mean a request will not need to
| be retried, whereas retries can be required when using the contention model.
| Recovery also requires further coupling facility accesses, because the recovery data
| is kept in the coupling facility list structure.

| The following table shows the number of coupling facility accesses needed to
| support the CFDT request types by update model.



| Table 12. Coupling facility access by request type and update model.
| Request description Contention Locking Recoverable

| Open, Close 3 3 6
| Read, Point 1 1 1
| Write new record 1 1 2
| Read for Update 1 2 2
| Unlock 0 1 1
| Rewrite 1 1 3
| Delete 1 1 2
| Delete by key 1 2 3
| Syncpoint 0 0 3
| Lock WAIT 0 2 2
| Lock POST 0 2 2
| Cross-system POST 0 2 per waiting server 2 per waiting server
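As an illustration of how these figures combine, the following sketch (not part of the manual; the per-request access counts are copied from Table 12) totals the coupling facility accesses needed when a task reads a record for update, rewrites it, and then takes a syncpoint, under each update model:

```python
# Coupling facility accesses per request type, copied from Table 12.
# Each entry is (contention, locking, recoverable).
CF_ACCESSES = {
    "read_for_update": (1, 2, 2),
    "rewrite":         (1, 1, 3),
    "write_new":       (1, 1, 2),
    "delete":          (1, 1, 2),
    "syncpoint":       (0, 0, 3),
}

MODELS = {"contention": 0, "locking": 1, "recoverable": 2}

def total_accesses(requests, model):
    """Total coupling facility accesses for a sequence of requests."""
    column = MODELS[model]
    return sum(CF_ACCESSES[r][column] for r in requests)

# A task that reads a record for update, rewrites it, then syncpoints.
sequence = ["read_for_update", "rewrite", "syncpoint"]
for model in MODELS:
    print(model, total_accesses(sequence, model))
```

For this sequence, contention needs 2 accesses, locking 3, and recoverable 8, which illustrates why recovery is described below as the most costly mode of operation.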

| Locking model
| Records held in a coupling facility list structure are marked as locked by updating
| the adjunct area associated with the coupling facility list structure element that
| holds the data. Locking a record requires an additional coupling facility access to
| set the lock, having determined on the first access that the data was not already
| locked.

| If, however, there is an update conflict, a number of extra coupling facility accesses
| are needed, as described in the following sequence of events:
| 1. The request that hits lock contention is initially rejected.
| 2. The requester modifies the locked record adjunct area to express an interest in
| it. This is a second extra coupling facility access for the lock waiter.
| 3. The lock owner has its update rejected because the record adjunct area has
| been modified, requiring the CICS region to re-read and retry the update. This
| results in two extra coupling facility accesses.
| 4. The lock owner sends a lock release notification message. If the lock was
| requested by a different server, this results in a coupling facility access to write
| a notification message to the other server and a coupling facility access to read
| it on the other side.

| Contention model
| The contention update model uses the entry version number to keep track of
| changes. The entry version number is changed each time the record is updated.
| This allows an update request to check that the record has not been altered since
| its copy of the record was acquired.

| When an update conflict occurs, additional coupling facility accesses are needed:
| v The request that detects that the record has changed is initially rejected and a
| CHANGED response is sent.
| v The application receiving the response has to decide whether to retry the
| request.
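The application logic needed to handle the CHANGED condition can be sketched as follows. This is an illustrative Python model of version-based (optimistic) update of the kind the contention model performs, not CICS API code; all names are invented:

```python
class Changed(Exception):
    """Raised when the record version has moved since the read for update."""

# A table entry maps key -> (data, version). The version changes on every
# update, which is how the contention model detects interference.
table = {"K1": ("old-data", 1)}

def read_for_update(key):
    return table[key]            # returns (data, version)

def rewrite(key, new_data, version_seen):
    _, current_version = table[key]
    if current_version != version_seen:
        raise Changed            # another task updated the record first
    table[key] = (new_data, current_version + 1)

def update_with_retry(key, new_data, max_retries=3):
    # Re-read and retry on CHANGED, as the application must decide to do
    # under the contention model.
    for _ in range(max_retries):
        _, version = read_for_update(key)
        try:
            rewrite(key, new_data, version)
            return True
        except Changed:
            continue
    return False

update_with_retry("K1", "new-data")
```

Each retry costs an additional read and rewrite, which is why the contention model gives most benefit when CHANGED conditions are rare.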



| Effects
| In a test that compared the use of a CFDT with a function-shipped UMT between 2
| CICS regions running on different MVS members of a sysplex, it was found that
| overall CPU utilization was reduced by over 40% by using CFDTs. Some general
| observations that may be useful are:
| v Accesses to CFDT records of 4094 bytes or less (4096 bytes, or 4K, including 2
| bytes of prefix data) are handled as synchronous coupling facility requests by the
| CFDT server. Requests for records of greater than 4K bytes are made asynchronously.
| These asynchronous accesses cost a little more in CPU usage and response time.
| In a benchmark test comparing the same transaction rates (337 per second) but
| different record sizes, the less-than-4K CFDT workload took 41.7% less CPU
| than the UMT equivalent. The greater than 4K CFDT workload took 41.1% less
| CPU with no measurable degradation of response time.
| v Using the contention model requires the least coupling facility accesses but
| because the CHANGED condition needs to be handled and may need to be
| retried, maximum benefit is derived when there are few CHANGED conditions.
| These occurrences are reported in the CICS statistics which follow.
| v If the CFDT records are 63 bytes or less in length, the record data is stored in the
| entry adjunct area of the coupling facility list structure, which gives improved
| performance when using the contention update mode.
| v Using the locking model with recovery is the most costly mode of CFDT
| operation. Not only does this require more coupling facility accesses, but the
| CFDT server is also acting as a resource manager, co-ordinating the committal of
| updates in conjunction with the requesting CICS region. In a benchmark test
| involving the READ/UPDATE and REWRITE of CFDT records at a transaction
| rate of 168 per second, there was no significant difference in CPU utilization
| between transactions using contention and locking CFDTs. However, if the CFDT
| was defined as recoverable, the CPU utilization of the same transactions
| increased by approximately 15%.

| Recommendations
| Choose an appropriate use of a CFDT: for example, cross-system, recoverable
| scratchpad storage, where shared TS does not give the required function, or where
| VSAM RLS incurs too much overhead.

| A large file requires a large amount of coupling facility storage to contain it.
| Smaller files are better CFDT candidates (unless your application is written to
| control the number of records held in a CFDT).

| The additional cost of using a locking model compared with a contention model is
| not great. Considering that using the contention model may need application
| changes if you are using an existing program, locking is probably the best choice
| of update model for your CFDT. If coupling facility accesses are critical to you,
| they are minimized by the contention model.

| Recovery costs slightly more in CPU usage and in coupling facility utilization.

| Allow for expansion when sizing the CFDT. The amount of coupling facility
| storage a structure occupies can be increased dynamically up to the maximum
| defined in the associated coupling facility resource management (CFRM) policy
| with a SETXCF ALTER command. The MAXTABLES value defined to the CFDT
| server should allow for expansion. Therefore, consider setting it to a value higher



| than your initial requirements. If a CFDT does become full, its capacity can be
| increased using the CFDT operator command SET TABLE=name,MAXRECS=n.

| The utilization of the CFDT should be regularly monitored both through CICS and
| CFDT statistics and RMF. Check that the size of the structure is reasonable for the
| amount of data it contains. A maximum-used value of 80% is a reasonable target.
| Defining a maximum coupling facility list structure size in the CFRM policy
| definition to be greater than the initial allocation size specified by the POOLSIZE
| parameter in the CFDT server startup parameters enables you to enlarge the
| structure dynamically with a SETXCF ALTER command if the structure does fill in
| extraordinary circumstances.
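For example, assuming a list structure named DFHCFLS_PRODCFT1 and a table named SHRTABLE (both names are illustrative), the structure could be enlarged and the table capacity raised with commands of the following form:

```
SETXCF START,ALTER,STRNAME=DFHCFLS_PRODCFT1,SIZE=30720
SET TABLE=SHRTABLE,MAXRECS=100000
```

SETXCF is an MVS operator command; SET TABLE is a CFDT server command, entered, for example, through the MVS MODIFY command against the server region.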

| Ensure that the AXMPGANY storage pool is large enough. This can be increased
| by increasing the REGION size for the CFDT server. Insufficient AXMPGANY
| storage may lead to 80A abends in the CFDT server.

| How implemented
| A CFDT is defined to a CICS region using a FILE definition with the following
| parameters:
| v TABLE(CF)
| v MAXNUMRECS(NOLIMIT|number(1 through 99999999))
| v CFDTPOOL(pool_name)
| v TABLENAME(name)
| v UPDATEMODEL(CONTENTION|LOCKING)
| v LOAD(NO│YES)

| MAXNUMRECS specifies the maximum number of records that the CFDT can
| hold.
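For example, a locking model CFDT might be defined with the following attributes (the file, group, pool, and table names are illustrative only):

```
DEFINE FILE(CFDT1) GROUP(CFDTGRP)
       TABLE(CF) CFDTPOOL(PRODCFT1)
       TABLENAME(SHRTABLE)
       MAXNUMRECS(100000)
       UPDATEMODEL(LOCKING) LOAD(NO)
```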

| The first CICS region to open the CFDT determines the attributes for the file. Once
| opened successfully, these attributes remain associated with the CFDT through the
| data in the coupling facility list structure. Unless this table or coupling facility list
| structure is deleted or altered by a CFDT server operator command, the attributes
| persist even after CICS and CFDT server restarts. Other CICS regions attempting to
| open the CFDT must have a consistent definition of the CFDT, for example using
| the same update model.

| The CFDT server controls the coupling facility list structure and the data tables
| held in this structure. The parameters documented in the CICS System Definition
| Guide describe how initial structure size, structure element size, and
| entry-to-element ratio can be specified.

| How monitored
| Both CICS and the CFDT server produce statistics records. These are described in
| “Appendix C. Coupling facility data tables server statistics” on page 509.

| The CICS file statistics report the various requests by type issued against each
| CFDT. They also report if the CFDT becomes full, the highest number of records
| held and a Changed Response/Lock Wait count. This last item can be used to
| determine for a contention CFDT how many times the CHANGED condition was
| returned. For a locking CFDT this count reports how many times requests were
| made to wait because the requested record was already locked.



| CFDT statistics
| The CFDT server reports comprehensive statistics on both the coupling facility list
| structure it uses and the data tables it supports. It also reports on the storage used
| within the CFDT region by the AXM routines executed (the AXMPGLOW and
| AXMPGANY areas). This data can be written to SMF and may also be produced
| automatically at regular intervals or by operator command to the joblog of the
| CFDT server.

| The following is an example of coupling facility statistics produced by a CFDT
| server:
| DFHCF0432I Table pool statistics for coupling facility list structure DFHCFLS_PERFCFT2:
| Structure: Size Max size Elem size Tables: Current Highest
| 12288K 30208K 256 4 4
| Lists: Total In use Max used Control Data
| 137 41 41 37 4
| 100% 30% 30% 27% 3%
| Entries: Total In use Max used Free Min free Reserve
| 3837 2010 2010 1827 1827 191
| 100% 52% 52% 48% 48% 5%
| Elements: Total In use Max used Free Min free Reserve
| 38691 12434 12434 26257 26257 1934
| 100% 32% 32% 68% 68% 5%

| The above example shows the amount of space currently used in a coupling
| facility list structure (Size) and the maximum size (Max size) defined for the
| structure. The structure size can be increased by using a SETXCF ALTER
| command. The number of lists defined is determined by the MAXTABLES
| parameter for the CFDT server. In this example, the structure can support up to
| 100 data tables (and 37 lists for control information).
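A quick way to check a report like this against the 80% maximum-used target suggested under "Recommendations" is to compute peak utilization from the "Max used" and "Total" figures. The following sketch (illustrative only, not part of the CFDT server) uses the entry and element counts from the report above:

```python
def percent_used(max_used, total):
    """Peak utilization of a structure component, as a percentage."""
    return 100.0 * max_used / total

# 'Max used' and 'Total' figures for entries and elements, taken from the
# DFHCF0432I report shown above.
components = {
    "entries":  (2010, 3837),
    "elements": (12434, 38691),
}

for name, (max_used, total) in components.items():
    pct = percent_used(max_used, total)
    warning = " - consider enlarging the structure" if pct >= 80 else ""
    print(f"{name}: {pct:.0f}% maximum used{warning}")
```

For this report the peaks are roughly 52% of entries and 32% of elements, comfortably below the 80% target.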

| Each list entry comprises a fixed length section for entry controls and a variable
| number of data elements. The size of these elements is fixed when the structure is
| first allocated in the coupling facility, and is specified to the CFDT server by the
| ELEMSIZE parameter. The allocation of coupling facility space between entry
| controls and elements will be altered automatically and dynamically by the CFDT
| server to improve space utilization if necessary.

| The reserve space is used to ensure that rewrites and server internal operations can
| still function if a structure fills with user data.

| The amount of storage used within the CFDT region to support AXM requests is also
| reported. For example:
| AXMPG0004I Usage statistics for storage page pool AXMPGANY:
| Size In Use Max Used Free Min Free
| 30852K 636K 672K 30216K 30180K
| 100% 2% 2% 98% 98%
| Gets Frees Retries Fails
| 3122 3098 0 0
| AXMPG0004I Usage statistics for storage page pool AXMPGLOW:
| Size In Use Max Used Free Min Free
| 440K 12K 12K 428K 428K
| 100% 3% 3% 97% 97%
| Gets Frees Retries Fails
| 3 0 0 0

| The CFDT server uses storage in its own region for AXMPGANY and
| AXMPGLOW storage pools. AXMPGANY accounts for most of the available



| storage above 16MB in the CFDT region. The AXMPGLOW refers to
| 24-bit-addressed storage (below 16MB) and accounts for only 5% of this storage in
| the CFDT region. The CFDT server has a small requirement for such storage.

| RMF reports
| In addition to the statistics produced by CICS and the CFDT server, you can
| monitor the performance and use of the coupling facility list structure using the
| RMF facilities available on OS/390. A ‘Coupling Facility Activity’ report can be
| used to review the use of a coupling facility list structure. For example, this section
| of the report shows the DFHCFLS_PERFCFT2 structure size (12M), how much of
| the coupling facility is occupied (0.6%), some information on the requests handled,
| and how this structure has allocated and used the entries and data elements within
| this particular list structure.
| % OF % OF AVG LST/DIR DATA LOCK DIR REC/
| STRUCTURE ALLOC CF # ALL REQ/ ENTRIES ELEMENTS ENTRIES DIR REC
| TYPE NAME STATUS CHG SIZE STORAGE REQ REQ SEC TOT/CUR TOT/CUR TOT/CUR XI'S
|
| LIST DFHCFLS_PERFCFT2 ACTIVE 12M 0.6% 43530 93.2% 169.38 3837 39K N/A N/A
| 1508 11K N/A N/A

| RMF also reports on the activity (performance) of each structure, for example:
|
|
| STRUCTURE NAME = DFHCFLS_PERFCFT2 TYPE = LIST
| # REQ -------------- REQUESTS ------------- -------------- DELAYED REQUESTS -------------
| SYSTEM TOTAL # % OF -SERV TIME(MIC)- REASON # % OF ---- AVG TIME(MIC) -----
| NAME AVG/SEC REQ ALL AVG STD_DEV REQ REQ /DEL STD_DEV /ALL
|
| MV2A 43530 SYNC 21K 49.3% 130.2 39.1
| 169.4 ASYNC 22K 50.7% 632.7 377.7 NO SCH 0 0.0% 0.0 0.0 0.0
| CHNGD 0 0.0% INCLUDED IN ASYNC
| DUMP 0 0.0% 0.0 0.0

| This report shows how many requests were processed for the structure
| DFHCFLS_PERFCFT2 and average service times (response times) for the two
| categories of requests, synchronous and asynchronous. Be aware that requests for
| records greater than 4K are handled asynchronously. For an asynchronous request, the
| CICS region can continue to execute other work and is informed when the request
| completes. CICS waits for a synchronous request to complete, but these are
| generally very short periods. The example above shows an average service time of
| 130.2 microseconds (millionths of a second). CICS monitoring records show the delay
| time for a transaction due to waiting for a CFDT response. In the example above, a
| mixed workload of small and large files was used. You can see from the SERV
| TIME values that, on average, the ASYNC requests took nearly 5 times longer to
| process and that there was a wide variation in service times for these requests. The
| STD_DEV value for SYNC requests is much smaller.
|
| VSAM record-level sharing (RLS)
| VSAM record-level sharing (RLS) is a VSAM data set access mode, introduced in
| DFSMS™ Version 1 Release 3, and supported by CICS. RLS enables VSAM data to
| be shared, with full update capability, between many applications running in many
| CICS regions. With RLS, CICS regions that share VSAM data sets can reside in one
| or more MVS images within an MVS parallel sysplex.

| RLS also provides some benefits when data sets are being shared between CICS
| regions and batch jobs.

| RLS involves the use of the following components:


| v A VSAM server, subsystem SMSVSAM, which runs in its own address space to
| provide the RLS support required by CICS application owning regions (AORs),
| and batch jobs, within each MVS image in a Parallel Sysplex environment.



| The CICS interface with SMSVSAM is through an access control block (ACB),
| and CICS registers with this ACB to open the connection. Unlike the DB2 and
| DBCTL database manager subsystems, which require user action to open the
| connections, if you specify RLS=YES as a system initialization parameter, CICS
| registers with the SMSVSAM control ACB automatically during CICS
| initialization.
| A CICS region must open the control ACB to register with SMSVSAM before it
| can open any file ACBs in RLS mode. Normal file ACBs remain the interface for
| file access requests.
| v Sharing control data sets. VSAM requires a number of these for RLS control.
| The VSAM sharing control data sets are logically-partitioned, linear data sets.
| They can be defined with secondary extents, but all the extents for each data set
| must be on the same volume.
| Define at least three sharing control data sets, for use as follows:
| – VSAM requires two active data sets for use in duplexing mode
| – VSAM requires the third data set as a spare in case one of the active data sets
| fails.

| See the DFSMS/MVS DFSMSdfp Storage Administration Reference for more
| information about sharing control data sets, and for a JCL example for defining
| them.
| v Common buffer pools and control blocks. For data sets accessed in non-RLS
| mode, VSAM control blocks and buffers (local shared resources (LSR) pools) are
| located in each CICS address space and are thus not available to batch
| programs, and not even to another CICS region.
| With RLS, all the control blocks and buffers are allocated in an associated data
| space of the SMSVSAM server. This provides one extremely large buffer pool for
| each MVS image, which can be shared by all CICS regions connected to the
| SMSVSAM server, and also by batch programs. Buffers in this data space are
| created and freed automatically.
| DFSMS provides the RLS_MAX_POOL_SIZE parameter that you can specify in
| the IGDSMSxx SYS1.PARMLIB member. There are no other tuning parameters
| for RLS as there are with LSR pools—management of the RLS buffers is fully
| automatic.

| Effects
| There is an increase in CPU cost when using RLS compared with function-shipping
| to an FOR using MRO. When measuring CPU usage using the standard DSW
| workload, the following comparisons were noted:
| v Switching from local file access to function-shipping across MRO cross-memory
| (XM) connections incurred an increase of 7.02 ms per transaction in a single
| CPC.
| v Switching from MRO XM to RLS incurred an increase of 8.20ms per transaction
| in a single CPC.
| v Switching from XCF/MRO to RLS using two CPCs produced a reduction of
| 2.39ms per transaction.
| v Switching from RLS using one CPC to RLS using two CPCs made no
| appreciable difference.

| In terms of response times, the performance measurements showed that:



| v Function-shipping with MRO XM is better than RLS, but this restricts
| function-shipping to within one MVS image, and prevents full exploitation of a
| Parallel Sysplex with multiple MVS images or multiple CPCs.
| v RLS is better than function-shipping with XCF/MRO, when the FOR is running
| in a different MVS image from the AOR.

| However, performance measurements on their own do not tell the whole story, and
| do not take account of other factors, such as:
| v As more and more applications need to share the same VSAM data, the load
| increases on the single file-owning region (FOR) to a point where the FOR can
| become a throughput bottleneck. The FOR is restricted, because of the CICS
| internal architecture, to the use of a single TCB for user tasks, which means that
| a CICS region generally does not exploit multiple CPs.
| v Session management becomes more difficult as more and more AORs connect
| to the FOR.
| v In some circumstances, high levels of activity can cause CI lock contention,
| causing transactions to wait for a lock even though the specific record being
| accessed is not itself locked.

| These negative aspects of using an FOR are resolved by using RLS, which provides
| the scalability lacking in a FOR.

| How implemented
| To use RLS access mode with CICS files:
| 1. Define the required sharing control data sets
| 2. Specify the RLS_MAX_POOL_SIZE parameter in the IGDSMSxx SYS1.PARMLIB
| member.
| 3. Ensure the SMSVSAM server is started in the MVS image in which you want
| RLS support.
| 4. Specify the system initialization parameter RLS=YES. This enables CICS to
| register automatically with the SMSVSAM server by opening the control ACB
| during CICS initialization. RLS support cannot be enabled dynamically later if
| you start CICS with RLS=NO.
| 5. Ensure that the data sets you plan to use in RLS-access mode are defined, using
| Access Method Services (AMS), with the required recovery attributes using the
| LOG and LOGSTREAMID parameters on the IDCAMS DEFINE statements. If
| you are going to use an existing data set that was defined without these
| attributes, redefine the data set with them specified.
| 6. Specify RLSACCESS(YES) on the file resource definition.
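For example, step 5 could take the following form in an AMS job (the data set and log stream names are illustrative only):

```
DEFINE CLUSTER (NAME(CICSTS.RLS.CUSTFILE) -
    INDEXED KEYS(16 0) RECORDSIZE(250 250) -
    LOG(ALL) -
    LOGSTREAMID(CICSTS.CUSTFILE.DFHJ01)) -
  DATA (NAME(CICSTS.RLS.CUSTFILE.DATA)) -
  INDEX (NAME(CICSTS.RLS.CUSTFILE.INDEX))
```

LOG(ALL) makes the data set both backward and forward recoverable, and requires a LOGSTREAMID naming the forward recovery log; LOG(UNDO) gives backout-only recovery, in which case LOGSTREAMID is not needed.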

| This chapter has covered the three different modes that CICS can use to access a
| VSAM file. These are non-shared resources (NSR) mode, local shared resources
| (LSR) mode, and record-level sharing (RLS) mode. (CICS does not support VSAM
| global shared resources (GSR) access mode.) The mode of access is not a property
| of the data set itself—it is a property of the way that the data set is opened. This
| means that a given data set can be opened by a user in NSR mode at one time,
| and RLS mode at another. The term non-RLS mode is used as a generic term to
| refer to the NSR or LSR access modes supported by CICS. Mixed-mode operation
| means a data set that is opened in RLS mode and a non-RLS mode concurrently,
| by different users.



| Although data sets can be open in different modes at different times, all the data
| sets within a VSAM sphere must normally be opened in the same mode. (A sphere
| is the collection of all the components—the base, index, any alternate indexes and
| alternate index paths—associated with a given VSAM base data set.) However,
| VSAM does permit mixed-mode operations on a sphere by different applications,
| subject to some CICS restrictions.

| How monitored
| Using RLS-access mode for VSAM files involves SMSVSAM as well as the CICS
| region issuing the file control requests. This means monitoring the performance of
| both CICS and SMSVSAM to get the full picture, using a combination of CICS
| performance monitoring data and SMF Type 42 records written by SMSVSAM:
| CICS monitoring
| For RLS access, CICS writes performance class records to SMF containing:
| v RLS CPU time on the SMSVSAM SRB
| v RLS wait time.
| SMSVSAM SMF data
| SMSVSAM writes Type 42 records, subtypes 15, 16, 17, 18, and 19,
| providing information about coupling facility cache sets, structures, locking
| statistics, CPU usage, and so on. This information can be analyzed using
| RMF III post processing reports.

| The following is an example of the JCL that you can use to obtain a report of
| SMSVSAM data:
| //RMFCF JOB (accounting_information),MSGCLASS=A,MSGLEVEL=(1,1),CLASS=A
| //STEP1 EXEC PGM=IFASMFDP
| //DUMPIN DD DSN=SYS1.MV2A.MANA,DISP=SHR
| //DUMPOUT DD DSN=&&SMF,UNIT=SYSDA,
| // DISP=(NEW,PASS),SPACE=(CYL,(10,10))
| //SYSPRINT DD SYSOUT=*
| //SYSIN DD *
| INDD(DUMPIN,OPTIONS(DUMP))
| OUTDD(DUMPOUT,TYPE=000:255))
| //POST EXEC PGM=ERBRMFPP,REGION=0M
| //MFPINPUT DD DSN=&&SMF,DISP=(OLD,PASS)
| //SYSUDUMP DD SYSOUT=A
| //SYSOUT DD SYSOUT=A
| //SYSPRINT DD SYSOUT=A
| //MFPMSGDS DD SYSOUT=A
| //SYSIN DD *
| NOSUMMARY
| SYSRPTS(CF)
| SYSOUT(A)
| REPORTS(XCF)
| /*
|

| CICS file control statistics contain the usual information about the numbers of file
| control requests issued in the CICS region. They also identify which files are
| accessed in RLS mode and provide counts of RLS timeouts. They do not contain
| EXCP counts, or any information about the SMSVSAM server, or its buffer usage,
| or its accesses to the coupling facility.



|

| Chapter 19. Java program objects


| This chapter describes CICS performance considerations for Java program objects
| built using VisualAge for Java, Enterprise Toolkit for OS/390 (ET/390). The
| following topics are included:
| v “Overview”
| v “Performance considerations”
| v “Workload balancing of IIOP method call requests” on page 258

|
| Overview
| The high level of abstraction required for Java or any OO language involves
| increased layering and more dynamic runtime binding as a necessary part of the
| language. This incurs extra runtime performance cost.

| The benefits of using Java language support include the ease of use of Object
| Oriented programming, and access to existing CICS applications and data from
| Java program objects. The cost of these benefits is currently runtime CPU and
| storage. Although there is a significant initialization cost, even for a Java program
| object built with ET/390, that cost amounts to only a few milliseconds of CPU time
| on the latest S/390® G5 processors. You should not see a noticeable increase in
| response time for a transaction written in Java unless CPU is constrained, although
| there will be a noticeable increase in CPU utilization. You can, however, take
| advantage of the scalability of the CICSplex architecture, and in particular, its
| parallel sysplex capabilities, to scale transaction rates.
|
| Performance considerations
| The main areas that may affect the CPU costs associated with running Java
| program objects with CICS, are discussed in the following sections:
| v “DLL initialization”
| v “LE runtime options” on page 256
| v “API costs” on page 257
| v “CICS system storage” on page 257

| DLL initialization
| At run time, when a Java program is initialized, all dynamic link libraries (DLLs)
|