Responsive Systems Company

DB2 System Performance Metrics

Joel Goldstein

Responsive Systems Company

Morganville, NJ 07751

(908) 972-1261

Buffer Pool Tool™ for DB2
The only product that can Predict the effect of changes to the buffer pools.

Coupling Facility Sizing Module
Sizes the CF structures based on statistical data from your 'operating' DB2 system(s).

DB2 is a complex database management system that has a major impact upon your overall processing environment. Installations are constantly looking for ways to measure, analyze, and control its resource consumption. This presentation assumes that the reader has a basic understanding of DB2, and does not define basic DB2 terminology. It addresses the origins of, and the data available for, analyzing, tuning, and tracking the performance of the DB2 system and applications, and the interdependencies and relationships between the various data elements. Many of the system tuning options are explained and the trade-offs discussed, since systems and applications differ somewhat for each corporation. The Data Sharing Sysplex is not discussed in this presentation.


DB2 is viewed by many people as unpredictable, and sometimes as a "black hole" that consumes all available resources. However, DB2 does not have to be unpredictable, or consume all available resources. There is adequate information available from DB2 and other sources to accurately track resource consumption, determine when the system or applications are not performing properly, and determine the cause so corrective action may be taken.

This paper focuses on the data in the DB2 Statistics and Accounting records, and discusses how this data can be used together with other MVS system information sources, such as RMF, for performance analysis and tuning of the DB2 system and applications.

DB2 Record Structures

The Statistics (SMF Type 100) and Accounting (SMF Type 101) records both use self-defining DSECTs that may or may not occur, depending upon the functions and types of activity at the system and application level. The SMF Type 102 records fall into two categories. The first two records are general in nature, and contain useful system information such as the parameters DB2 is currently using (ZPARM and dynamic system specifications), and tables that map Database ID (DBID) and Object ID (OBID) to the actual Database and Tablespace/Table names. The balance of the 102 records produced by DB2 are specific performance trace records; many of these are also useful for several performance tuning approaches.

The first place to start is with an assembly of the applicable macros to produce record formats. The macros exist in the DB2 Maclib, and are:




Note: The Statistics macro requires the 'subtype' parameter, because there are actually two records produced by DB2. Both records use SMF type 100; however, each has a different DB2 Instrumentation Facility Counter ID (IFCID). IFCID 1 contains the system-related functions handled by the DB2 System Services address space, commonly called the DB2 Master. IFCID 2 contains database-related functions handled by the DB2 Database Services address space, commonly known as the DB2 Database Manager. The performance trace records use IFCIDs from 4 through 241 in DB2 V3.1, with an expansion up to IFCID 306 with V4.1. The Accounting record is IFCID 3. To obtain a total picture of all possible records produced by a DB2 system, you should assemble DSNDQWIW, also using SUBTYPE=ALL.

Statistics Records

The Statistics records are produced when DB2 is started or stopped, at timed intervals as specified in the ZPARM, when Performance or Statistics traces are started, and when DB2 takes a checkpoint. The normal and default interval for Statistics is 30 minutes, plus or minus a few seconds. It is not possible to 'synch' these occurrences to any interval as with RMF.

The Statistics records are much more difficult to deal with than the Accounting records, because most of the fields contain values accumulated since DB2 was last started. The values from the preceding record must be saved and subtracted from the current values to obtain the activity for the current time interval. Given this, it is then important to know which fields are not accumulated. Without listing all of them, the more important ones are the EDM Pool data, the RID Pool data, and the number of active buffers for each Buffer Pool that has been used.
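The bookkeeping described above can be sketched as follows. This is a minimal illustration in Python; the field names are hypothetical, not the actual DSECT labels from the assembled macros.

```python
# Sketch of interval reporting against cumulative DB2 Statistics counters.
# Field names are illustrative only, not the real DSECT labels.
SNAPSHOT_FIELDS = {"edm_free_pages", "rid_blocks_current", "active_buffers"}

def interval_activity(previous, current):
    """Difference two consecutive Statistics records: subtract the
    cumulative counters, but pass snapshot fields through unchanged."""
    out = {}
    for name, value in current.items():
        if name in SNAPSHOT_FIELDS:
            out[name] = value                   # point-in-time value
        else:
            out[name] = value - previous[name]  # accumulated since DB2 start
    return out

prev = {"getpage_requests": 1_000_000, "sync_read_ios": 40_000, "active_buffers": 1_800}
curr = {"getpage_requests": 1_250_000, "sync_read_ios": 47_500, "active_buffers": 1_991}
delta = interval_activity(prev, curr)
# delta now holds the activity for the current interval only
```

A real reporter would also handle the case where DB2 was restarted mid-interval (counters reset to zero), which this sketch omits.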

Accounting Records

The first thing necessary to understand is when Accounting records are produced, since this is dependent upon the attaching environment and the options of the Allied Agent. Accounting records are produced when a thread is terminated, i.e., a batch job ends, or when the Authorization ID (AUTHID) changes if an existing thread is reused.

Whenever an AUTHID changes, DB2 considers this a SIGNON process, and must validate the authorization of the AUTHID to access the system, PLAN, and underlying resources. This is obviously overhead we would like to reduce as much as possible for high-volume transaction environments. DB2 provides an 'Authorization Cache' as a 1K (default size) block in the EDM Pool, where up to 124 of the most frequently used AUTHIDs are maintained for each PLAN. The cache can be specified as large as 4K (set when a PLAN is bound), which holds 508 AUTHIDs. The Authorization Cache eliminates Catalog access for validation if the AUTHID is in the cache; for large systems with thousands of users, there will still be a high level of Catalog access, even with the 4K cache. A PLAN may be granted to 'Public' and not require authorization checking, but this may create a security exposure.

The various CICS RCT options can dramatically reduce this overhead by passing the same AUTHID to DB2 for all users. This can improve throughput; however, it causes a loss of accountability and performance tracking facilities, because the activity and resource consumption of many users are combined into one DB2 Accounting record. A CICS application Syncpoint causes a DB2 COMMIT, but does not cause thread termination or an Accounting record to be produced. However, specifying TOKENE=YES in the CICS RCT causes a signon to take place, and will produce an Accounting record for each transaction. DB2 provides a 'token id' as part of the record that can be matched to activity records on the CICS side.
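The 124- and 508-entry figures above are consistent with a simple layout assumption: a small fixed header plus 8 bytes per cached AUTHID. The layout below is hypothetical, chosen only because it reproduces both published figures.

```python
def authids_cached(cache_bytes, header_bytes=32, entry_bytes=8):
    """Assumed (hypothetical) cache layout: a 32-byte header plus
    8 bytes per AUTHID entry. This reproduces the 124-entry figure
    for the 1K default cache and the 508-entry figure for 4K."""
    return (cache_bytes - header_bytes) // entry_bytes

default_cache = authids_cached(1024)   # 124 entries
maximum_cache = authids_cached(4096)   # 508 entries
```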

Looking at Performance Indicators

There are two areas that must be considered before we start to look at actual performance indicators: first, the functions and responsibilities of the DB2 system address spaces and the Allied Agent address spaces; and second, where the CPU components accrue, and the type of component, based upon the function. The following chart (Chart 1) will begin to shed some light upon where the charges accrue, why, and what is considered normal versus an indicator of unusual activity. This chart does not include information for the Stored Procedures address space of Version 4, because of a lack of measurement data at this time.

System Performance Indicators

The first area to address is the CPU component of the three major DB2 system address spaces. This paper will not address the DDF address space, or distributed facilities. DB2 uses both TCB and SRB mode for its system-level operations. It uses SRB mode extensively for the performance-critical functions, because it requires less overhead to dispatch an SRB than a TCB, and because SRBs are non-preemptable (not considering the Enclave SRB available with MVS/ESA V5). Let's start with the DB2 System Services address space, commonly referred to as the DB2 Master, MSTR, or SSAS. The normal high-priority tasks, Logging and Checkpointing, use SRB mode. They usually do not consume large amounts of CPU, with the impacts of the logging and checkpointing facilities directly related to the insert/update/delete activity of the applications. The other function that uses SRB mode is BACKOUT. Backout is a critical function for maintaining data integrity. When a backout situation is encountered by DB2, it is handled as the highest-priority task in the system, and will consume major amounts of CPU at the dispatching priority of the MSTR address space (refer to my paper in the CMG90 proceedings for an analysis of this activity). The other tasks performed by the MSTR address space are BSDS access, Archiving, and allocating the Archive Log datasets. If no Archiving or Backout activity takes place within a Statistics interval, the total CPU time will be quite small compared to the Database Manager address space, and should be about 90% SRB time. If Archiving takes place, the ratio may be about 50% each of TCB and SRB, or may weigh slightly more to the TCB side; there is quite a bit of latitude here. If a Backout of any size occurs, there will be little doubt in your mind when you look at the SRB component, and you may have seen the lights dim at your processing complex while this address space consumed most of your CPU resources during the Backout process.
Chart 1 shows the address space incurring the cost of the various functions in a DB2 environment.

Where Do the CPU Charges Accrue?

Application/DB2 Function                    Addr. Space   Type
Create Thread (Partial)                     User          TCB
SQL Processing                              User          TCB
Synchronous Read                            User          TCB
Synchronous Writes                          User          TCB
Logical Logging                             User          TCB
Locking - Get/Release                       User          TCB
Fetch/Data Retrieval                        User          TCB
Sequential/List/Dynamic Prefetch            DBAS          SRB
Asynchronous Writes                         DBAS          SRB
Space Management                            DBAS          SRB
Create Thread (Initial Part)                DBAS          SRB
Open/Close Datasets, Manage Drain Queues    DBAS          TCB
Physical Logging                            SSAS          SRB
Checkpoint Processing                       SSAS          SRB
Backout Processing                          SSAS          SRB
Archiving Logs                              SSAS          TCB
Allocate Archive Datasets                   SSAS          TCB
BSDS Processing                             SSAS          TCB
Deadlock Processing                         IRLM          SRB
Lock Resume                                 IRLM          SRB
Lock Error Management                       IRLM          TCB

Chart 1

The Database Manager (DBAS or DBM1) address space should show the largest CPU component, and most of it should be indicated as SRB time during normal processing hours. Note the "normal processing hours". This is normally your prime shift, possibly 9AM to 5PM, possibly longer depending upon your installation. Unless your system has decision support applications or something similar that accesses some tablespaces infrequently, there should not be any TCB time reflected in the CPU component. If there is, it means that some tablespaces and/or indices have the default Closerule of YES. This will certainly be a major impact upon response time. If there is true open/close activity, it will noticeably degrade overall system performance and may tend to fragment the EDM Pool. Using a Closerule of YES causes additional overhead for the DBAS address space even when objects are not actually closed/opened, because they are managed on a "slow close" or drain queue. Additionally, DB2 will switch an object to "read only" status and close the Syslgrng entry (based on the ZPARM specification), both to avoid backout/recovery implications and to decrease the amount of work and the number of records written at checkpoint intervals.

The IRLM (IMS/VS Resource Lock Manager) handles only locking, and uses SRB mode 99.9% of the time. There is an occasional slight tick of TCB mode; however, it is so small that it probably doesn't matter. Normally, the total amount of time accrued to the IRLM, compared to the DBAS and SSAS, is between small and insignificant - somewhere in the range of 1% or less of the SRB component of the DBAS. Obviously, there are exceptions. Since the IRLM must maintain all locks on all objects in the system, the greater the total number of locks, the more work it will require. There are several parameters that can affect IRLM overhead. The first is MAXLKTS (Maximum Locks per Tablespace), the second is IRLMRWT (Timeout for Lock Waits), the third is the Deadlock Detection Cycle time specified on the Proc for the IRLM, and the last is the PC parameter on the IRLM PROC. The Deadlock Detection cycle time should be set at 50% or less of the Timeout value, so deadlocks will be detected before a Timeout occurs. However, if it is set too low, the IRLM may consume more resources than necessary. Additionally, when the IRLM is performing its detection cycle, it cannot handle any other locking requests, and they must wait. As the number of locked resources increases, the overhead to perform the detection cycle will also increase. Whenever a timeout is detected, the IRLM will perform Deadlock Detection twice to ensure that no Deadlock situations exist that may be affecting and/or causing the Timeout. The trade-off here is to balance the overhead with overall system and application performance by getting rid of deadlock situations quickly, while keeping the overhead as low as possible. Some installations with high numbers of deadlocks have reduced the detection cycle time to as low as one (1) second. DB2 Version 3 lock avoidance techniques, and (especially) Version 4 Type 2 indexes, substantially reduce the number of locks for a system.
Some installations experienced a 50%+ reduction in the number of locks system-wide when Type 2 indexes were implemented. The last parameter is PC=N (which it should be) on the JCL Proc. This places all locks (250 bytes each) into ECSA instead of the private area of the IRLM, reducing overall locking overhead by about 5%. It is important that ECSA is sized large enough to hold 3-5 MB of locks. If it isn't, DB2 becomes very unfriendly - the IRLM will simply get whatever additional space it requires from CSA below the 16M line (up to the maximum amount specified in the startup proc), and never tell you or issue any warning messages. If your CSA utilization is somewhat on the "tight" side, this will certainly ruin your day. Now let's start to look at specific performance indicators available in the Statistics record(s).

EDM Pool
The first indicator is Space Failures. This is easy, since it should always be zero. The following recommendations are based upon your peak processing periods. The next easiest is the I/O rates for the DBDs, CTs, and PKCTs. IBM recommends that the ratio should be less than 20% of the number of requests. Experience has shown that the ratios should be considerably less for good performance in a high-transaction-volume system. The ratios to look for are 1% or less for the DBDs (usually zero for transaction processing systems during prime shift), and 10% or less for both the CTs and PKCTs. Now let's look at Free Space. This is normally evaluated during your prime activity shift, but some installations have heavier pool requirements during the batch workload, so this period should be evaluated occasionally too. The minimum amount of free space you should ever have is 20%, and the maximum should be no more than 40%. More than 40% free is simply wasting memory, unless you know that some new applications will be coming into the system in the near future.
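The EDM Pool checks above reduce to a few simple ratio tests. A sketch, using hypothetical interval counts (the real values come from differenced Statistics records):

```python
def edm_pool_findings(dbd_reqs, dbd_loads, ct_reqs, ct_loads,
                      pkct_reqs, pkct_loads, pool_pages, free_pages):
    """Check EDM Pool indicators against the guideline ratios above:
    DBD loads at 1% or less of requests, CT/PKCT loads at 10% or less,
    and free space between 20% and 40% of the pool."""
    findings = []
    if dbd_loads > 0.01 * dbd_reqs:
        findings.append("DBD load ratio above 1%")
    if ct_loads > 0.10 * ct_reqs:
        findings.append("CT load ratio above 10%")
    if pkct_loads > 0.10 * pkct_reqs:
        findings.append("PKCT load ratio above 10%")
    free_pct = 100.0 * free_pages / pool_pages
    if free_pct < 20.0:
        findings.append("less than 20% free")
    elif free_pct > 40.0:
        findings.append("more than 40% free (wasted memory)")
    return findings

# Hypothetical interval: load ratios are healthy, but the pool is tight
issues = edm_pool_findings(dbd_reqs=50_000, dbd_loads=0,
                           ct_reqs=20_000, ct_loads=1_500,
                           pkct_reqs=10_000, pkct_loads=400,
                           pool_pages=2_500, free_pages=300)
```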

Buffer Pools
The major impact thresholds of SPTH, DMTH, and IWTH should be familiar to everyone, so I won't belabor the obvious, and will instead address items that are not quite so obvious, or where my approach differs substantially from most of the online monitors and other products. Current Buffers Active is not a high-usage count; it reflects only the microsecond in time when the Statistics record is being created. However, that does not mean the number is totally without value. It must be coupled with the Deferred Write Threshold counter and the Number of Buffers before it may or may not indicate a problem. DB2 Version 3 split the DWTH counter into two counters, VWTH and DWTH. DB2 maintains a Vertical Deferred Write Queue (VWTH) per tablespace. The default for VWTH is 10%, and the default for DWTH is 50%. Reaching either of these thresholds will trigger deferred writes of those pages, and will also increment the respective counter. Most installations should lower both of these thresholds, based on a calculation of the average number of pages per write. So far, most observed installations show a pages-per-write ratio between two and five. This means that DB2 must schedule multiple physical writes each time these thresholds are reached. DB2 can (potentially) write up to 128 pages (for one object) with a single I/O; however, it will only write CIs within a range of 150 RBAs with a single I/O, to reduce the duration of write events. Therefore, it is usually beneficial to lower the VWTH as far as possible (1% is the lowest you should use) to trigger the asynchronous write events sooner. This smooths the write intensity both for normal processing and, especially, at checkpoint intervals. The DWTH should also be lowered, especially when there are many objects in a pool that are updated on a frequent basis. The Buffer Pool Hit Ratio is a significant performance indicator, and should be used in conjunction with the overall I/O rate per second.
The proper system hit ratio calculation is: add the number of Synchronous I/Os to the pages read by Sequential Prefetch, List Prefetch, and Dynamic Prefetch; subtract this sum from the number of Getpage Requests; then divide the result by the number of Getpage Requests.

  • Overall System Hit Ratio

A Prefetch Request may often read fewer pages than the maximum prefetch quantity, if some of the pages in the requested block are already in the pool. This hit ratio should be used in conjunction with the I/O rate per second, which can be calculated by adding all the I/O events and dividing by the duration of the statistics interval in seconds.
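The two calculations above can be sketched directly. The interval counts below are hypothetical, round numbers chosen only for illustration:

```python
def system_hit_ratio(getpages, sync_ios, seq_pf_pages, list_pf_pages, dyn_pf_pages):
    """Overall system hit ratio: the fraction of Getpage requests that
    did not require a page to be read in from DASD."""
    pages_read = sync_ios + seq_pf_pages + list_pf_pages + dyn_pf_pages
    return 1.0 - pages_read / getpages

def io_rate_per_sec(sync_ios, seq_pf_ios, list_pf_ios, dyn_pf_ios, interval_secs):
    """Companion indicator: total read I/O events per second."""
    return (sync_ios + seq_pf_ios + list_pf_ios + dyn_pf_ios) / interval_secs

# Hypothetical 30-minute (1800-second) interval
hit = system_hit_ratio(1_000_000, 30_000, 150_000, 5_000, 15_000)  # 0.80
rate = io_rate_per_sec(30_000, 5_000, 700, 1_300, 1_800)
```

Note that the hit ratio counts prefetch *pages* read, while the I/O rate counts prefetch *I/O events*; the two denominators are deliberately different.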

A major impact, although not encountered too often, is Prefetch Disabled - No Read Engine. This is based upon the maximum number of Media Manager Request Blocks (MMRBs) that DB2 has available. The number is presently fixed at 300; however, this can change at IBM's discretion, and will take effect when you apply that maintenance to your system. This situation has been encountered several times; however, it was not something that could be repeated to determine its cause. It is often an indication of I/O subsystem contention that is not allowing Prefetch to complete fast enough, so that all the "engines" have requests queued waiting for completion. This will show up in the application Accounting record as a "wait for asynchronous I/O"; however, these wait times are for all asynchronous reads the application waited for, not only or necessarily those caused by a lack of read engines. Another indicator that is not fully understood by many people is Synchronous Writes. These are triggered if the IWTH is reached; however, there are two other situations that will cause this counter to be incremented that do not have the same overall performance impact: closing a tablespace when updated buffers are still in the pool, and a second checkpoint occurring while updated pages that have not been written are still in the buffer pool. So, if prefetch has not been disabled, and DMTH has not been reached, then low numbers of synchronous writes are nothing to worry about.

Aside from application design and SQL coding, buffer pool tuning is the single greatest tuning lever available to improve application performance. Hopefully, the existence of sixty pools dispels any myth that using one large pool is the best way to optimize performance. HiperPools are an effective way to leverage less expensive expanded storage to obtain better overall hit ratios and eliminate I/O. However, many installations have implemented HiperPools with minimal benefit - not because they don't work, but because the basics of pool tuning were not done first. It is important to understand what types of objects can benefit from larger amounts of pool memory, whether it is all VP, or a mix of VP and HP. Very large objects that are accessed using sequential prefetch will not benefit very much, if at all. Very large objects with very random access usually will not see great improvements. Objects with very high update rates will not get much improvement from an HP. When HPs are implemented, it becomes just as important to evaluate their benefit as to track overall performance. A few different approaches should be used to evaluate the benefit from an HP.

  • % of Pages Read Back from the HP

This formula shows the amount of benefit provided by the HP. While low percentages usually imply little benefit, the number of pages read into the VP from the HP should also be divided by the measurement period to determine the I/Os per second that the HP saved. If this is very low, then the HP is not providing any benefit. However, for very large volume systems, it is possible to have a percentage in the low teens and still save several I/Os per second.

  • VP Hit Ratio

The VP hit ratio shows the percentage of pages found in the VP only, differing from the overall system hit ratio provided earlier.

  • % of Overall Pages found in HP

This formula provides the percentage of requested pages that were found in the HP.

Report 1

Sysid=MVSB, Date=07/10/1996, Hour=17

Buffer Pool = 0

BP Parameters:    VP Size = 7000                  VP Sequential Thresh = 80%     Defer Write Thresh = 50%
                  HP Size = 21000                 HP Sequential Thresh = 80%     Vert Defer Wrt Thresh = 10%
                  Castout Attribute = YES         VP Parallel Seq Thresh = 50%

General:          VP Buffers Allocated = 28000    Active Buffers = 1991          VP Expand/Contract = 0
                  HP Buffers Allocated = 84000    HP Buffers Backed = 84000      HP Expand/Contract = 0
                  Recall Migrated Dataset = 0     Dataset Opens = 16             VP/HP Expand Failed = 0
                  Recall Timeout = 0

Read Operations:  Getpage Requests = 8775990      Getpage Reqs - Seq = 4131386   Getpage Reqs - Random = 4644604
                  GetP Fail - VP Full = 0         GetP Fail - Cond Req = 0       GetP/Sync Read I/O = 27
                  Sync Read I/Os = 327016         Sync Read I/Os - Seq = 90551   Sync Read I/Os - Random = 236465
                  Seq Prefetch Reqs = 135172      List Prefetch Reqs = 28795     Dynamic Prefetch Reqs = 144208
                  Seq Prefetch Pages = 3725599    List Prefetch Pages = 7660     Dynamic Prefetch Pages = 768200
                  Seq Prefetch I/Os = 120453      List Prefetch I/Os = 1104      Dynamic Prefetch I/Os = 35786
                  Pgs Read/SPrefetch I/O = 31     Pgs Read/LPrefetch I/O = 7     Pgs Read/DPrefetch I/O = 21
                  PFetch Fail - No Buff = 0       PFetch Fail - No Engine = 0    Page-ins Due to Read = 7
                  HP Pgs Read Sync = 21965        HP Pgs Rd Async-No ADM = 13936 HP Pgs Rd Async - ADM = 100262
                  HP Read Fail - No ADM = 0       HP Read Fail - ADM = 0

Write Operations: Pages Updated = 807356          Pages Written = 66035          Updates/Pages Written = 12
                  Write I/O Operations = 8801     Immed (Sync) Writes = 1294     Pages/Write I/O = 7
                  VDWTH Reached = 81              DWTH Reached = 10              DMTH Reached = 0
                  Write Engine Not Avail = 0      Page-ins Due to Write = 84     HP Pages Written Sync = 0
                  HP Pgs Wrt Async-No ADM = 3077  HP Pgs Wrt Async - ADM = 4972455   HP Write Fail - No ADM = 0
                  HP Write Fail - ADM = 0

Parallel I/O:     Parallel Group Reqs = 0         I/O Parallel Downgraded = 0    Prefetch I/O Strm Reduce = 0
                  Max Prefetch I/O = 0            Prefetch Reduced to 1/2 = 0    Prefetch Reduced to 1/4 = 0

Sort/Merge:       Sort Merge Passes Req = 0       Workfile Requests = 0          Workfile Req Denied = 0
                  Merge Pass - Low Buff = 0       WF Prefetch Disabled = 0       WFCreate Fail - No Buff = 0
                  Workfile Maximum = 0            Destructive Read Req = 0       Destruct Rd Page Dequeue = 0

There are many reporting facilities available to track performance. The example above, Report 1, is from one of the widely used online monitors. Using this information, the percentage of pages read into the VP from the HP is only 2.7% - extremely low. If we just look at the number of pages read back from the HP and divide by 3600 seconds for one hour, this shows a saving of 37 I/Os per second. However, if we add the pages read back asynchronously both with and without ADMF, divide by 32 (a normal prefetch quantity), then add the synchronous reads and divide the sum by 3600, we get a saving of only 7 I/Os per second. While this I/O saving might be significant, it has to be weighed against the 84 megabytes of expanded storage used to provide it.
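The arithmetic above can be reproduced directly from the Report 1 counters. The 32-page prefetch quantity is the assumption stated in the text:

```python
# Figures taken from Report 1 (Buffer Pool 0, one-hour interval)
hp_sync         = 21_965    # HP Pgs Read Sync
hp_async_no_adm = 13_936    # HP Pgs Rd Async - No ADM
hp_async_adm    = 100_262   # HP Pgs Rd Async - ADM
interval_secs   = 3_600
prefetch_qty    = 32        # assumed normal prefetch quantity

pages_from_hp = hp_sync + hp_async_no_adm + hp_async_adm
naive_saving = pages_from_hp / interval_secs        # pages back per second, ~37

# Asynchronous pages come back in prefetch-sized blocks, so count those
# as one avoided I/O per block, plus one per synchronous HP read.
avoided_ios = hp_sync + (hp_async_no_adm + hp_async_adm) / prefetch_qty
io_saved_per_sec = avoided_ios / interval_secs      # ~7
```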

Log Manager
The Log Manager section has several fields that interest us. The first three are Reads from the Output Buffer, the Active Log, and the Archive Log. These all indicate that a Backout has occurred, with increasing levels of impact. Obviously, the least impact is caused by reads from the Output Buffer, because I/Os are not necessary. The Write Force counter is incremented only by Checkpoints or by closing a tablespace; in most reports, the value in this counter will be close to the number of Checkpoints. Unavailable Output Log Buffers indicates a critical performance problem that should be addressed immediately, since DB2 must stop all processing until a buffer becomes available. This is usually caused by Logging delays in the I/O subsystem, although it can also be impacted by an improperly sized buffer or a deferred write threshold set too high. This threshold is usually more important for large batch updaters than for online transactions that do a small amount of work and issue a Commit, since the Commit will drive the asynchronous logging process. It is not immediately clear to everyone that the default buffer size of 400K is really 100 4K buffers. When this is understood, it becomes clear that the recommended threshold for deferred write is 20 (20% of the number of buffers). Experience has shown this parameter is often set improperly. Another parameter that can affect a backout process is the input buffer size specified in the ZPARM. This should be equal to the output blocksize for the Archive Log: 28K for tape, and 20K (for 3380) or 24K (for 3390) if DASD archiving is used. Making the Input Buffer larger than the Archive blocksize will not help performance; this is one buffer, not multiple buffers, so making it larger only wastes space.

Locking Data
The first items on the agenda are Timeouts and Deadlocks, and we would like to see these counters at zero. We certainly expect that Lock Escalations will also be zero, especially for online environments during the prime shift. It would be nice to see Lock Suspensions at zero also, but some level of suspension is normal. The number of Suspensions should be around 0.1% or less of the number of Lock Requests. If it is substantially higher, then locking contention may be a significant performance impact. One additional factor that may impact locking is the overall dispatching priority of the IRLM address space. The number of Unlock Requests will always be less than the number of Lock Requests, because the Lock Manager can release multiple locks with one Unlock command. The amount of locking activity, and the rate of suspensions and deadlocks, will decrease noticeably when Version 4.1 Type 2 indexes are implemented. Another feature of Version 4.1 is Row Level Locking. This should only be used if you are sure it is necessary, since it will increase both CPU and locking overhead.
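The locking guidelines above amount to a simple exception test per Statistics interval. A sketch, with hypothetical interval counts:

```python
def locking_ok(lock_requests, suspensions, timeouts, deadlocks, escalations):
    """Apply the guidelines above: timeouts, deadlocks, and escalations
    should be zero; suspensions around 0.1% or less of lock requests."""
    return (timeouts == 0 and deadlocks == 0 and escalations == 0
            and suspensions <= 0.001 * lock_requests)

# Hypothetical interval: 5 million lock requests, 3,200 suspensions
healthy = locking_ok(lock_requests=5_000_000, suspensions=3_200,
                     timeouts=0, deadlocks=0, escalations=0)
```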

SQL Activity
The first concern should be that no DDL or BINDs occur during the prime shift, since these will have a major impact upon overall Catalog access, and are not compatible with any users of Dynamic SQL - they will lock each other out of the Catalog. The second, which is seen infrequently, is Incremental Binds. This may be caused either by a PLAN that was bound using VALIDATE(RUN), or by a PLAN that had been marked invalid for some reason, which DB2 must then re-validate for each execution.

Certainly we expect to see the counters for UREs Indoubt and Resolved as zero. The primary performance indicator in this area is the number of Checkpoints. An online system should take a Checkpoint every 15-20 minutes; this is a generally accepted industry guideline. If your system has no update activity, there should not be any Checkpoints occurring. The Checkpoint frequency is controlled by the LOGLOAD parameter in the ZPARM. Checkpointing too frequently is a major performance impact on the DB2 system and on the Logging function. If your system is Checkpointing too frequently, you will have to change the LOGLOAD parameter. The Write-No-Wait count from the Logging section can be used to calculate the LOGLOAD value: take your average value per minute during your prime shift, and multiply it by 20. This is your new LOGLOAD value.
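The LOGLOAD rule of thumb above is a one-line calculation. The shift length and counter total below are hypothetical:

```python
def suggested_logload(write_no_wait_total, shift_minutes):
    """Rule of thumb from the text: average Write-No-Wait count per
    minute during the prime shift, multiplied by 20 (roughly one
    checkpoint every 20 minutes at that logging rate)."""
    return int(write_no_wait_total / shift_minutes * 20)

# Hypothetical: 960,000 Write-No-Wait counts over an 8-hour (480-minute)
# prime shift -> 2,000 per minute -> LOGLOAD 40,000
logload = suggested_logload(960_000, 480)
```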

RID Pool
The RID Pool was an enhancement added with DB2 V2.2, and is used whenever the Optimizer determines that List Prefetch (LP) and/or Multiple Index Access (MIAP) will be the most efficient access path for your SQL request. The size of the RID Pool is specified in the ZPARM, and it can expand dynamically up to its maximum amount. The number in the ZPARM specifies the maximum number of blocks for the pool, and each block is 16K. Let's look at how to determine how much space may be necessary to use MIAP or LP. A RID (Row ID) is 4 bytes. If DB2 must sort 1 million RIDs, this is 4 megabytes - times two, because you need twice that amount of space to perform the sort, giving 8 megabytes. One user may not use more than 50% of the RID Pool, so the actual space necessary is 16 megabytes. There are several causes for LP or MIAP to fail dynamically, and the failure can be an expensive process, since DB2 must then restart the access using a different access path (usually a tablespace scan). The normal RDS section of the Statistics record (QXST) provides the basic information: Successful Usage, Failures for Storage, and Number of RIDs Exceeded Limit. Another section (QIST) provides additional information: Times Number of RIDs Exceeded RDS Limit, Times Number of RIDs Exceeded DM Limit, High Number of RID Blocks, Current Number of RID Blocks, and Number of Times Process Terminated for Concurrent Processes. The failure situations should be mapped back to the Accounting records, and the applications examined to determine a course of action to reduce, and possibly eliminate, the failures.
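The sizing arithmetic above generalizes to any RID count. A sketch of the calculation, including the conversion to 16K ZPARM blocks:

```python
RID_BYTES   = 4            # one Row ID
BLOCK_BYTES = 16 * 1024    # the RID Pool is allocated in 16K blocks

def rid_pool_bytes_needed(rids_to_sort):
    """Worst-case pool size for one LP/MIAP sort, per the text:
    4 bytes per RID, doubled for sort work space, doubled again
    because one user may not use more than 50% of the pool."""
    return rids_to_sort * RID_BYTES * 2 * 2

def rid_pool_blocks(pool_bytes):
    """Convert a byte requirement to 16K blocks (ceiling division)."""
    return -(-pool_bytes // BLOCK_BYTES)

# The 1-million-RID example from the text: 16 million bytes of pool
need = rid_pool_bytes_needed(1_000_000)
blocks = rid_pool_blocks(need)
```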

Application Performance Indicators

One of the most common calls from an application project is a request for a Performance Trace. This is usually not necessary to diagnose most performance problems. The first place to start is with an Accounting record (SMF 101, IFCID 3). There is a great deal of information available in these records. One vital point is that Accounting Trace Classes 2 and 3 (plus 7 and 8 if using packages) should be on, in addition to Class 1. This is my general recommendation for all systems. Yes, Classes 2 and 3 add more overhead, about 3-5%. However, if you are using an online monitor, most of them turn on Monitor Classes 2 and 3, or other specific IFCIDs that cause the same overhead to be generated, and will produce this data in the records anyway - so you might as well code it in the ZPARM. The only exception is when your online monitor can write all the records to its own datasets instead of SMF; eliminating SMF usage provides a significant performance improvement.

Since online transactions often have more noticeable performance problems, the following discussion primarily addresses this area; however, the same approach applies to TSO and batch applications. First we look at Class 1, 2, and 3 times. If the difference between Class 1 and Class 2 elapsed time is large, the problem is usually within portions of the application code not related to DB2. TSO is an exception here, since Class 1 time will continue to accrue as long as the thread exists. Typically, a well-performing online transaction will have a Class 2 elapsed time in the range of 95-98% of Class 1 elapsed. The Class 3 time is shown as synchronous I/O elapsed time, Lock/Latch wait time, and several other wait events. This is often the primary cause of the problem, and is related to the type and amount of work performed by the application. Divide the I/O elapsed time by the number of Synchronous Read I/Os. This provides the average I/O elapsed time, and indicates an I/O subsystem problem if greater than 28 ms for 3380, or 23 ms for 3390. Another obvious consideration here should be the number of read requests. This is directly related to the access path and the Buffer Pool hit ratio. How many Getpage Requests and Prefetch Requests are there (and why is it using Sequential Prefetch)? Does the application appear to be performing too much work? These may also be reduced if a more efficient access path is chosen. How about Lock and Latch elapsed time? If there are Lock Suspensions indicated in the Locking section, this is impacting response time. If the Lock/Latch time is only for Latches, it should be much less than .01 seconds. If it is greater, the IRLM may not be at the proper dispatching priority, or may not be receiving a proper level of service.
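The average-I/O check above can be sketched as follows, with the device thresholds taken from the text and hypothetical Accounting values:

```python
def avg_sync_io_ms(class3_io_wait_secs, sync_read_ios):
    """Average synchronous read I/O time from the Accounting record:
    Class 3 synchronous I/O wait divided by the number of sync reads."""
    return 1000.0 * class3_io_wait_secs / sync_read_ios

def io_subsystem_suspect(avg_ms, device):
    """Thresholds from the text: over 28 ms for 3380, over 23 ms for 3390."""
    return avg_ms > (28.0 if device == "3380" else 23.0)

# Hypothetical transaction: 2.5s of sync I/O wait over 100 reads -> 25 ms
avg = avg_sync_io_ms(class3_io_wait_secs=2.5, sync_read_ios=100)
```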

Subtract the total Class 3 time(s) from the Class 2 elapsed time, then compare the remaining Class 2 elapsed to the Class 2 CPU time. You should see a factor of 1.5 or less for a transaction environment. If it is higher, the CICS or IMS MPP region is not receiving an adequate level of service, or perhaps the processor complex is just too heavily utilized. Look for other Locking exceptions, such as the Maximum Concurrent Locks - any online application that holds more than 30 or so concurrent locks is a potential impact upon other users, and possibly other users of the same PLAN. Were there any failures when using Multiple Index Access Paths (MIAP) or List Prefetch (LP)? These will affect CPU and elapsed time, and generate much higher numbers of Getpages and I/O requests. Two last items to consider. First, how many Commits were issued? If there were several, find out why. Second, what kind of relationship is there between Open/Close Cursors and Fetch Requests? If there are several Open/Closes, and the number of Fetches is only a few higher, there may be inefficiencies in the application code.
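The "factor of 1.5" check above amounts to a simple ratio. A minimal sketch, with illustrative parameter names standing in for the accounting record fields:

```python
def not_accounted_ratio(class2_elapsed, class3_total, class2_cpu):
    """Ratio of (Class 2 elapsed minus total Class 3 wait) to Class 2 CPU.
    In a transaction environment this should be about 1.5 or less;
    much higher values suggest the CICS/IMS region is not receiving
    adequate service, or the processor is too heavily utilized."""
    remaining = class2_elapsed - class3_total
    return remaining / class2_cpu
```

For example, 1.0 second of Class 2 elapsed with 0.4 seconds of Class 3 wait and 0.3 seconds of Class 2 CPU yields a ratio of 2.0, which is above the guideline and worth investigating.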

Preventive Planning
There is that old adage about the "ounce of prevention". This is especially true with DB2 applications. The tools are available. The knowledge, methodologies, and procedures for good and efficient design exist. There is really no excuse for a poorly designed and poor performing application to appear on your system... yet poorly designed and coded applications are (unfortunately) the norm rather than the exception. This is especially true for purchased application software, which almost always uses I/O interface modules rather than direct SQL coding, and was usually ported from some other environment, typically VSAM. It is quite rare to find a purchased application that was designed for DB2 and performs efficiently.

The first level of defense is the Design Review. There should be several of these, conducted by knowledgeable DB2 personnel, to ensure the basic design is correct from a performance perspective. Performance measurement and tracking of an application should begin when the first SQL statements are being tested, and continue through Systems Test and Implementation. Catch the problems early, and get them fixed. The Explain facility must be used for every SQL statement to evaluate its Access Path, and PLANS must always be bound using 'EXPLAIN(YES)'. The key to success is having someone responsible for performance, with the authority to ensure that monitoring and any necessary application tuning will be done in a timely manner.

Beyond DB2

There is a wealth of information available outside the DB2 Statistics and Accounting records that can be used to monitor, evaluate, and track performance - not only for the DB2 system itself, but also for how DB2 affects the rest of your MVS environment. The DB2 system address spaces should be placed into their own Reporting Performance Groups, so that their storage and CPU consumption can be tracked.

DB2 requires a lot of storage to run efficiently, and for some systems, your performance will continue to improve as you give DB2 more storage. Several large sites use close to a Gigabyte just for the Virtual Buffer Pools, and more than 2 Gigabytes for HiperPools. The major storage consumer is the DBAS address space, containing the Virtual Buffer Pools, the RID Pool, and memory Sort Work space that is obtained dynamically for each user that performs a sort. The working set size of the DBAS should be tracked, and compared/plotted against the overall system paging rate using the RMF 71 and 72 records.
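The compare/plot step above is essentially a correlation question: does the DBAS working set grow along with the system paging rate? A minimal sketch, assuming the two series have already been reduced from RMF 71/72 interval data (the input values here are hypothetical, one sample per RMF interval):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length interval samples,
    e.g. DBAS working-set size (MB) vs. system paging rate. A strong
    positive value suggests DB2 storage growth is driving paging."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 across a day's intervals would argue for trimming the Virtual Buffer Pools (or adding central storage) before the paging rate starts to hurt DB2 itself.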

I/O Workload
DB2 is the first major software product to use the low level VSAM "Media Manager" interface. If your software levels are up to date, the SMF type 42 subtype 6 records provide information for DB2 access; however, they do not provide any information about the type of access - whether the I/O events are synchronous, asynchronous (prefetch), or a mix of both. Starting with DB2 Version 4.1, EXCP counts do appear in job-related SMF records.

Performance History Database

A Performance History Database (PHDB) is necessary for Trending, Capacity Planning, and to help analyze the causes of gradual or sudden performance degradations. One of the key words here is History. This means a period of time longer than a few days. Transaction volumes and mixes can vary considerably from day to day, and are impacted by normal business activity such as end-of-month or end-of-quarter processing. We probably need at least a few months of data to effectively perform trend analysis and capacity planning. We also do not need, nor can we afford, to keep every individual Accounting record. These should be summarized by PLAN, and possibly Connection, and can be maintained not only on a daily basis, but with summaries kept for specific measurement periods such as AM and PM peak periods.

Where should the data be stored? DB2 might be an attractive choice, except for some immediate problems - you will have to write the programs to combine the data prior to adding it to the DB2 database(s). The alternative would be to use the DB2 Load Utility - it's fast, it's easy, and no application code is necessary. However, this only allows you to load each detail record. Many online systems process more than a hundred thousand transactions per day, which is more than 2 million detail records per month. Loads of this size on a daily basis will cause problems. Should you have one or more Indexes on the Table? They will severely impact the load time, and require frequent reorganizations and Runstats Utility executions. Not having any Index requires a Tablespace scan. It also seems logical that you will perform your analyses during normal business hours, so your access of this data will impact the performance of the same applications you are measuring and tuning. So, it seems to me that the data belongs outside of DB2, where its use will not impact your critical online applications.
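The summarize-by-PLAN-and-period step described above can be sketched as follows. The record layout is hypothetical; a real implementation would map the SMF 101 fields into something like these dictionaries first:

```python
# Sketch: roll up accounting detail records into one summary row per
# (PLAN, measurement period) before storing in a history database.
# Keeping counts plus totals lets averages be derived later without
# retaining the detail records.
from collections import defaultdict

def summarize(records):
    """records: iterable of dicts with keys 'plan', 'period'
    (e.g. 'AM-peak'), 'elapsed', 'cpu', 'getpages', 'sync_ios'.
    Returns {(plan, period): totals-dict}."""
    totals = defaultdict(lambda: {"count": 0, "elapsed": 0.0, "cpu": 0.0,
                                  "getpages": 0, "sync_ios": 0})
    for r in records:
        t = totals[(r["plan"], r["period"])]
        t["count"] += 1
        t["elapsed"] += r["elapsed"]
        t["cpu"] += r["cpu"]
        t["getpages"] += r["getpages"]
        t["sync_ios"] += r["sync_ios"]
    return dict(totals)
```

A hundred thousand detail records per day collapse to a handful of rows per PLAN per period, which is the kind of volume a history file can carry for months.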
It would also be nice to be able to combine DB2 performance data with MVS performance data to obtain a complete picture of the environment and determine the impact of each system upon the other. This is all quite possible to attain; it's just a matter of writing some code.


There is a wealth of information available within the DB2 records (100, 101, 102) that is necessary for monitoring and tuning the system and applications. The area that all the IBM and other publications have neglected thus far is the relationship between different thresholds. A great many DB2 performance criteria depend upon several other functions in the system. It is necessary to obtain the entire picture before charging off and just changing parameters. For example, adding several megabytes to your Buffer Pool may improve the hit ratio and decrease your I/O rate, but may increase the system paging rate so that your overall performance, and the performance of other applications, will degrade rather than improve. Paging to Expanded storage is not usually too serious (depending upon the rate); however, if DB2 suffers a page fault when the needed page is on auxiliary storage, DB2 and all attached allied agents must wait for the page fault to be resolved. DB2, as a system, does not have as many tuning parameters as other DBMSs, such as IMS/VS. This certainly does not mean that it is less tuneable; it means that you have to understand how it functions, and how your changes will impact the rest of the MVS environment, before you can tune it effectively.
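The hit ratio mentioned above is derived from two counters that appear in the Statistics and Accounting records. A minimal sketch, with illustrative parameter names:

```python
def hit_ratio(getpages, pages_read):
    """Buffer pool hit ratio: fraction of Getpage requests satisfied
    without a read from DASD. pages_read should count all pages read,
    both synchronous and prefetch."""
    if getpages == 0:
        return 0.0
    return (getpages - pages_read) / getpages
```

For example, 1,000 Getpages with 100 pages read gives a 90% hit ratio. The point of the surrounding paragraph is that raising this number by enlarging the pool is only a win if the extra virtual storage does not push the system paging rate up in exchange.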

At least 70% of DB2 performance (and resource consumption) lies within the application design and SQL coding. SQL coding is not simple; there are many nuances and subtleties that can cause the Optimizer to select an efficient or inefficient Access Path. If your overall application design is fully normalized, and many of your online transactions must Join tables to satisfy their SQL requests, you WILL have performance problems. These application design and performance problems are easily preventable with an adequate level of understanding and commitment from management, and the use of available tools and knowledge.

It is much more cost effective to spend the time in design, and to correct inefficiencies before an application reaches production, than to be forced into a multi-million dollar processor upgrade because the application cannot provide adequate response to your end user community. The alternative to proper design is spending millions on hardware attempting to place a band-aid on a severed artery...

