DB2 Version 9.1 for z/OS
DB2 9.1 Utility Guide and Reference
SC18-9855-00
Note
Before using this information and the product it supports, be sure to read the general information under “Notices” on page 939.
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 6. CATENFM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Syntax and options of the control statement . . . . . . . . . . . . . . . . . . . . . . . . 53
Instructions for converting the catalog . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Concurrency and compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Sample DSN1PRNT control statements . . . . . . . . . . . . . . . . . . . . . . . . . . 835
DSN1PRNT output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
Programming interface information . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
Information resources for DB2 for z/OS and related products . . . . . . . . . . . 1005
About this book
This book contains usage information for the tasks of system administration,
database administration, and operation. It presents detailed information about
using utilities, specifying syntax (including keyword and parameter descriptions),
and starting, stopping, and restarting utilities. This book also includes job control
language (JCL) and control statements for each utility.
Important
In this version of DB2® for z/OS®, the DB2 Utilities Suite is available as an
optional product. You must separately order and purchase a license to such
utilities, and discussion of those utility functions in this publication is not
intended to otherwise imply that you have a license to them. See Chapter 2,
“DB2 utilities packaging,” on page 7 for packaging details.
Recommendation: Familiarize yourself with DB2 for z/OS prior to using this
book.
When referring to a DB2 product other than DB2 for z/OS, this information uses
the product’s full name to avoid ambiguity.
When you use a parameter for an object that is created by SQL statements (for
example, tables, table spaces, and indexes), identify the object by following the
SQL syntactical naming conventions. See the description for naming conventions in
DB2 SQL Reference.
How to read the syntax diagrams
The following conventions apply to the syntax diagrams that are used in this book:
v Required items appear on the horizontal line (the main path) of a diagram:
   required_item
v Optional items appear below the main path:
   required_item
      optional_item
  If an optional item appears above the main path, that item has no effect on the
  execution of the statement and is used only for readability:
      optional_item
   required_item
v If you can choose from two or more items, they appear vertically, in a stack.
  If you must choose one of the items, one item of the stack appears on the main
  path:
   required_item required_choice1
                 required_choice2
  If choosing one of the items is optional, the entire stack appears below the main
  path:
   required_item
                 optional_choice1
                 optional_choice2
  If one of the items is the default, it appears above the main path and the
  remaining choices are shown below:
                 default_choice
   required_item
                 optional_choice
                 optional_choice
v An arrow returning to the left, above the main line, indicates an item that can be
  repeated:
   required_item repeatable_item
  If the repeat arrow contains a comma, you must separate repeated items with a
  comma. A repeat arrow above a stack indicates that you can repeat the items in
  the stack.
v Sometimes a diagram must be split into fragments. The syntax fragment is
  shown separately from the main syntax diagram, but the contents of the
  fragment should be read as if they are on the main path of the diagram:
   fragment-name:
   required_item
      optional_name
v With the exception of XPath keywords, keywords appear in uppercase (for
  example, FROM). They must be spelled exactly as shown. XPath keywords are
  defined as lowercase names, and must be spelled exactly as shown. Variables
  appear in all lowercase letters (for example, column-name). They represent
  user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
  are shown, you must enter them as part of the syntax.
Accessibility features
The following list includes the major accessibility features in z/OS products,
including DB2 Version 9.1 for z/OS. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size
Keyboard navigation
You can access DB2 Version 9.1 for z/OS ISPF panel functions by using a keyboard
or keyboard shortcut keys.
For information about navigating the DB2 Version 9.1 for z/OS ISPF panels using
TSO/E or ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User’s Guide, and
the z/OS ISPF User’s Guide. These guides describe how to navigate each interface,
including the use of keyboard shortcuts or function keys (PF keys). Each guide
includes the default settings for the PF keys and explains how to modify their
functions.
v Visit the Web site at www.ibm.com/software/db2zos/library.html. This Web
site has an online reader comment form that you can use to send comments.
v You can also send comments by using the feedback link at the footer of each
page in the Information Management Software for z/OS Solutions Information
Center at http://publib.boulder.ibm.com/infocenter/db2zhelp.
Changes to the stand-alone utilities in DB2 for z/OS, Version 9.1 are included in
the following chapters:
Chapter 36, “DSNJU003 (change log inventory),” on page 729
Chapter 37, “DSNJU004 (print log map),” on page 753
The following appendixes have changed for DB2 for z/OS, Version 9.1:
Appendix A, “Limits in DB2 for z/OS,” on page 851.
Appendix B, “DB2-supplied stored procedures,” on page 857
Appendix C, “Advisory or restrictive states,” on page 895
Appendix D, “Running the productivity-aid sample programs,” on page 905
All technical changes to the text are indicated by vertical bars (|) in the left
margin.
A process is represented to DB2 by a set of identifiers (IDs). What the process can
do with DB2 is determined by the privileges and authorities that can be held by its
identifiers. The phrase “privilege set of a process” means the entire set of privileges
and authorities that can be used by the process in a specific situation.
If you use the access control authorization exit routine, that exit routine might
control the authorization rules, rather than the rules that are documented for each
utility.
For detailed information about target object support, see the “Concurrency and
compatibility” section in each utility chapter.
You can populate table spaces whose data sets are not yet defined by using the
LOAD utility with the RESUME keyword, the REPLACE keyword, or both.
Using LOAD to populate these table spaces results in the following actions:
1. DB2 allocates the data sets.
2. DB2 updates the SPACE column in the catalog table to show that data sets
exist.
For a partitioned table space, all partitions are allocated even if the LOAD utility is
loading only one partition. Avoid attempting to populate a partitioned table space
with concurrent LOAD PART jobs until after one of the jobs has caused all the data
sets to be created.
Online utilities that encounter an undefined target object might issue informational
message DSNU185I, but processing continues.
The following online utilities issue informational message DSNU185I when a table
space or index space with the DEFINE NO attribute is encountered. The object is
not processed.
v CHECK DATA
v CHECK INDEX
v COPY
v MERGECOPY
v MODIFY RECOVERY
v QUIESCE
v REBUILD INDEX
v RECOVER
v REORG INDEX
v REORG TABLESPACE
v REPAIR, but not REPAIR DBD
v RUNSTATS TABLESPACE INDEX(ALL) 1
v RUNSTATS INDEX 1
v UNLOAD
Notes:
1. RUNSTATS recognizes DEFINE NO objects and updates the catalog’s access
path statistics to reflect the empty objects.
You cannot use stand-alone utilities on objects whose data sets have not been
defined.
Running any of the following utilities on encrypted data might produce
unexpected results:
v CHECK DATA
v LOAD
v REBUILD INDEX
v REORG TABLESPACE
v REPAIR
v RUNSTATS
v UNLOAD
v DSN1PRNT
| All other utilities are available as a separate product called the DB2 Utilities Suite
| (5655-N97, FMIDs JDB991K), which includes the following utilities:
| v BACKUP SYSTEM
| v CHECK DATA
| v CHECK INDEX
| v CHECK LOB
| v COPY
| v COPYTOCOPY
| v EXEC SQL
| v LOAD
| v MERGECOPY
| v MODIFY RECOVERY
| v MODIFY STATISTICS
| v REBUILD INDEX
| v RECOVER
| v REORG INDEX
| v REORG TABLESPACE
| v RESTORE SYSTEM
| v RUNSTATS
| v STOSPACE
| v UNLOAD
All DB2 utilities operate on catalog, directory, and sample objects, without
requiring any additional products.
| The SMP/E RECEIVE job, DSNRECVK, loads the DB2 Utilities Suite Version 9
| program modules, macros, and procedures into temporary data sets (SMPTLIBs). If
| these jobs fail or abnormally terminate, correct the problem and rerun the jobs. Use
| job DSNRECV1, which is described in DB2 Installation Guide, as a guide to help
| you with the RECEIVE job.
| The SMP/E APPLY job, DSNAPPLK, copies and link-edits the program modules,
| macros, and procedures for the DB2 Utilities Suite Version 9 into the DB2 target
| libraries. Use job DSNAPPL1, which is described in DB2 Installation Guide, as a
| guide to help you with the APPLY job.
| The SMP/E ACCEPT job, DSNACCPK, copies the program modules, macros, and
| procedures for the DB2 Utilities Suite Version 9 into the DB2 distributed libraries.
| Use job DSNACEP1, which is described in DB2 Installation Guide, as a guide to
| help you with the ACCEPT job.
Creating utility control statements is the first step that is required to run an online
utility.
After creating the utility statements, use one of the following methods for invoking
the online utilities:
1. “Using the DB2 Utilities panel in DB2I” on page 21
2. “Using the DSNU CLIST command in TSO” on page 24
3. “Using the supplied JCL procedure (DSNUPROC)” on page 31
4. “Creating the JCL data set yourself by using the EXEC statement” on page 34
5. “Invoking utilities as a stored procedure (DSNUTILS)” on page 860 or
“DSNUTILU stored procedure” on page 870
Requirement: In the JCL for all utility jobs, specify a load library that is at a
maintenance level that is compatible with the DB2 system. Otherwise, errors can
occur.
For the least involvement with JCL, use either the first or second method, and then
edit the generated JCL to alter or add necessary fields on the JOB or ROUTE cards
before submitting the job. Both of these methods require TSO, and the first method
also requires access to the DB2 Utilities Panel in DB2 Interactive (DB2I).
If you want to work with JCL or create your own JCL, choose the third or fourth
method.
To invoke online utilities from a DB2 application program, use the fifth method.
For more information about these stored procedures and other stored procedures
that are supplied by DB2, see Appendix B, “DB2-supplied stored procedures,” on
page 857.
Create the utility control statements with the ISPF/PDF edit function. Use the rules
that are listed in “Control statement coding rules” on page 18.
The options that you can specify after the online utility name depend on which
online utility you use. To specify a utility option, specify the option keyword,
followed by its associated parameter or parameters, if any. The parameter value
can be a keyword. You need to enclose the values of some parameters in
parentheses. The syntax diagrams for utility control statements that are included in
this book show parentheses where they are required.
You can specify more than one utility control statement in the SYSIN stream.
However, if any of the control statements returns a return code of 8 or greater, the
subsequent statements in the job step are not executed.
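For example, in a SYSIN stream like the following sketch (which reuses the sample
table space name that appears later in this chapter), the COPY statement is not
executed if the QUIESCE statement ends with a return code of 8 or greater:

//DSNUPROC.SYSIN DD *
  QUIESCE TABLESPACE DSN8D91A.DSN8S91D
  COPY TABLESPACE DSN8D91A.DSN8S91D
       SHRLEVEL REFERENCE
/*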
When you specify multiple numeric values in a utility control statement that are
meant to be delimited, you must delimit these values with a comma (","),
regardless of the definition of DECIMAL in DSNHDECP. Likewise, when you
specify a decimal number in a utility control statement, you must use a period
("."), regardless of the definition of DECIMAL in DSNHDECP.
You can enter comments within the SYSIN stream. Comments must begin with two
hyphens (--) and are subject to the following rules:
v You must use two hyphens on the same line with no space between them.
v You can start comments wherever a space is valid, except within a delimiter
token.
v The end of a line terminates a comment.
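The following sketch shows both placements of comments in a SYSIN stream; the
statement reuses the sample table space name from this chapter:

  -- This entire line is a comment.
  COPY TABLESPACE DSN8D91A.DSN8S91D
       FULL NO SHRLEVEL CHANGE    -- A comment can also follow a statement.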
| For output data sets, the online utilities determine both the logical record length
| and the record format. Any specified values for LRECL or RECFM are ignored. If
| you supply block size, that size is used; otherwise, the utility lets the system
| determine the optimal block size for the storage device. DB2 supports the large
| block interface (LBI) that allows block sizes that are greater than 32 KB on certain
| tape drives. Partitioned data sets (PDS) are not allowed for output data sets. The
| TAPEBLKSZLIM parameter of the DEVSUPxx member of SYS1.PARMLIB controls
| the block size limit for tapes. See the z/OS MVS Initialization and Tuning Guide for
| more details.
| For both input and output data sets, the online utilities use the value that you
| supply for the number of buffers (BUFNO), with a maximum of 99 buffers. The
| default number of buffers is 20. The utilities set the number of channel programs
| equal to the number of buffers. The parameters that specify the buffer size
| (BUFSIZE) and the number of channel programs (NCP) are ignored. If you omit
| any DCB parameters, the utilities choose default values.
| Increasing the number of buffers (BUFNO) can result in an increase in real storage
| utilization and page fixing below the 16-MB line.
| Restriction: DB2 does not support the undefined record format (RECFM=U) for
| any data set.
| Because you might need to restart a utility, take the following precautions when
| defining the disposition of data sets:
| v Use DISP=(NEW,CATLG,CATLG) or DISP=(MOD,CATLG) for data sets that you
| want to retain.
| v Use DISP=(MOD,DELETE,CATLG) for data sets that you want to discard after
| utility execution.
| v Use DISP=(NEW,DELETE) for DFSORT™ SORTWKnn data sets, or refer to
| DFSORT Application Programming: Guide for alternatives.
| v Do not use temporary data set names.
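For example, the following DD statements sketch these dispositions for an image
copy data set that is retained, a work data set that is discarded after the utility
runs, and a DFSORT work data set; the data set names and space allocations are
illustrative only:

//SYSCOPY  DD DSN=MYUSER.IMAGCOPY.DATASET,DISP=(NEW,CATLG,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(15,5))
//SYSUT1   DD DSN=MYUSER.SYSUT1.WORK,DISP=(MOD,DELETE,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(15,5))
//SORTWK01 DD DSN=MYUSER.SORTWK01,DISP=(NEW,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(15,5))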
| See Table 159 on page 861 and Table 160 on page 861 for information about the
| default data dispositions that are specified for dynamically allocated data sets.
All other utilities ignore the row-level granularity. They check only for
authorization to operate on the table space; they do not check row-level
authorization. For more information about multilevel security, see Part 3 of DB2
Administration Guide.
| Restriction: You cannot use the DB2 Utilities panel in DB2I to submit a BACKUP
| SYSTEM job, a COPYTOCOPY job, a RESTORE SYSTEM job, or a COPY job for a
| list of objects.
If your site does not have default JOB and ROUTE statements, you must edit the
JCL to define them. If you edit the utility job before submitting it, you must use
the ISPF editor and submit your job directly from the editor. Use the following
procedure:
1. Create the utility control statement for the online utility that you intend to
execute, and save it in a sequential or partitioned data set.
For example, the following utility control statement specifies that the COPY
utility is to make an incremental image copy of table space
DSN8D91A.DSN8S91D with a SHRLEVEL value of CHANGE:
COPY TABLESPACE DSN8D91A.DSN8S91D
FULL NO
SHRLEVEL CHANGE
For the rest of this example, suppose that you save the statement in the
default data set, UTIL.
2. From the ISPF Primary Option menu, select the DB2I menu.
3. On the DB2I Primary Option menu, select the UTILITIES option. Items that you must
specify are highlighted on the DB2 Utilities panel, as shown in Figure 1.
Figure 1. DB2 Utilities panel
4. Fill in field 1 with the function that you want to execute. In this example, you
want to submit the utility job, but you want to edit the JCL first, so specify
EDITJCL. After you edit the JCL, you do not need to return to this panel to
submit the job. Instead, type SUBMIT on the editor command line.
5. Ensure that Field 2 is a unique identifier for your utility job. The default value
is TEMP. In this example, that value is satisfactory; leave it as is.
| 6. Fill in field 3 with the utility that you want to run.
| In this example, specify COPY.
7. Fill in field 4 if you want to use an input data set other than the default data
set. Unless you enclose the data set name between apostrophes, TSO adds
your user identifier as a prefix. In this example, specify UTIL, which is the
default data set.
8. Change field 5 if this job restarts a stopped utility or if you want to execute a
utility in PREVIEW mode. In this example, leave the default value, NO.
9. Specify in field 6 whether you are using LISTDEF statements or TEMPLATE
statements in this utility. If you specify YES for LISTDEF or TEMPLATE, DB2
displays the Control Statement Data Set Names panel, but the field entries are
optional.
| 10. Fill in field 7 with the data set name of the DB2 subsystem library if you want
| the generated JCL to use a DB2 subsystem library other than the default.
11. Press Enter.
Enter output data sets for local/current site for COPY, MERGECOPY,
LOAD, or REORG:
3 COPYDSN ==> ABC
4 COPYDSN2 ==>
Enter output data sets for recovery site for COPY, LOAD, or REORG:
5 RCPYDSN1 ==> ABC1
6 RCPYDSN2 ==>
Enter output data sets for REORG or UNLOAD:
7 PUNCHDSN ==>
PRESS: ENTER to process END to exit HELP for more information
If the Data Set Names panel is displayed, complete the following steps. If you do
not specify COPY, LOAD, MERGECOPY, REORG TABLESPACE, or UNLOAD in
field 3 of the DB2 Utilities panel, the Data Set Names panel is not displayed; skip
this procedure and continue with Figure 3 on page 24.
1. Fill in field 1 if you are running LOAD, REORG, or UNLOAD. For LOAD, you
must specify the data set name that contains the records that are to be loaded.
For REORG or UNLOAD, you must specify the unload data set. In this
example, you do not need to fill in field 1, because you are running COPY.
2. Fill in field 2 if you are running LOAD or REORG with discard processing, in
which case you must specify a discard data set. In this example, you do not
need to fill in field 2, because you are running COPY.
3. Fill in field 3 with the primary output data set name for the local site if you are
running COPY, LOAD, or REORG, or with the current site if you are running
MERGECOPY. The DD name that the panel generates for this field is SYSCOPY.
This is an optional field for LOAD and for REORG with SHRLEVEL NONE;
this field is required for COPY, for MERGECOPY, and for REORG with
SHRLEVEL REFERENCE or CHANGE. In this example, the primary output
data set name for the local site is ABC.
4. Fill in field 4 with the backup output data set name for the local site if you are
running COPY, LOAD, or REORG, or the current site if you are running
MERGECOPY. The DD name that the panel generates for this field is
SYSCOPY2. This is an optional field. In this example, you do not need to fill in
field 4.
5. Fill in field 5 with the primary output data set for the recovery site if you are
running COPY, LOAD, or REORG. The DD name that the panel generates for
this field is SYSRCOPY1. This is an optional field. In this example, the primary
output data set name for the recovery site is ABC1.
The Control Statement Data Set Names panel, which is shown in Figure 3, is
displayed if either LISTDEF YES or TEMPLATE YES is specified on the DB2
Utilities panel.
Enter the data set name for the LISTDEF data set (SYSLISTD DD):
1 LISTDEF DSN ===>
OPTIONAL or IGNORED
Enter the data set name for the TEMPLATE data set (SYSTEMPL DD):
2 TEMPLATE DSN ===>
OPTIONAL or IGNORED
1. Fill in field 1 to specify the data set that contains a LISTDEF control statement.
The default is the SYSIN data set. This field is ignored if you specified NO in
the LISTDEF? field in the DB2 Utilities panel.
For information about using a LISTDEF control statement, see Chapter 15,
“LISTDEF,” on page 185.
2. Fill in field 2 to specify the data set that contains a TEMPLATE. The default is
the SYSIN data set. This field is ignored if you specified NO in the
TEMPLATE? field in the DB2 Utilities panel.
For information about using TEMPLATE, see Chapter 31, “TEMPLATE,” on
page 641.
| Restriction: You cannot use the DSNU CLIST command to submit a COPY job for
| a list of objects.
The CLIST command creates a job that performs only one utility operation.
However, you can invoke the CLIST command for each utility operation that you
need, and then edit and merge the outputs into one job or step.
You can execute the DSNU CLIST command from the TSO command processor or
from the DB2I Utilities panel.
The DSNU CLIST command accepts keywords that include the following:
   CONTROL(control-option)
   COPYDSN(data-set-name)
   COPYDSN2(data-set-name)
   RCPYDSN1(data-set-name)
   RCPYDSN2(data-set-name)
   RECDSN(data-set-name)
   PUNCHDSN(data-set-name)
   EDIT(NO | SPF | TSO)
   RESTART(NO | CURRENT | PHASE | PREVIEW)
   UNIT(unit-name)
   VOLUME(vol-ser)
   LIB(data-set-name)
The defaults are EDIT(NO), RESTART(NO), and UNIT(SYSDA).
Assigns a unit address, a generic device type, or a user-assigned group name
for a device on which a new temporary or permanent data set resides. When
the CLIST command generates the JCL, it places unit-name after the UNIT
clause of the generated DD statement. The default is SYSDA.
VOLUME (vol-ser)
Assigns the serial number of the volume on which a new temporary or
permanent data set resides. When the CLIST command generates the JCL, it
places vol-ser after the VOL=SER clause of the generated DD statement. If you
omit VOLUME, the VOL=SER clause is omitted from the generated DD
statement.
| LIB (data-set-name)
| Specifies the data set name of the DB2 subsystem library. The value that you
| specify is used as the LIB parameter value when the DSNUPROC JCL
| procedure is invoked.
Figure 4. Control file DSNUCOP.CNTL. This is an example of the JCL data set before editing.
The CLIST command builds the necessary JCL DD statements. Those statements
vary depending on the utility that you execute. Data sets that might be required
are listed under “Data sets that online utilities use” on page 19. The following DD
statements are generated by the CLIST command:
SYSPRINT DD SYSOUT=A
Defines SYSPRINT as SYSOUT=A. Utility messages are sent to the
SYSPRINT data set. You can use the TSO command to control the disposition
of the SYSPRINT data set. For example, you can send the data set to your
terminal. For more information, see z/OS TSO/E Command Reference.
UTPRINT DD SYSOUT=A
Defines UTPRINT as SYSOUT=A. If any utility requires a sort, it executes
DFSORT. Messages from that program are sent to UTPRINT.
SYSIN DD *
Defines SYSIN. To build the SYSIN DD * job stream, DSNU copies the data set
that is named by the INDSN parameter. The INDSN data set does not change,
and you can reuse it when the DSNU procedure has finished running.
If you use a ddname that is not the default on a utility statement that you use, you
must change the ddname in the JCL that is generated by the DSNU procedure. For
example, in the REORG TABLESPACE utility, the default option for UNLDDN is
SYSREC, and DSNU builds a SYSREC DD statement for REORG TABLESPACE. If
you use a different value for UNLDDN, you must edit the JCL data set and change
SYSREC to the ddname that you used.
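For example, if your REORG TABLESPACE statement names a nondefault unload
ddname, rename the generated SYSREC DD statement to match; the ddname
MYUNLD and the data set name in this sketch are hypothetical:

REORG TABLESPACE DSN8D91A.DSN8S91D UNLDDN MYUNLD

//MYUNLD DD DSN=MYUSER.REORG.UNLOAD,DISP=(MOD,DELETE,CATLG),
//          UNIT=SYSDA,SPACE=(CYL,(15,5))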
When you finish editing the data set, you can either save changes to the data set
(by issuing SAVE), or instruct the editor to ignore all changes.
The SUBMIT parameter specifies whether to submit the JCL data set as a
background job. The temporary data set that holds the JCL is reused. If
you want to submit more than one job that executes the same utility, you must
rename the JCL data sets and submit them separately.
Examples
Example 1: The following CLIST command statement generates a data set that is
called authorization-id.DSNURGT.CNTL and that contains JCL statements that
invoke the DSNUPROC procedure.
Example 2: The following example shows how to invoke the CLIST command for
the COPY utility.
%DSNU
UTILITY (COPY)
INDSN (’MYCOPY(STATEMNT)’)
COPYDSN (’MYCOPIES.DSN8D91A.JAN1’)
EDIT (TSO)
SUBMIT (YES)
UID (TEMP)
RESTART (NO)
To execute the DSNUPROC procedure, write and submit a JCL data set like the
one that the DSNU CLIST command builds (an example is shown in Figure 4 on
page 29). In your JCL, the EXEC statement executes the DSNUPROC procedure.
DSNUPROC syntax
The EXEC statement can invoke a procedure that contains the required JCL, or it
can be of the following form:
//stepname EXEC PGM=DSNUTILB,PARM=’system,[uid],[utproc]’
The brackets, [ ], indicate optional parameters. The parameters have the following
meanings:
DSNUTILB
Specifies the utility control program. The program must reside in an
APF-authorized library.
system Specifies the DB2 subsystem.
uid The unique identifier for your utility job. Do not reuse the utility ID of a
stopped utility that has not yet been terminated. If you do use the same
utility ID to invoke a different utility, DB2 tries to restart the original
stopped utility with the information that is stored in the SYSUTIL directory
table.
utproc The value of the UTPROC parameter in the DSNUPROC procedure.
Specify this option only when you want to restart the utility job. Specify:
'RESTART'
To restart at the most recent commit point. This option has the
same meaning as ’RESTART(CURRENT).’
'RESTART(CURRENT)'
To restart the utility at the most recent commit point. This option
has the same meaning as ’RESTART.’
'RESTART(PHASE)'
To restart at the beginning of the phase that executed most
recently.
'RESTART(PREVIEW)'
To restart the utility in preview mode. While in PREVIEW mode,
the utility checks for syntax errors in all utility control statements,
but normal utility execution does not take place.
For the example in Figure 5 on page 33, you can use the following EXEC statement:
//stepname EXEC PGM=DSNUTILB,PARM='DSN,TEMP'
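As a sketch based on the same parameter form, the following EXEC statement
would restart that job at the beginning of the phase that executed most recently;
check the restart considerations for the specific utility before using it:

//stepname EXEC PGM=DSNUTILB,PARM='DSN,TEMP,RESTART(PHASE)'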
Use the DB2 DISPLAY UTILITY command to check the current status of online
utilities. Figure 6 shows an example of the output that the DISPLAY UTILITY
command generates. In the example output, DB2 returns a message that indicates
the member name (A), utility identifier (B), utility name (C), utility phase
(D), the number of pages or records that are processed by the utility (E) (see
note 1), the number of objects in the list (F), the last object that started (G), and
the utility status (H). The output might also report additional information about an
executing utility, such as log phase estimates or utility subtask activity.
1. In a data sharing environment, the number of records is current when the command is issued from the same member on which
the utility is executing. When the command is issued from a different member, the count might lag substantially. For some
utilities in some build phases, the count number is not updated when the command is issued from a different member.
To determine why a utility failed to complete, consider the following problems that
can cause a failure during execution of the utility:
v Problem: DB2 terminates the utility job step and any subsequent utility steps.
Solution: Submit a new utility job to execute the terminated steps. Use the same
utility identifier for the new job to ensure that no duplicate utility job is running.
v Problem: DB2 does not execute the particular utility function, but prior utility
functions are executed.
Solution: Submit a new utility step to execute the function.
v Problem: DB2 places the utility function in the stopped state.
Solution: Restart the utility job step at either the last commit point or the
beginning of the phase by using the same utility identifier. Alternatively, use a
TERM UTILITY (uid) command to terminate the job step and resubmit it.
v Problem: DB2 terminates the utility and issues return code 8.
Solution: One or more objects might be in a restrictive or advisory status. See
Appendix C, “Advisory or restrictive states,” on page 895 for more information
on resetting the status of an object.
Alternatively, a DEADLINE condition in online REORG might have terminated
the reorganization.
For more information about the DEADLINE condition, see the description of this
option in Chapter 24, “REORG INDEX,” on page 419 or in Chapter 25, “REORG
TABLESPACE,” on page 449.
If the utility supports parallelism, it can use additional threads to support the
parallel subtasking. Consider increasing the values of subsystem parameters that
control threads, such as MAX BATCH CONNECT and MAX USERS. These
parameters are on installation panel DSNTIPE and are described in DB2 Installation
Guide.
See DB2 Performance Monitoring and Tuning Guide for a description of the claim
classes and the use of claims and drains by online utilities.
Submitting online utility jobs: When you submit a utility job, you must specify the
name of the DB2 subsystem to which the utility is to attach or the group attach
name. If you do not use the group attach name, the utility job must run on the
z/OS system where the specified DB2 subsystem is running. Ensure that the utility
job runs on the appropriate z/OS system. You must use one of several z/OS
installation-specific statements to make sure this happens. These include:
v For JES2 multi-access spool (MAS) systems, insert the following statement into
the utility JCL:
/*JOBPARM SYSAFF=cccc
v For JES3 systems, insert the following statement into the utility JCL:
//*MAIN SYSTEM=(main-name)
The preceding JCL statements are described in z/OS MVS JCL Reference. Your
installation might have other mechanisms for controlling where batch jobs run,
such as by using job classes.
Stopping and restarting utilities: In a data sharing environment, you can terminate
an active utility by using the TERM UTILITY command only on the DB2
subsystem on which it was started. If a DB2 subsystem fails while a utility is in
progress, you must restart that DB2 subsystem, and then you can terminate the
utility from any system.
You can restart a utility only on a member that is running the same DB2 release
level as the member on which the utility job was originally submitted. The same
utility ID (UID) must be used to restart the utility; that UID is unique within a
data sharing group.
Use the TERM UTILITY command to terminate the execution of an active utility or
to release the resources that are associated with a stopped utility.
Restriction: If the utility was started in a previous release of DB2, issue the TERM
UTILITY command from that release.
After you issue the TERM UTILITY command, you cannot restart the terminated
utility job. The objects on which the utility was operating might be left in an
indeterminate state. In many cases, you cannot rerun the same utility without first
recovering the objects on which the utility was operating. The situation varies,
depending on the utility and the phase that was in process when you issued the
command. These considerations about the state of the object are particularly
important when terminating the COPY, LOAD, and REORG utilities.
In a data sharing environment, TERM UTILITY is effective for active utilities only
when the command is submitted from the DB2 subsystem on which the utility was
started. You can terminate a stopped utility from any active member of the data
sharing group.
If the utility is active, TERM UTILITY terminates it at the next commit point. It
then performs any necessary cleanup operations.
You might choose to put TERM UTILITY in a conditionally executed job step; for
example, if you never want to restart certain utility jobs. Figure 7 shows a sample
job stream.
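The following sketch illustrates the general approach that such a job stream can
take; the subsystem name, utility ID, sample table space, and condition-code test
are illustrative assumptions, and the terminate step is issued through the TSO
batch terminal monitor program:

//UTIL     EXEC DSNUPROC,SYSTEM=V91A,UID='TEMP',UTPROC=''
//DSNUPROC.SYSIN DD *
  COPY TABLESPACE DSN8D91A.DSN8S91D
/*
//* Run the terminate step only if the utility step fails (RC >= 8) or abends.
//TERM     EXEC PGM=IKJEFT01,COND=((8,GT,UTIL),EVEN)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(V91A)
  -TERM UTILITY(TEMP)
  END
/*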
Alternatively, consider specifying the TIMEOUT TERM parameter for some Online
REORG situations.
Before you restart a job, correct the problem that caused the utility job to stop.
Then resubmit the job. DB2 recognizes the utility ID and restarts the utility job if
possible. DB2 retrieves information about the stopped utility from the SYSUTIL
directory table.
Do not reuse the utility ID of a stopped utility that has not yet been terminated,
unless you want to restart that utility. If you do use the same utility ID to invoke a
different utility, DB2 tries to restart the original stopped utility with the
information that is stored in the SYSUTIL directory table.
For each utility, DB2 uses the default RESTART value that is specified in Table 3.
For a complete description of the restart behavior for an individual utility,
including any phase restrictions, refer to the restart section for that utility.
You can override the default RESTART value by specifying the RESTART
parameter in the original JCL data set. DB2 ignores the RESTART parameter if you
are submitting the utility job for the first time. For instructions on how to specify
this parameter, see “Using the RESTART parameter” on page 40.
Table 3. Default RESTART values for each utility

Utility              Default RESTART value
BACKUP SYSTEM        RESTART(CURRENT)
CATMAINT             No restart
CHECK DATA           RESTART(CURRENT)
CHECK INDEX          RESTART(CURRENT)
CHECK LOB            RESTART(CURRENT)
COPY                 RESTART(CURRENT)
COPYTOCOPY           RESTART(CURRENT)
DIAGNOSE             Restarts from the beginning
EXEC SQL             Restarts from the beginning
LISTDEF              Restarts from the beginning
LOAD                 RESTART(CURRENT) or RESTART(PHASE) (see note 1)
MERGECOPY            RESTART(PHASE)
MODIFY RECOVERY      RESTART(CURRENT)
MODIFY STATISTICS    RESTART(CURRENT)
Notes:
1. The RESTART value that DB2 uses for these utilities depends on the situation.
Refer to the restart section for each utility for a complete explanation.
If you cannot restart a utility job, you might have to terminate it to make the data
available to other applications. To terminate a utility job, issue the DB2 TERM
UTILITY command. Use the command only if you must start the utility from the
beginning.
To add the RESTART parameter, you can use one of the following three methods:
v Using DB2I. Add the RESTART parameter by following these steps:
1. Access the DB2 Utilities panel.
2. Fill in the panel fields, as documented in Figure 2 on page 23, except for field
5.
3. Change field 5 to CURRENT or PHASE, depending on the desired method of
restart.
4. Press Enter.
v Using the DSNU CLIST command. When you invoke the DSNU CLIST
command, as described in “Using the DSNU CLIST command in TSO” on page
24, change the value of the RESTART parameter by specifying RESTART,
RESTART(CURRENT), or RESTART(PHASE).
Use caution when changing LISTDEF lists prior to a restart. When DB2 restarts list
processing, it uses a saved copy of the list. Modifying the LISTDEF list that is
referred to by the stopped utility has no effect. Only control statements that follow
the stopped utility are affected.
Do not change the position of any other utilities that have been executed.
If the utility that you are restarting was processing a LIST, you will see a list size
that is greater than 1 on the DSNU100 or DSNU105 message. DB2 checkpoints the
expanded, enumerated list contents prior to executing the utility. DB2 uses this
checkpointed list to restart the utility at the point of failure. After a successful
restart, the LISTDEF is re-expanded before subsequent utilities in the same job step
use it.
Restart is not always possible. The restrictions applying to the phases of each
utility are discussed under the description of each utility.
The BACKUP SYSTEM utility uses copy pools. A copy pool is a defined set of
storage groups that contain data that DFSMShsm can back up and recover
collectively. For more information about copy pools, see z/OS DFSMSdfp Storage
Administration Reference.
Each DB2 subsystem can have up to two copy pools, one for databases and one for
logs. BACKUP SYSTEM copies the volumes that are associated with these copy
pools at the time of the copy.
| With the BACKUP SYSTEM utility you can manage the dumping of system-level
| backups (copy of the database, the log copy pools, or both) to tape. To use this
| functionality, you need to have z/OS DFSMShsm V1R8 or above.
Output: The output for BACKUP SYSTEM is the copy of the volumes on which the
DB2 data and log information resides. The BACKUP SYSTEM history is recorded
in the bootstrap data sets (BSDSs).
Authorization required: To execute this utility, you must use a privilege set that
includes SYSCTRL or SYSADM authority.
When you specify BACKUP SYSTEM, you can specify only the following
statements in the same step:
v DIAGNOSE
v OPTIONS PREVIEW
v OPTIONS OFF
v OPTIONS KEY
v OPTIONS EVENT WARNING
In addition, BACKUP SYSTEM must be the last statement in SYSIN.
Syntax diagram
BACKUP SYSTEM
   [ FULL | DATA ONLY ]                              (FULL is the default)
   [ ESTABLISH FCINCREMENTAL | END FCINCREMENTAL ]
   [ FORCE | DUMP [dumpclass-spec] [FORCE] | DUMPONLY [TOKEN (X'byte-string')] [dumpclass-spec] ]

dumpclass-spec:
   DUMPCLASS ( dc1 [, dc2] [, dc3] [, dc4] [, dc5] )
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
FULL
Indicates that you want to copy both the database copy pool and the log copy
pool. The default is FULL.
You must ensure that the database copy pool is set up to contain the volumes
for the databases and the associated integrated catalog facility (ICF) catalogs.
You must also ensure that the log copy pool is set up to contain the volumes
for the BSDSs, the active logs, and the associated catalogs.
Use BACKUP SYSTEM FULL to allow for recovery of both data and logs. You
can use the RESTORE SYSTEM utility to recover the data. However, RESTORE
SYSTEM does not restore the logs; the utility only applies the logs. If you want
to restore the logs, you must use another method to restore them.
DATA ONLY
Indicates that you want to copy only the database copy pool. You must ensure
that the database copy pool is set up to contain the volumes for the databases
and the associated ICF catalogs.
| ESTABLISH FCINCREMENTAL
| Specifies that a persistent incremental FlashCopy® relationship is to be
| established, if none exists, for source copy volumes in the database copy pool.
| Use this keyword once to establish the persistent incremental FlashCopy
| relationships. Subsequent invocations of BACKUP SYSTEM (without this
| keyword) will automatically process the persistent incremental FlashCopy
| relationship.
| END FCINCREMENTAL
| Specifies that a last incremental FlashCopy is to be taken and that the persistent
| incremental FlashCopy relationship is to be withdrawn for all of the volumes in
| the database copy pool. Use this keyword only if no further incremental
| FlashCopy backups of the database copy pool are desired.
| FORCE
| Indicates that you want to overwrite the oldest DFSMShsm version of the fast
| replication copy of the database copy pool. You can overwrite this copy even if
| a dump to tape of that copy to the copy pool's DFSMShsm dump classes has
| been initiated but is only partially completed.
| You should only use the FORCE option if it is more important to take a new
| system-level backup than to save a previous system-level backup to tape.
| DUMP
| Indicates that you want to create a fast replication copy of the database copy
| pool and the log copy pool on disk and then initiate a dump to tape of the fast
| replication copy. The dump to tape begins after DB2 successfully establishes
| relationships for the fast replication copy.
| The BACKUP SYSTEM utility does not wait for the dump processing to
| complete.
| This option requires z/OS Version 1.8.
| DUMPCLASS
| Indicates the DFSMShsm dump class that you want to use for the dump
| processing. You can specify up to five dump classes. If you do not specify a
| dump class, DB2 uses the default dump classes that are defined for the copy
| pools.
| DUMPONLY
| Indicates that you want to create a dump on tape of an existing fast replication
| copy (that is currently residing on the disk) of the database copy pool and the
| log copy pool. You can also use this option to resume a dump process that has
| failed.
| The BACKUP SYSTEM utility does not wait for the dump processing to
| complete.
| This option requires z/OS Version 1.8.
| TOKEN (X'byte-string')
| Specifies which fast replication copy of the database copy pool and the log
| copy pool to dump to tape. The token is a 36-digit hexadecimal byte string that
| uniquely identifies each system-level backup and is reported in the DSNJU004
| job output. For a data sharing system, you should run DSNJU004 with the
| MEMBER option so that the system-level backup information is displayed for
| all members.
| If you do not specify TOKEN, the most recent fast replication copy of the copy
| pools is dumped to tape.
For information about defining copy pools and associated backup storage groups,
see z/OS DFSMSdfp Storage Administration Reference. Use the following DB2 naming
convention when you define these copy pools:
DSN$locn-name$cp-type
The variables that are used in this naming convention have the following
meanings:
DSN The unique DB2 product identifier.
$ A delimiter. You must use the dollar sign character ($).
locn-name
The DB2 location name.
cp-type The copy pool type. Use DB for database and LG for log.
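For example, if the DB2 location name were DSNLOC1 (a hypothetical name), the
database and log copy pools would be named as follows:

DSN$DSNLOC1$DB
DSN$DSNLOC1$LG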
| To dump to tape a fast replication copy of a system-level backup that was taken
| without the DUMP option, or to reinitiate dump processing that has failed:
| 1. Identify the token (a 36-digit hexadecimal byte string) in the DSNJU004 output.
| 2. Create and run your utility control statement with the DUMPONLY option.
| Specify the token if the system-level backup is not the most recent system-level
| backup taken.
| Restriction: Do not dump system-backups to the same tape that contains image
| copies or concurrent copies because the RECOVER utility requires access to
| both.
| 3. Run the DFSMShsm command LIST COPYPOOL with the ALLVOLS option to
| verify that the dump to tape was successful.
| The BACKUP SYSTEM utility issues the DFSMShsm command to initiate a
| dump, but it does not wait for the dump to be completed.
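As a sketch, the following control statement initiates a dump to tape of an existing
fast replication copy; replace X'byte-string' with the token that DSNJU004 reports,
or omit the TOKEN keyword to dump the most recent copy:

BACKUP SYSTEM DUMPONLY TOKEN (X'byte-string')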
You can restart a BACKUP SYSTEM utility job, but it starts from the beginning
again.
| Example 3: Creating a fast replication copy of the database copy pool and
| dumping the copy to tape. The following control statement specifies that BACKUP
| SYSTEM is to create a fast replication copy of the database copy pool and initiate a
| dump to tape of the fast replication copy.
| //SYSOPRB JOB (ACCOUNT),’NAME’,CLASS=K
| //UTIL EXEC DSNUPROC,SYSTEM=V91A,UID=’TEMB’,UTPROC=’’
| //*
| //*
| //DSNUPROC.SYSUT1 DD DSN=SYSOPR.SYSUT1,
| // DISP=(MOD,DELETE,CATLG),
| // SPACE=(16384,(20,20),,,ROUND),
| // UNIT=SYSDA
| //DSNUPROC.SYSIN DD *
| BACKUP SYSTEM DATA ONLY DUMP
| /*
| Example 4: Creating a fast replication copy of the database copy pool, dumping
| the copy to tape, and allowing oldest copy to be overwritten. The following
| control statement specifies that BACKUP SYSTEM is to create a fast replication
| copy of the database copy pool, initiate a dump to tape of the fast replication copy,
| and allow the oldest fast replication copy to be overwritten.
| //SYSOPRB JOB (ACCOUNT),’NAME’,CLASS=K
| //UTIL EXEC DSNUPROC,SYSTEM=V91A,UID=’TEMB’,UTPROC=’’
| //*
| //*
| //DSNUPROC.SYSUT1 DD DSN=SYSOPR.SYSUT1,
| // DISP=(MOD,DELETE,CATLG),
| // SPACE=(16384,(20,20),,,ROUND),
| // UNIT=SYSDA
| //DSNUPROC.SYSIN DD *
| BACKUP SYSTEM DATA ONLY DUMP FORCE
| /*
Chapter 6. CATENFM
The CATENFM utility enables a DB2 subsystem to enter DB2 Version 9.1
enabling-new-function mode and Version 9.1 new-function mode. It also enables a
DB2 subsystem to return to enabling-new-function mode from new-function mode.
All new Version 9.1 functions are unavailable when the subsystem is in
compatibility mode or enabling-new-function mode.
Syntax diagram
For guidance in interpreting syntax diagrams, see “How to read the syntax
diagrams” on page xiv.
CATENFM START | COMPLETE | ENFMON | CMON | CONVERT INPUT table-space-name
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
START
| Invokes the CATENFM utility and indicates the start of enabling-new-function
| mode processing. Drops and recreates the DSNKSX01 index with new key
Converting to new-function mode: When you migrate to DB2 Version 9.1, the DB2
subsystem enters compatibility mode. In compatibility mode, the DB2 subsystem
can coexist with other data sharing members that are at either Version 8 or Version
9.1 compatibility mode.
After enabling-new-function mode completes, the DB2 subsystem can enter Version
9.1 new-function mode. All new Version 9.1 functions are unavailable until the DB2
subsystem enters new-function mode.
The DSNTIJEN job runs CATENFM START, which causes the DB2 subsystem to
enter enabling-new-function mode. Run CATENFM START only when you are
ready to begin the enabling-new-function mode conversion process.
Chapter 7. CATMAINT
The CATMAINT utility updates the catalog; run this utility during migration to a
new release of DB2 or when IBM Software Support instructs you to do so.
Syntax diagram
For guidance in interpreting syntax diagrams, see “How to read the syntax
diagrams” on page xiv.

CATMAINT UPDATE
   [ SCHEMA SWITCH(schema_name,new_schema_name) ]
   [ VCAT SWITCH(vcat_name,new_vcat_name) ]
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
UPDATE
Indicates that you want to update the catalog. Run this option only when you
migrate to a new release of DB2 or when IBM Software Support instructs you
to do so.
To calculate the size of the work file database, see DB2 Installation Guide.
Updating the catalog for a new release: When you install or migrate to a new
release of DB2, you must update the catalog for the prior release to the new
version. The DSNTIJTC job runs CATMAINT UPDATE to update the catalog. DB2
displays migration status message DSNU777I at several points during CATMAINT
execution.
| Renaming the owner, creator, and schema of database objects, plans, and packages:
| To rename the owner, creator, and schema of database objects, plan, and packages,
| run the CATMAINT SCHEMA SWITCH. This process updates every owner, creator
| or schema name in the catalog and directory that matches the schema_name value.
| All grants that were made by or received by the original owner are changed to the
| new owner. You can change multiple names by repeating the SWITCH keyword,
| but you cannot specify the same name more than once. The names cannot be
| longer than 8 bytes in EBCDIC representation. ’SYSIBM’ is not allowed as a
| schema_name or new_schema_name. OWNER FROM and SCHEMA SWITCH are
| mutually exclusive. You cannot specify both clauses in the same CATMAINT
| UPDATE statement.
| Ownership of roles is changed like other objects. However, if the associated trusted
| context role is owned by the owner_name, the ownership of the role will not be
| changed because a role cannot be owned by itself.
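For example, the following sketch renames the owner, creator, and schema name
ABC to XYZ; both names are hypothetical:

CATMAINT UPDATE SCHEMA SWITCH(ABC,XYZ)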
| Changing the catalog name used by storage groups or index spaces and table
| spaces: To change the catalog name that is used by storage groups or index spaces
| and table spaces, run the CATMAINT VCAT SWITCH utility. The VCAT SWITCH
| option is similar to the ALTER TABLESPACE USING VCAT statement for changing
| the catalog name. You need to move the data for the affected indexes or table
| spaces to the data set on the new catalog in a separate step. For procedures for
| moving DB2 data sets, see DB2 Administration Guide. You can change multiple
| names by repeating the SWITCH keyword, but you cannot specify the same name
| more than once. The names cannot be longer than 8 bytes in EBCDIC
| representation. The VCAT SWITCH option has no effect on the system indexes and
| table spaces in DSNDB06/DSNDB01 because the catalog name is maintained in the
| parameter. ’SYSIBM’ is not allowed as a vcat_name or new_vcat_name.
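For example, the following sketch changes the catalog name that the affected
objects use from OLDVCAT to NEWVCAT; both names are hypothetical:

CATMAINT UPDATE VCAT SWITCH(OLDVCAT,NEWVCAT)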
| Identifying invalidated plans and packages after the owner, creator, or schema
| name of an object is renamed: When the schema name of an object is changed, any
| plans or packages that are dependent on the object are invalidated. Automatic
| rebind occurs when the invalidated plan or package is executed. Rebind might not
| be successful if the object is referenced in the application explicitly with the
| original schema name. In this case, you need to modify the application. The
| following queries identify the plans or packages that will be invalidated:
| SELECT DISTINCT DNAME
| FROM SYSIBM.SYSPLANDEP
| WHERE BCREATOR IN (schema_name1, schema_name2...)
| ORDER BY DNAME;
|
| SELECT DISTINCT COLLID, NAME
| FROM SYSIBM.SYSPACKDEP, SYSIBM.SYSPACKAGE
| WHERE BQUALIFIER IN (schema_name1, schema_name2...)
| ORDER BY COLLID, NAME;
| CHECK DATA does not check LOB or XML table spaces. The utility does not check
| informational referential constraints.
Restriction: Do not run CHECK DATA on encrypted data. Because CHECK DATA
does not decrypt the data, the utility might produce unpredictable results.
For a diagram of CHECK DATA syntax and a description of available options, see
“Syntax and options of the CHECK DATA control statement” on page 62. For
detailed guidance on running this utility, see “Instructions for running CHECK
DATA” on page 69.
| CHECK DATA SHRLEVEL CHANGE operates on shadow copies of the table space
| and generates the corresponding REPAIR statements.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
| If you are using SHRLEVEL CHANGE, the batch user ID that invokes COPY with
| the CONCURRENT option must provide the necessary authority to execute the
| DFDSS COPY command. DFDSS will create a shadow data set with the authority
| of the utility batch address space. The submitter should have an RACF ALTER
| authority, or its equivalent, for the shadow data set.
If you specify the DELETE option, the privilege set must include the DELETE
privilege on the tables that are being checked. If you specify the FOR EXCEPTION
option, the privilege set must include the INSERT privilege on any exception table
that is used. If you specify the AUXERROR INVALIDATE option, the privilege set
must include the UPDATE privilege on the base tables that contain LOB columns.
Syntax diagram

CHECK DATA table-space-spec
   [ PART integer ]
   [ CLONE ]
   [ SHRLEVEL REFERENCE | SHRLEVEL CHANGE ]        (REFERENCE is the default)
   [ drain-spec ]
   [ SCOPE PENDING | AUXONLY | ALL | REFONLY ]     (PENDING is the default)
   [ AUXERROR REPORT | AUXERROR INVALIDATE ] (1)   (REPORT is the default)
   [ FOR EXCEPTION IN table-name1 USE table-name2 ]
   [ DELETE NO | DELETE YES [ LOG YES | LOG NO ] ] (DELETE NO and LOG YES are the defaults)
   [ PUNCHDDN ddname ]                             (SYSPUNCH is the default)
   [ SORTDEVT device-type ]
   [ SORTNUM integer ]

Notes:
1 If you specify AUXERROR and LOBERROR or XMLERROR, the options for the keywords
  (REPORT and INVALIDATE) must match.

table-space-spec:
   TABLESPACE [database-name.]table-space-name

drain-spec:
   [ DRAIN_WAIT integer ] [ RETRY integer ] [ RETRY_DELAY integer ]
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
DATA Indicates that you want the utility to check referential and table
check constraints. CHECK DATA does not check informational
referential constraints.
TABLESPACE database-name.table-space-name
| Specifies the table space to which the data belongs. You can specify
| only base table spaces, not LOB table spaces or XML table spaces.
database-name is the name of the database and is optional. The
default is DSNDB04.
table-space-name is the name of the table space.
PART integer Identifies which partition to check for constraint violations.
integer is the number of the partition and must be in the range
from 1 to the number of partitions that are defined for the table
space. The maximum is 4096.
| CLONE Indicates that CHECK DATA is to check the clone table in the
| specified table space. Because clone tables cannot have referential
| constraints, the utility checks only for inconsistencies
| between the clone table data and the corresponding LOB data. If
| you do not specify CLONE, CHECK DATA operates against only
| the base table.
| SHRLEVEL Indicates the type of access that is to be allowed for the index,
| table space, or partition that is to be checked during CHECK
| DATA processing.
| REFERENCE
| Specifies that applications can read from but cannot write to
| the index, table space, or partition that is to be checked. The
| default is REFERENCE.
| CHANGE
| Specifies that applications can read from and write to the
| index, table space, or partition that is to be checked.
| DRAIN_WAIT integer
| Specifies the number of seconds that CHECK DATA is to wait
| when draining the table space or index. The specified time is the
| aggregate time for objects that are to be checked. This value
| overrides the values that are specified by the IRLMRWT and
| UTIMOUT subsystem parameters.
| integer can be any integer from 0 to 1800. If you do not specify
| DRAIN_WAIT or specify a value of 0, CHECK DATA uses the
| value of the lock timeout subsystem parameter IRLMRWT.
| RETRY integer Specifies the maximum number of retries that CHECK DATA is to
| attempt.
| integer can be any integer from 0 to 255. If you do not specify
| RETRY, CHECK DATA uses the value of the utility multiplier
| system parameter UTIMOUT.
| Specifying RETRY can increase processing costs and result in
| multiple or extended periods during which the specified index,
| table space, or partition is in read-only access.
| RETRY_DELAY integer
| Specifies the minimum duration, in seconds, between retries.
| integer can be any integer from 1 to 1800.
| If you do not specify RETRY_DELAY, CHECK DATA uses the
| smaller of the following two values:
| v DRAIN_WAIT value × RETRY value
| v DRAIN_WAIT value × 10
SCOPE Limits the scope of the rows in the table space that are to be
checked.
PENDING
Indicates that the only rows that are to be checked are
those that are in table spaces, partitions, or tables that are
in CHECK-pending status. The referential integrity check,
constraint check, and the LOB check are all performed.
If you specify this option for a table space that is not in
CHECK-pending status, the CHECK DATA utility does not
check the table space and does not issue an error message.
The default is PENDING.
AUXONLY
| Indicates that only the LOB column and the XML column
check are to be performed for table spaces that have tables
| with LOB columns or XML columns. The referential
integrity and constraint checks are not performed.
ALL Indicates that all dependent tables in the specified table
spaces are to be checked. The referential integrity check,
| constraint check, LOB check, and the XML check are
performed.
REFONLY
| Same as the ALL option, except that the LOB column check
| and the XML column check are not performed.
AUXERROR Specifies the action that CHECK DATA is to perform when it finds
| a LOB or XML column check error.
| REPORT A LOB or XML column check error is reported
with a warning message. The base table space is
set to the auxiliary CHECK-pending (ACHKP)
status.
The default is REPORT.
| INVALIDATE A LOB or XML column check error is reported
| with a warning message. The base table LOB or
| XML column is set to an invalid status. A LOB or
| deleted from a table space that is not logged, the table space is
| marked informational COPY-pending.
| PUNCHDDN ddname
| Specifies the DD statement for a data set that is to receive the
| REPAIR utility control statements that CHECK DATA SHRLEVEL
| CHANGE generates.
| ddname is the DD name. The default is SYSPUNCH.
| The PUNCHDDN keyword specifies either a DD name or a
| TEMPLATE name specification from a previous TEMPLATE control
| statement. If utility processing detects that the specified name is
| both a name in the current job step and a TEMPLATE name, the
| utility uses the DD name.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be
dynamically allocated by DFSORT. You can specify any device type
that is acceptable to the DYNALLOC parameter of the SORT or
OPTION control statement for DFSORT, as described in DFSORT
Application Programming: Guide.
Do not use a TEMPLATE specification to dynamically allocate sort
work data sets. The presence of the SORTDEVT keyword controls
dynamic allocation of these data sets.
device-type is the device type. If you omit SORTDEVT and a sort is
required, you must provide the DD statements that the sort
program requires for the temporary data sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be
dynamically allocated by the sort program.
integer is the number of temporary data sets that can range from 2
to 255.
If you omit SORTDEVT, SORTNUM is ignored. If you use
SORTDEVT and omit SORTNUM, no value is passed to DFSORT;
DFSORT uses its own default.
| You need at least two sort work data sets for each sort. The
| SORTNUM value applies to each sort invocation in the utility. For
| example, if there are three indexes, SORTKEYS is specified, there are
| no constraints that limit parallelism, and SORTNUM is specified as 8,
| a total of 24 sort work data sets are allocated for the job.
| Each sort work data set consumes both above-the-line and
| below-the-line virtual storage. If you specify a value for
| SORTNUM that is too high, the utility might decrease the degree
| of parallelism due to virtual storage constraints, possibly down to
| a degree of one, which means no parallelism.
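The following control statement is a minimal sketch that ties these options together; the table space name DSN8D91A.DSN8S91D is an assumption used only for illustration, and keyword order follows the syntax diagram earlier in this chapter. With DRAIN_WAIT 20 and RETRY 3, the default RETRY_DELAY would be the smaller of 20 × 3 = 60 and 20 × 10 = 200, that is, 60 seconds; the statement overrides it with an explicit value of 30 seconds.
CHECK DATA TABLESPACE DSN8D91A.DSN8S91D
  SCOPE PENDING AUXERROR REPORT
  SHRLEVEL CHANGE
  DRAIN_WAIT 20 RETRY 3 RETRY_DELAY 30
  PUNCHDDN SYSPUNCH
  SORTDEVT SYSDA SORTNUM 4
Because SHRLEVEL CHANGE is specified, any REPAIR statements that CHECK DATA generates are written to the SYSPUNCH data set, as described under PUNCHDDN.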
4. Prepare a utility control statement that specifies the options for the tasks that
you want to perform, as described in “Instructions for specific tasks” on page
76.
5. Check the compatibility table in “Concurrency and compatibility for CHECK
DATA” on page 80 if you want to run other jobs concurrently on the same
target objects.
6. Plan for restarting CHECK DATA if the job doesn’t complete, as described in
“Terminating or restarting CHECK DATA” on page 79.
7. Run CHECK DATA by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 17.
The relationship between a base table with a LOB column and the LOB table space
is shown in Figure 8. The LOB column in the base table points to the auxiliary
index on the LOB table space, as illustrated in the figure. For more information
about LOBs and auxiliary tables, see Part 2 of DB2 Administration Guide.
Figure 8. Relationship between a base table with a LOB column and the LOB table space
Notes:
1. You can use CHAR(5) for any type of table space, but you must use it for table spaces that are defined with the
LARGE or DSSIZE options.
If you delete rows by using the CHECK DATA utility with SCOPE ALL, you must
create exception tables for all tables that are named in the table spaces and for all
their descendents. All descendents of any row are deleted.
v If column n+2 is of type TIMESTAMP, CHECK DATA records the starting time.
Otherwise, it does not use column n+2.
v You must have DELETE authorization on the dependent table that is being
checked.
v You must have INSERT authorization on the exception table.
v Column names in the exception table can have any name.
v Any change to the structure of the dependent table (such as a dropped column)
is not automatically recorded in the exception table. You must make that change
in the exception table.
An auxiliary table cannot be an exception table. A LOB column check error is not
included in the exception count. A row with only a LOB column check error does
not participate in exception processing.
You can create an exception table for the project activity table by using the
following SQL statements:
EXEC SQL
CREATE TABLE EPROJACT
LIKE DSN8910.PROJACT
IN DATABASE DSN8D91A
ENDEXEC
EXEC SQL
ALTER TABLE EPROJACT
ADD RID CHAR(4)
ENDEXEC
EXEC SQL
ALTER TABLE EPROJACT
ADD TIME TIMESTAMP NOT NULL WITH DEFAULT
ENDEXEC
The first statement requires the SELECT privilege on table DSN8910.PROJACT and
the privileges that are usually required to create a table.
Table EPROJACT has the same structure as table DSN8910.PROJACT, but it can
have two extra columns. The columns in EPROJACT are:
v The first five columns mimic the columns of the project activity table; they have
exactly the same names and descriptions. Although the column names are the
same, they do not need to be. However, the rest of the column attributes for the
initial columns must be the same as those of the table that is being checked.
v The next column, which is added by ALTER TABLE, is optional; CHECK DATA
uses it as an identifier. The name “RID” is an arbitrary choice; if the table
already has a column with that name, use a different name. The column
description, CHAR(4), is required.
v The final timestamp column is also optional. If you define the timestamp
column, a row identifier (RID) column must precede this column. You might
define a permanent exception table for each table that is subject to referential or
table check constraints. You can define it once and use it to hold invalid rows
that CHECK DATA detects. The TIME column allows you to identify rows that
were added by the most recent run of the utility.
Eventually, you correct the data in the exception tables, perhaps with an SQL
UPDATE statement, and transfer the corrections to the original tables by using
statements that are similar to those in the following example:
INSERT INTO DSN8910.PROJACT
SELECT PROJNO, ACTNO, ACSTAFF, ACSTDATE, ACENDATE
FROM EPROJACT
WHERE TIME > CURRENT TIMESTAMP - 1 DAY;
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space
Object that is to be checked. (If you want to check only one partition of a
table space, use the PART option in the control statement.)
Exception table
Table that stores rows that violate any referential constraints. For each table
in a table space that is checked, specify the name of an exception table in
the utility control statement. Any row that violates a referential constraint
is copied to the exception table.
Defining work data sets: Three sequential data sets are required during execution
of CHECK DATA. Two work data sets and one error data set are identified by the
DD statements that you name in the WORKDDN and ERRDDN options.
Create the ERRDDN data set so that it is large enough to accommodate one error
entry (length=60 bytes) per violation that CHECK DATA detects.
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
| Shadow data set names: Each shadow data set must have the following name:
| catname.DSNDBx.dbname.psname.y000z.Lnnn
| To determine the names of existing data sets, execute one of the following queries
| against the SYSTABLEPART or SYSINDEXPART catalog tables:
| SELECT DBNAME, TSNAME, IPREFIX
| FROM SYSIBM.SYSTABLEPART
| WHERE DBNAME = ’dbname’
| AND TSNAME = ’psname’;
| SELECT DBNAME, IXNAME, IPREFIX
| FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
| WHERE X.NAME = Y.IXNAME
| AND X.CREATOR = Y.IXCREATOR
| AND X.DBNAME = ’dbname’
| AND X.INDEXSPACE = ’psname’;
| For a partitioned table space, DB2 returns rows from which you select the row for
| the partitions that you want to check.
| Defining shadow data sets: Consider the following actions when you preallocate
| the data sets:
| v Allocate the shadow data sets according to the rules for user-managed data sets.
| v Define the shadow data sets as LINEAR.
| v Use SHAREOPTIONS(3,3).
| v Define the shadow data sets as EA-enabled if the original table space or index
| space is EA-enabled.
| v Allocate the shadow data sets on the volumes that are defined in the storage
| group for the original table space or index space.
| If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
| the SECQTY value for the table space or index space.
| Recommendation: Use the MODEL option, which causes the new shadow data set
| to be created like the original data set. This method is shown in the following
| example:
| DEFINE CLUSTER +
| (NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
| MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
| DATA +
| (NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
| MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’))
| Creating shadow data sets for indexes: When you preallocate shadow data sets for
| indexes, create the data sets as follows:
| v Create shadow data sets for the partition of the table space and the
| corresponding partition in each partitioning index and data-partitioned
| secondary index.
| v Create a shadow data set for logical partitions of nonpartitioned secondary
| indexes.
| Use the same naming scheme for these index data sets as you use for other data
| sets that are associated with the base index, except use J0001 instead of I0001.
| Estimating the size of shadow data sets: If you have not changed the value of
| FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
| comparable to the amount of required space for the original data set.
Whenever the scope information is in doubt, run the utility with the SCOPE ALL
option. The scope information is recorded in the DB2 catalog. The scope
information can become in doubt whenever you start the target table space with
ACCESS(FORCE), or when the catalog is recovered to a point in time.
If you want to check only the tables with LOB columns, specify the AUXONLY
option. If you want to check all dependent tables in the specified table spaces
except tables with LOB columns, specify the REFONLY option.
Finding violations
CHECK DATA issues a message for every row that contains a referential or table
check constraint violation. The violation is identified by:
v The RID of the row
v The name of the table that contains the row
v The name of the constraint that is being violated
You can automatically delete rows that violate referential or table check constraints
by specifying CHECK DATA with DELETE YES. However, you should be aware of
the following possible problems:
v The violation might be created by a non-referential integrity error. For example,
the indexes on a table might be inconsistent with the data in a table.
CHECK DATA uses the primary key index and all indexes that exactly match a
foreign key. Therefore, before running CHECK DATA, ensure that the indexes are
consistent with the data by using the CHECK INDEX utility.
If you run CHECK DATA with the DELETE NO option and referential or table
check constraint violations are found, the table space or partition is placed in
CHECK-pending status.
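As a sketch of how DELETE YES and the FOR EXCEPTION clause fit together with the exception table EPROJACT that was created earlier, a control statement might look like the following; the table space name DSN8D91A.DSN8S91P is an assumption, and EPROJACT is qualified by the utility job's user ID unless you add a qualifier.
CHECK DATA TABLESPACE DSN8D91A.DSN8S91P
  FOR EXCEPTION IN DSN8910.PROJACT USE EPROJACT
  SCOPE ALL DELETE YES
Rows that violate referential or table check constraints are copied to EPROJACT and deleted, together with their descendents, from DSN8910.PROJACT.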
Orphan LOBs: An orphan LOB column is a LOB that is found in the LOB table
space but that is not referenced by the base table space. If an orphan error is the
only type of error reported by CHECK DATA, the base table is considered correct.
Missing LOBs: A missing LOB column is a LOB that is referenced by the base table
space but that is not in the LOB table space. A missing LOB can result from the
following situations:
v You recover the LOB table space to a point in time prior to the first insertion of
the LOB into the base table.
v You recover the LOB table space to a point in time when the LOB column is null
or has a zero length.
Out-of-synch LOBs: An out-of-synch LOB error is a LOB that is found in both the
base table and the LOB table space, but the LOB in the LOB table space is at a
different level. A LOB column is also out-of-synch if the LOB column in the base
table is null or has a zero length, but the LOB is found in the LOB table space. An
out-of-synch LOB
can occur anytime you recover the LOB table space or the base table space to a
prior point in time.
Invalid LOBs: An invalid LOB is an uncorrected LOB column error that is found
by a previous execution of CHECK DATA AUXERROR INVALIDATE.
| Detecting LOB column errors: If you specify CHECK DATA AUXERROR REPORT,
AUXERROR INVALIDATE, LOBERROR REPORT, or LOBERROR INVALIDATE
and a LOB column check error is detected, DB2 issues a message that identifies the
table, row, column, and type of error. Any additional actions depend on the option
that you specify for the AUXERROR or LOBERROR parameter:
| v When you specify the AUXERROR REPORT or LOBERROR REPORT option,
DB2 sets the base table space to the auxiliary CHECK-pending (ACHKP) status.
If CHECK DATA encounters only invalid LOB columns and no other LOB
column errors, the base table space is set to the auxiliary warning (AUXW)
status.
| v When you specify the AUXERROR INVALIDATE or LOBERROR INVALIDATE
| option, DB2 sets the base table LOB columns that are in error to an invalid
status. DB2 resets the invalid status of LOB columns that have been corrected. If
any invalid LOB columns remain in the base table, DB2 sets the base table space
to auxiliary warning (AUXW) status. You can use SQL to update a LOB column
that is in the AUXW status; however, any other attempt to access the column
results in a -904 SQL return code.
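For example, to check only the LOB and XML columns and mark columns that are in error as invalid, a statement along the following lines could be used; the table space name DSN8D91A.DSN8S91B is an assumption.
CHECK DATA TABLESPACE DSN8D91A.DSN8S91B
  SCOPE AUXONLY AUXERROR INVALIDATE
If the invalidated columns are later corrected, running the statement again resets their invalid status, as described in the preceding list.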
Use one of the following actions to remove the auxiliary CHECK-pending status if
DB2 does not find any inconsistencies:
v Use the SCOPE(ALL) option to check all dependent tables in the specified table
space. The checks include referential integrity constraints, table check
| constraints, and the existence of LOB and XML columns.
v Use the SCOPE(PENDING) option to check table spaces or partitions with
CHKP status. The checks include referential integrity constraints, table check
| constraints, and the existence of LOB and XML columns.
| v Use the SCOPE(AUXONLY) option to check for LOB and XML columns.
The CHECKDAT phase places the table space in the CHECK-pending status when
CHECK DATA detects an error; at the end of the phase, CHECK DATA resets the
CHECK-pending status if it detects no errors. The REPORTCK phase resets the
CHECK-pending status if you specify the DELETE YES option.
Claims and drains: Table 9 shows which claim classes CHECK DATA claims and
drains and any restrictive status that the utility sets on the target object. The
legend for these claim classes is located at the bottom of the table.
Table 9. Claim classes of CHECK DATA operations
Target objects                              CHECK DATA   CHECK DATA   CHECK DATA PART   CHECK DATA PART
                                            DELETE NO    DELETE YES   DELETE NO         DELETE YES
Table space or partition                    DW/UTRO      DA/UTUT      DW/UTRO           DA/UTUT
Partitioning index or index partition       DW/UTRO      DA/UTUT      DW/UTRO           DA/UTUT
Secondary index                             DW/UTRO      DA/UTUT      none              DR
Logical partition of index                  none         none         DW/UTRO           DA/UTUT
Primary index                               DW/UTRO      DW/UTRO      DW/UTRO           DW/UTRO
RI dependent and descendent table spaces    none         DA/UTUT      none              DA/UTUT
  and indexes
RI exception table spaces and indexes       DA/UTUT      DA/UTUT      DA/UTUT           DA/UTUT
  (FOR EXCEPTION only)
Legend:
v DA: Drain all claim classes, no concurrent SQL access
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers
v DW: Drain the write claim class, concurrent access for SQL readers
v UTUT: Utility restrictive state, exclusive control
v UTRO: Utility restrictive state, read-only access allowed
v none: Object not affected by this utility
v RI: Referential Integrity
Table 10 shows claim classes on a LOB table space and an index on the auxiliary
table.
Table 10. Claim classes of CHECK DATA operations on a LOB table space and index on the
auxiliary table
Target objects                    CHECK DATA DELETE NO    CHECK DATA DELETE YES
LOB table space                   DW/UTRO                 DA/UTUT
Index on the auxiliary table      DW/UTRO                 DA/UTUT
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v DA: Drain all claim classes, no concurrent SQL access
v UTRO: Utility restrictive state, read-only access allowed
v UTUT: Utility restrictive state, exclusive control
Compatibility: The following utilities are compatible with CHECK DATA and can
run concurrently on the same target object:
v DIAGNOSE
v MERGECOPY
v MODIFY
v REPORT
v STOSPACE
v UNLOAD (when CHECK DATA DELETE NO)
To run on DSNDB01.SYSUTILX, CHECK DATA must be the only utility in the job
step and the only utility that is running in the DB2 subsystem.
The index on the auxiliary table for each LOB column inherits the same
compatibility and concurrency attributes as a primary index.
that violate these constraints into the exception tables that are specified in the FOR
EXCEPTION clause. For example, CHECK DATA is to copy the violations in table
DSN8810.DEPT into table DSN8810.EDEPT.
Figure 10. Example of using the CHECK DATA utility to copy invalid data into exception
tables and to delete the invalid data from the original table.
You can create exception tables by using the LIKE clause in the CREATE TABLE
statement. For an example of creating an exception table, see “Example: creating an
exception table for the project activity table” on page 72.
Example 2: Running CHECK DATA on a table space with LOBs. Before you run
CHECK DATA on a table space that contains at least one LOB column, complete
the steps that are listed in “For a table with LOB columns” on page 70.
Figure 11. Example of running CHECK DATA on a table space with LOBs
Also run CHECK INDEX before running CHECK DATA, especially if you specify
DELETE YES. Running CHECK INDEX before CHECK DATA ensures that the
indexes that CHECK DATA uses are valid. When checking an auxiliary table index,
CHECK INDEX verifies that each LOB is represented by an index entry, and that
an index entry exists for every LOB. For more information about running the
CHECK DATA utility on a table space that contains at least one LOB column, see
“For a table with LOB columns” on page 70.
For a diagram of CHECK INDEX syntax and a description of available options, see
“Syntax and options of the CHECK INDEX control statement” on page 86. For
detailed guidance on running this utility, see “Instructions for running CHECK
INDEX” on page 89.
Output: CHECK INDEX generates several messages that show whether the indexes
are consistent with the data. See Part 2 of DB2 Messages for more information
about these messages.
For unique indexes, any two null values are treated as equal values, unless the
index was created with the UNIQUE WHERE NOT NULL clause. In that case, if
the key is a single column, it can contain any number of null values, and CHECK
INDEX does not issue an error message.
CHECK INDEX issues an error message if it finds two or more null values and the
unique index was not created with the UNIQUE WHERE NOT NULL clause.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute CHECK INDEX, but
only on a table space in the DSNDB01 or DSNDB06 databases.
| If you are using SHRLEVEL CHANGE, the batch user ID that invokes COPY with
| the CONCURRENT option must provide the necessary authority to execute the
| DFDSS ADRDSSU command. DFDSS will create a shadow data set with the
| authority of the utility batch address space. The submitter should have RACF
| ALTER authority, or its equivalent, for the shadow data set.
Syntax diagram
CHECK INDEX ( index-name , ... ) | ( ALL ) TABLESPACE database-name.table-space-name | LIST listdef-name
   PART integer   CLONE
   SHRLEVEL REFERENCE | SHRLEVEL CHANGE   DRAIN_WAIT integer   RETRY integer   RETRY_DELAY integer
   WORKDDN ddname   (the default is SYSUT1)
   SORTDEVT device-type   SORTNUM integer
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
INDEX Indicates that you are checking for index consistency.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. The
list should contain only index spaces. Do not specify the name of
an index or of a table space. DB2 groups indexes by their related
table space and executes CHECK INDEX once per table space.
CHECK INDEX allows one LIST keyword for each control
statement in CHECK INDEX. This utility will only process clone
data if the CLONE keyword is specified. The use of CLONED YES
on the LISTDEF statement is not sufficient. For more information
about LISTDEF specifications, see Chapter 15, “LISTDEF,” on page
185.
(index-name, ...)
Specifies the indexes that are to be checked. All indexes must
belong to tables in the same table space. If you omit this option,
you must use the (ALL) TABLESPACE option. Then CHECK
INDEX checks all indexes on all tables in the table space that you
specify.
index-name is the name of an index, in the form creator-id.name. If
you omit the qualifier creator-id., the user identifier for the utility
job is used. If you use a list of names, separate items in the list by
commas. Parentheses are required around a name or list of names.
Enclose the index name in quotation marks if the name contains a
blank.
PART integer Identifies a physical partition of a partitioned index or a logical
partition of a nonpartitioned index that is to be checked for
consistency. If you specify an index on a nonpartitioned table
space, an error occurs.
integer is the number of the partition and must be in the range
from 1 to the number of partitions that are defined for the table
space. The maximum is 4096.
If the PART keyword is not specified, CHECK INDEX tests the
entire target index for consistency.
(ALL) Specifies that all indexes on all tables in the specified table space
are to be checked.
TABLESPACE database-name.table-space-name
Specifies the table space from which all indexes are to be checked.
If an explicit list of index names is not specified, all indexes on all
tables in the specified table space are checked.
Do not specify TABLESPACE with an explicit list of index names.
database-name is the name of the database that the table space
belongs to. The default is DSNDB04.
table-space-name is the name of the table space from which all
indexes are checked.
| CLONE Indicates that CHECK INDEX is to check only the specified
| indexes that are on clone tables. This utility will only process clone
| data if the CLONE keyword is specified. The use of CLONED YES
| on the LISTDEF statement is not sufficient.
SHRLEVEL Indicates the type of access that is to be allowed for the index,
table space, or partition that is to be checked during CHECK
INDEX processing.
REFERENCE
Specifies that applications can read from but cannot write to
the index, table space, or partition that is to be checked. The
default is REFERENCE.
If you specify SHRLEVEL REFERENCE or use this value as the
default, DB2 unloads the index entries, sorts the index entries,
and scans the data to validate the index entries.
CHANGE
Specifies that applications can read from and write to the
index, table space, or partition that is to be checked.
If you specify SHRLEVEL CHANGE, DB2 performs the
following actions:
v Drains all writers and forces the buffers to disk for the
specified object and all of its indexes
v Invokes DFSMSdss™ to copy the specified object and all of
its indexes to shadow data sets
v Enables read-write access for the specified object and all of
its indexes
v Runs CHECK INDEX on the shadow data sets
WORKDDN ddname
Specifies a DD statement for a temporary work file.
You can use the WORKDDN keyword to specify either a DD name
or a TEMPLATE name specification from a previous TEMPLATE
control statement. If utility processing detects that the specified
name is both a DD name in the current job step and a TEMPLATE
name, the utility uses the DD name. For more information about
TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on page
641.
ddname is the DD name. The default is SYSUT1.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be
dynamically allocated by DFSORT. You can specify any device type
that is acceptable to the DYNALLOC parameter of the SORT or
OPTION control statement for DFSORT.
A TEMPLATE specification does not dynamically allocate sort
work data sets. The SORTDEVT keyword controls dynamic
allocation of these data sets.
device-type is the device type. If you omit SORTDEVT and a sort is
required, you must provide the DD statements that the sort
program requires for the temporary data sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be
dynamically allocated by the sort program.
integer is the number of temporary data sets that can range from 2
to 255.
If you omit SORTDEVT, SORTNUM is ignored. If you use
SORTDEVT and omit SORTNUM, no value is passed to DFSORT;
DFSORT uses its own default.
| You need at least two sort work data sets for each sort. The
| SORTNUM value applies to each sort invocation in the utility. For
| example, if there are three indexes, SORTKEYS is specified, there are
| no constraints that limit parallelism, and SORTNUM is specified as 8,
| a total of 24 sort work data sets are allocated for the job.
| Each sort work data set consumes both above-the-line and
| below-the-line virtual storage. If you specify a value for
| SORTNUM that is too high, the utility might decrease the degree
| of parallelism due to virtual storage constraints, possibly down to
| a degree of one, which means no parallelism.
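As an illustration of these options, the following control statement is a sketch that checks all indexes on the third partition while allowing concurrent readers and writers; the table space name DSN8D91A.DSN8S91E and the SYSDA device type are assumptions.
CHECK INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E PART 3
  SHRLEVEL CHANGE
  SORTDEVT SYSDA SORTNUM 4
Because SORTDEVT is specified, DFSORT dynamically allocates the sort work data sets, so no sort work DD statements are needed in the JCL.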
2. Create JCL statements, by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 17. (For examples of JCL for
CHECK INDEX, see “Sample CHECK INDEX control statements” on page 100.)
3. Prepare a utility control statement that specifies the options for the tasks that
you want to perform.
4. Check the compatibility table in “Concurrency and compatibility for CHECK
INDEX” on page 98 if you want to run other jobs concurrently on the same
target objects.
5. Plan for restart if the CHECK INDEX job doesn’t complete, as described in
“Terminating or restarting CHECK INDEX” on page 98.
| 6. Read “After running CHECK INDEX” on page 99.
7. Run CHECK INDEX by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 17.
Note: Inaccurate statistics for tables, table spaces, or indexes can result in a sort
failure during CHECK INDEX.
| Notes:
| 1. Required when collecting inline statistics on at least one data-partitioned secondary
| index.
| 2. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
| data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Index space
Object that is to be checked. (If you want to check only one partition of an
index, use the PART option in the control statement.)
| Calculating the size of the sort work data sets: To calculate the approximate size
| (in bytes) of the ST01WKnn data set, use the following formula:
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
Another method of estimating the size of the WORKDDN data set is to obtain the
high-used relative byte address (RBA) for each index from a VSAM catalog listing.
Then add the RBAs.
Shadow data set names: Each shadow data set must have the following name:
| catname.DSNDBx.dbname.psname.y000z.Lnnn
To determine the names of existing data sets, execute one of the following queries
against the SYSTABLEPART or SYSINDEXPART catalog tables:
SELECT DBNAME, TSNAME, IPREFIX
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
SELECT DBNAME, IXNAME, IPREFIX
FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
For a partitioned table space, DB2 returns rows from which you select the row for
the partitions that you want to check.
Defining shadow data sets: Consider the following actions when you preallocate
the data sets:
v Allocate the shadow data sets according to the rules for user-managed data sets.
v Define the shadow data sets as LINEAR.
v Use SHAREOPTIONS(3,3).
| v Allocate the shadow data sets for the base objects or the clone objects, as appropriate.
v Define the shadow data sets as EA-enabled if the original table space or index
space is EA-enabled.
v Allocate the shadow data sets on the volumes that are defined in the storage
group for the original table space or index space.
If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
the SECQTY value for the table space or index space.
Recommendation: Use the MODEL option, which causes the new shadow data set
to be created like the original data set. This method is shown in the following
example:
| DEFINE CLUSTER +
| (NAME(’catname.DSNDBC.dbname.psname.x000z.L001’) +
| MODEL(’catname.DSNDBC.dbname.psname.y000z.L001’)) +
| DATA +
| (NAME(’catname.DSNDBD.dbname.psname.x000z.L001’) +
| MODEL(’catname.DSNDBD.dbname.psname.y000z.L001’) )
Creating shadow data sets for indexes: When you preallocate shadow data sets for
indexes, create the data sets as follows:
v Create shadow data sets for the partition of the table space and the
corresponding partition in each partitioning index and data-partitioned
secondary index.
v Create a shadow data set for logical partitions of nonpartitioned secondary
indexes.
Use the same naming scheme for these index data sets as you use for other data
sets that are associated with the base index, except use J0001 instead of I0001. For
more information about this naming scheme, see the information about the shadow
data set naming convention at the beginning of this section, “Shadow data sets” on
page 91.
Estimating the size of shadow data sets: If you have not changed the value of
FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
comparable to the amount of required space for the original data set.
In this example, the keys are unique within each logical partition, but both
logical partitions contain the key T; so for the index as a whole, the keys are not
unique. CHECK INDEX does not detect the duplicates.
v CHECK INDEX does not detect keys that are out of sequence between different
logical partitions. For example, the following keys are out of sequence:
1 7 5 8 9 10 12
Figure 13 shows the flow of a CHECK INDEX job with a parallel index check for a
nonpartitioned table space or a single partition of a partitioned table space.
Figure 13. Parallel index check for a nonpartitioned table space or a single partition of a
partitioned table space
Figure 14 shows the flow of a CHECK INDEX job with a parallel index check for
all partitioning indexes on a partitioned table space.
Figure 14. Parallel index check for all partitioning indexes on a partitioned table space
Figure 15 shows the flow of a CHECK INDEX job with a parallel index check for a
partitioned table space with a single nonpartitioned secondary index.
Figure 15. Parallel index check for a partitioned table space with a single nonpartitioned
secondary index
Figure 16 shows the flow of a CHECK INDEX job with a parallel index check for
all indexes on a partitioned table space. Each unload task pipes keys to each sort
task, sorting the keys and piping them back to the check tasks.
Figure 16. Parallel index check for all indexes on a partitioned table space
You can restart a CHECK INDEX utility job, but it starts from the beginning again.
For guidance in restarting online utilities, see “Restarting an online utility” on page
39.
Claims and drains: Table 13 shows which claim classes CHECK INDEX claims and
drains and any restrictive state that the utility sets on the target object.
| Table 13. Claim classes of CHECK INDEX operations
| Target                                    CHECK INDEX   CHECK INDEX PART   CHECK INDEX   CHECK INDEX PART
|                                           SHRLEVEL      SHRLEVEL           SHRLEVEL      SHRLEVEL
|                                           REFERENCE     REFERENCE          CHANGE        CHANGE
| Table space or partition                  DW/UTRO       DW/UTRO            DW/UTRW       DW/UTRW
| Partitioning index or index partition     DW/UTRO       DW/UTRO            DW/UTRW       DW/UTRW
| Secondary index (note 1)                  DW/UTRO       none               DW/UTRW       DW/UTRW
| Data-partitioned secondary index or       DW/UTRO       DW/UTRO            DW/UTRW       DW/UTRW
|   index partition (note 2)
| Logical partition of an index             none          DW/UTRO            DW/UTRW       DW/UTRW
| Legend:
| v DW: Drain the write claim class, concurrent access for SQL readers
| v UTRO: Utility restrictive state, read only-access allowed
| v UTRW: Utility restrictive state, read and write access allowed
| v none: Object not affected by this utility
| Notes:
| 1. Includes document ID indexes and node ID indexes over non-partitioned XML table
| spaces and XML indexes.
| 2. Includes document ID indexes and node ID indexes over partitioned XML table spaces.
|
CHECK INDEX does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
| CHECK INDEX of an XML index cannot run if REBUILD INDEX, REORG INDEX,
| or RECOVER is being run on that index because CHECK INDEX needs access to
| the node ID index. CHECK INDEX SHRLEVEL CHANGE cannot run two jobs
| concurrently for two different indexes that are in the same table space or partition
| because the snapshot shadow will have a conflicting name for the table space.
Compatibility: Table 14 on page 99 shows which utilities can run concurrently with
CHECK INDEX on the same target object. The first column lists the other utility
and the second column lists whether or not that utility is compatible with CHECK
INDEX. The target object can be a table space, an index space, or an index.
Example 2: Checking one index. The following control statement specifies that the
CHECK INDEX utility is to check the project-number index (DSN8910.XPROJ1) on
the sample project table. SORTDEVT SYSDA specifies that SYSDA is the device
type for temporary data sets that are to be dynamically allocated by DFSORT.
CHECK INDEX (DSN8910.XPROJ1)
SORTDEVT SYSDA
Example 3: Checking more than one index. The following control statement
specifies that the CHECK INDEX utility is to check the indexes
DSN8910.XEMPRAC1 and DSN8910.XEMPRAC2 on the employee-to-project-
activity sample table.
CHECK INDEX NAME (DSN8910.XEMPRAC1, DSN8910.XEMPRAC2)
Figure 18. CHECK INDEX output from a job that checks the third partition of all indexes.
| Example 6: Checking all specified indexes on clone tables. The following control
| statement specifies that the CHECK INDEX utility is to check all specified indexes
| that are on clone tables.
| CHECK INDEX (ALL) TABLESPACE DBLOB01.TSLOBC4 CLONE
For a diagram of CHECK LOB syntax and a description of available options, see
“Syntax and options of the CHECK LOB control statement” on page 104. For
detailed guidance on running this utility, see “Instructions for running CHECK
LOB” on page 107.
| Output: After successful execution, CHECK LOB SHRLEVEL REFERENCE resets the
| CHECK-pending (CHKP) and auxiliary-warning (AUXW) statuses. CHECK LOB
| SHRLEVEL CHANGE does not reset the CHECK-pending (CHKP)
| and auxiliary-warning (AUXW) statuses.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
| If you are using SHRLEVEL CHANGE, the batch user ID that invokes COPY with
| the CONCURRENT option must provide the necessary authority to execute the
| DFDSS ADRDSSU command. DFDSS will create a shadow data set with the
| authority of the utility batch address space. The submitter should have RACF
| ALTER authority, or its equivalent, for the shadow data set.
Syntax diagram
CHECK LOB lob-table-space-spec
   SHRLEVEL REFERENCE | SHRLEVEL CHANGE   drain-spec
   EXCEPTIONS integer   (the default is EXCEPTIONS 0)
   PUNCHDDN ddname   (the default is SYSPUNCH)
   SORTDEVT device-type   SORTNUM integer
lob-table-space-spec:
   TABLESPACE database-name.lob-table-space-name   CLONE
drain-spec:
   DRAIN_WAIT integer   RETRY integer   RETRY_DELAY integer
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LOB Indicates that you are checking a LOB table space for defects.
TABLESPACE database-name.lob-table-space-name
Specifies the table space to which the data belongs.
database-name is the name of the database and is optional. The
default is DSNDB04.
lob-table-space-name is the name of the LOB table space.
| SHRLEVEL Indicates the type of access that is to be allowed for the index,
| table space, or partition that is to be checked during CHECK LOB
| processing.
| REFERENCE
| Specifies that applications can read from but cannot write to
| the index, table space, or partition that is to be checked. The
| default is REFERENCE.
| CHANGE
| Specifies that applications can read from and write to the table
| space that is to be checked.
| DRAIN_WAIT
| Specifies the number of seconds that CHECK LOB is to wait when
| draining the table space or index. The specified time is the
| aggregate time for objects that are to be checked. This value
| overrides the values that are specified by the IRLMRWT and
| UTIMOUT subsystem parameters.
| integer can be any integer from 0 to 1800. If you do not specify
| DRAIN_WAIT or specify a value of 0, CHECK LOB uses the value
| of the lock timeout subsystem parameter IRLMRWT.
| RETRY integer Specifies the maximum number of retries that CHECK LOB is to
| attempt.
| integer can be any integer from 0 to 255. If you do not specify
| RETRY, CHECK LOB uses the value of the utility multiplier system
| parameter UTIMOUT.
| Specifying RETRY can increase processing costs and result in
| multiple or extended periods during which the specified index,
| table space, or partition is in read-only access.
| RETRY_DELAY integer
| Specifies the minimum duration, in seconds, between retries.
| integer can be any integer from 1 to 1800.
| If you do not specify RETRY_DELAY, CHECK LOB uses the
| smaller of the following two values:
| v DRAIN_WAIT value × RETRY value
| v DRAIN_WAIT value × 10
EXCEPTIONS integer
Specifies the maximum number of exceptions, which are reported
by messages only. CHECK LOB terminates in the CHECKLOB
phase when it reaches the specified number of exceptions.
All defects that are reported by messages are applied to the
exception count.
integer is the maximum number of exceptions. The default is 0,
which indicates no limit on the number of exceptions.
| PUNCHDDN ddname
| Specifies the DD statement for a data set that is to receive the
| REPAIR utility control statements that CHECK LOB SHRLEVEL
| REFERENCE generates. The REPAIR statements generated will
| delete the LOBs reported in error messages from the LOB table
| space. CHECK DATA should then be run against the base table
| space to set the deleted LOB columns in the base records to
| invalid.
| ddname is the DD name. The default is SYSPUNCH.
| The PUNCHDDN keyword specifies either a DD name or a
| TEMPLATE name specification from a previous TEMPLATE control
| statement. If utility processing detects that the specified name is
| both a name in the current job step and a TEMPLATE name, the
| utility uses the DD name.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be
dynamically allocated by DFSORT.
A TEMPLATE specification does not dynamically allocate sort
work data sets. The SORTDEVT keyword controls dynamic
allocation of these data sets.
device-type is the device type and can be any device type that is
acceptable to the DYNALLOC parameter of the SORT or OPTION
control statement for DFSORT, as described in DFSORT Application
Programming: Guide.
If you omit SORTDEVT and a sort is required, you must provide
the DD statements that the sort program requires for the
temporary data sets.
SORTNUM integer
Indicates the number of temporary data sets that are to be
dynamically allocated by the sort program.
integer is the number of temporary data sets that can range from 2
to 255.
If you omit SORTDEVT, SORTNUM is ignored. If you use
SORTDEVT and omit SORTNUM, no value is passed to DFSORT,
which then uses its own default.
| You need at least two sort work data sets for each sort. The
| SORTNUM value applies to each sort invocation in the utility. For
| example, if there are three indexes, SORTKEYS is specified, there are
| no constraints that limit parallelism, and SORTNUM is specified as 8,
| a total of 24 sort work data sets are allocated for the job.
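Putting these options together, a sketch of a CHECK LOB control statement might look like the following; the LOB table space name DSN8D91L.DSN8S91M is an assumption.
CHECK LOB TABLESPACE DSN8D91L.DSN8S91M
  SHRLEVEL REFERENCE
  EXCEPTIONS 3
  PUNCHDDN SYSPUNCH
  SORTDEVT SYSDA SORTNUM 4
EXCEPTIONS 3 ends the CHECKLOB phase after three reported defects, and PUNCHDDN SYSPUNCH names the data set that receives the generated REPAIR statements.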
The following object is named in the utility control statement and does not require
DD statements in the JCL:
Table space
Object that is to be checked.
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
| Shadow data set names: Each shadow data set must have the following name:
| catname.DSNDBx.dbname.psname.y000z.Lnnn
| To determine the names of existing shadow data sets, execute one of the following
| queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
| SELECT DBNAME, TSNAME, IPREFIX
| FROM SYSIBM.SYSTABLEPART
| WHERE DBNAME = ’dbname’
| AND TSNAME = ’psname’;
| For a partitioned table space, DB2 returns rows from which you select the row for
| the partitions that you want to check.
| Defining shadow data sets: Consider the following actions when you preallocate
| the data sets:
| v Allocate the shadow data sets according to the rules for user-managed data sets.
| v Define the shadow data sets as LINEAR.
| v Use SHAREOPTIONS(3,3).
| v Define the shadow data sets as EA-enabled if the original table space or index
| space is EA-enabled.
| v Allocate the shadow data sets on the volumes that are defined in the storage
| group for the original table space or index space.
| If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
| the SECQTY value for the table space or index space.
| Recommendation: Use the MODEL option, which causes the new shadow data set
| to be created like the original data set. This method is shown in the following
| example:
| DEFINE CLUSTER +
| (NAME(’catname.DSNDBC.dbname.psname.x000z.L001’) +
| MODEL(’catname.DSNDBC.dbname.psname.y000z.L001’)) +
| DATA +
| (NAME(’catname.DSNDBD.dbname.psname.x000z.L001’) +
| MODEL(’catname.DSNDBD.dbname.psname.y000z.L001’) )
| Creating shadow data sets for indexes: When you preallocate shadow data sets for
| indexes, create the data sets as follows:
| v Create shadow data sets for the partition of the table space and the
| corresponding partition in each partitioning index and data-partitioned
| secondary index.
| v Create a shadow data set for logical partitions of nonpartitioned secondary
| indexes.
| Use the same naming scheme for these index data sets as you use for other data
| sets that are associated with the base index, except use J0001 instead of I0001. For
| more information about this naming scheme, see the information about the shadow
| data set naming convention at the beginning of this section, “Shadow data sets” on
| page 91.
| Estimating the size of shadow data sets: If you have not changed the value of
| FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
| comparable to the amount of required space for the original data set.
Beginning in Version 8, the CHECK LOB utility does not require SYSUT1 and
SORTOUT data sets. Work records are written to and processed from an
asynchronous SORT phase. The WORKDDN keyword, which provided the DD
names of the SYSUT1 and SORTOUT data sets in earlier versions of DB2, is not
needed and is ignored. You do not need to modify existing control statements to
remove the WORKDDN keyword.
Contact IBM Software Support for assistance with diagnosing and resolving the
problem.
Use the REPAIR utility with care, as improper use can further damage the data. If
necessary, contact IBM Software Support for guidance on using the REPAIR utility.
Claims and drains: Table 17 shows which claim classes CHECK LOB claims and
drains and any restrictive state that the utility sets on the target object.
Table 17. Claim classes for CHECK LOB operations on a LOB table space and index on the
auxiliary table
Target objects                    CHECK LOB               CHECK LOB
                                  SHRLEVEL REFERENCE      SHRLEVEL CHANGE
LOB table space                   DW/UTRO                 CR/UTRW
Index on the auxiliary table      DW/UTRO                 CR/UTRW
Legend:
| v CR: Claim the read claim class
v DW: Drain the write claim class, concurrent access for SQL readers
v UTRO: Utility restrictive state, read-only access allowed
| v UTRW: Utility restrictive state, read and write access allowed
Compatibility: Any SQL operation or other online utility that attempts to update
the same LOB table space is incompatible.
| Example 2: Checking the LOB space data for a clone table. The following control
| statement specifies that the CHECK LOB utility is to check the LOB space data for
| only the clone table, not the LOB data for the base table. The EXCEPTIONS 0
| option indicates that there is no limit on the number of exceptions. The
| Example 3: Checking the LOB table space data. The following control statement
| specifies that the CHECK LOB utility is to check the LOB table space data with the
| SHRLEVEL CHANGE option, which specifies that the application can read from
| and write to the table space that is to be checked.
| //STEP2 EXEC DSNUPROC,
| // UTPROC=’’,SYSTEM=’SSTR’,
| // UID=’CHKLOB12.STEP2’
| //*SYSPUNCH DD DSN=PUNCHS,DISP=(NEW,DELETE,DELETE),UNIT=SYSDA,
| //* SPACE=(CYL,(1,1)),VOL=SER=SCR03
| //SYSPRINT DD SYSOUT=*
| //UTPRINT DD DUMMY
| //SYSIN DD *
| CHECK LOB TABLESPACE
| DABA12.TSL12
| SHRLEVEL CHANGE
| EXCEPTIONS 5
| /*
The RECOVER utility uses these copies when recovering a table space or index
space to the most recent time or to a previous time. Copies can also be used by the
MERGECOPY, RECOVER, COPYTOCOPY, and UNLOAD utilities.
You can copy a list of objects in parallel to improve performance. Specifying a list
of objects along with the SHRLEVEL REFERENCE option creates a single recovery
point for that list of objects. Specifying the PARALLEL keyword allows you to
copy a list of objects in parallel, rather than serially.
To calculate the number of threads you need when you specify the PARALLEL
keyword, use the formula (n * 2 + 1), where n is the number of objects that are to
be processed in parallel, regardless of the total number of objects in the list. If you
do not use the PARALLEL keyword, n is one and COPY uses three threads for a
single-object COPY job.
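For example, the following sketch copies three table spaces in parallel; the second and third table space names and the DD or TEMPLATE names T1, T2, and T3 are assumptions. With n = 3 objects processed in parallel, COPY uses 3 × 2 + 1 = 7 threads; without the PARALLEL keyword, n is 1 and COPY uses 3 threads.
COPY TABLESPACE DSN8D91A.DSN8S91E COPYDDN(T1)
     TABLESPACE DSN8D91A.DSN8S91D COPYDDN(T2)
     TABLESPACE DSN8D91A.DSN8S91R COPYDDN(T3)
     PARALLEL (3)
     SHRLEVEL REFERENCE
Because SHRLEVEL REFERENCE is specified for the list, the three copies share a single recovery point.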
For a diagram of COPY syntax and a description of available options, see “Syntax
and options of the COPY control statement” on page 114. For detailed guidance on
running this utility, see “Instructions for running COPY” on page 125.
The COPY-pending status is set off for table spaces if the copy was a full image
copy. However, DB2 does not reset the COPY-pending status if you copy a single
piece of a multi-piece linear data set. If you copy a single table space partition,
DB2 resets the COPY-pending status only for the copied partition and not for the
whole table space. DB2 resets the informational COPY-pending (ICOPY) status
| after you copy an index space or index. The COPY utility will reset
| ICOPY-pending status for not logged table spaces.
Related information: See Part 4 of DB2 Administration Guide for uses of COPY in
the context of planning for database recovery. For information about creating inline
copies during LOAD, see “Using inline COPY with LOAD” on page 269. You can
also create inline copies during REORG; see “Using inline copy with REORG
TABLESPACE” on page 503 for more information.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute COPY, but only on a
table space in the DSNDB01 or DSNDB06 database.
The batch user ID that invokes COPY with the CONCURRENT option must
provide the necessary authority to execute the DFDSS DUMP command.
Syntax diagram
Notes:
1 Use the copy-spec if you do not want to use the CONCURRENT option.
2 Use the concurrent-spec if you want to use the CONCURRENT option, but not the FILTERDDN
option.
3 Use the filterddn spec if you want to use the CONCURRENT and FILTERDDN options.
copy-spec:
   LIST listdef-name   data-set-spec
   FULL YES | FULL NO   changelimit-spec   CHECKPAGE
   PARALLEL (num-objects)   TAPEUNITS ( num-tape-units )
   SYSTEMPAGES YES | SYSTEMPAGES NO
Notes:
1 Not valid for nonpartitioning indexes.
concurrent-spec:
   table-space-spec | index-name-spec (1)   DSNUM ALL | DSNUM integer   data-set-spec
Notes:
1 Not valid for nonpartitioning indexes.
filterddn-spec:
   table-space-spec | index-name-spec (1)   DSNUM ALL | DSNUM integer
Notes:
1 Not valid for nonpartitioning indexes.
data-set-spec:
   COPYDDN ( ddname1 ,ddname2 )  (1)   RECOVERYDDN ( ddname3 ,ddname4 )
Notes:
1 COPYDDN SYSCOPY is the default for the primary copy, but this default can only be used for
one object in the list.
changelimit-spec:
   CHANGELIMIT ( percent_value1 ,percent_value2 )   REPORTONLY
table-space-spec:
   TABLESPACE database-name.table-space-name
index-name-spec:
   INDEXSPACE database-name.index-space-name  (1)
   INDEX creator-id.index-name
Notes:
1 INDEXSPACE is the preferred specification.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name.
You can specify one LIST keyword for each COPY control statement.
Do not specify LIST with either the INDEX or the TABLESPACE
keyword. DB2 invokes COPY once for the entire list. This utility
will only process clone data if the CLONE keyword is specified.
The use of CLONED YES on the LISTDEF statement is not
sufficient. For more information about LISTDEF specifications, see
Chapter 15, “LISTDEF,” on page 185.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database it belongs
to) that is to be copied.
database-name is the name of the database that the table space
belongs to. The default is DSNDB04.
table-space-name is the name of the table space to be copied.
Specify the DSNDB01.SYSUTILX, DSNDB06.SYSCOPY, or
DSNDB01.SYSLGRNX table space by itself in a single COPY
statement. Alternatively, specify the DSNDB01.SYSUTILX,
DSNDB06.SYSCOPY, or DSNDB01.SYSLGRNX table space with
indexes over the table space that were defined with the COPY YES
attribute.
| CLONE Indicates that COPY is to copy only clone table or index data. This
| utility will only process clone data if the CLONE keyword is
| specified. The use of CLONED YES on the LISTDEF statement is
| not sufficient.
In this format:
catname Is the ICF catalog name or alias.
x Is C (for VSAM clusters) or D (for VSAM
data components).
dbname Is the database name.
spacename Is the table space or index space name.
y Is I or J, which indicates the data set name
used by REORG with FASTSWITCH.
| z Is 1 or 2.
nnn Is the data set integer.
| PENDING
| Indicates that you want to copy only those objects in
| COPY-pending or informational COPY-pending status. When
| the DSNUM ALL option is specified for partitioned objects,
| and one or more of the partitions are in COPY-pending or
| informational COPY-pending status, a copy will be taken of the
| entire table space or index space.
| For partitioned objects, if you only want the partitions in
| COPY-pending status or informational COPY-pending status to
| be copied, then a list of partitions should be specified. This is
| done by invoking COPY on a LISTDEF list built with the
| PARTLEVEL option. An output image copy data set will be
| created for each partition that is in COPY-pending or
| informational COPY-pending status.
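A sketch of that approach follows, assuming that the PENDING value described above is specified with the SCOPE keyword; the table space name and the name COPYTMPL (an assumed TEMPLATE or DD name for the output image copies) are illustrative only.
LISTDEF PENDLIST INCLUDE TABLESPACE DSN8D91A.DSN8S91E PARTLEVEL
COPY LIST PENDLIST
     COPYDDN(COPYTMPL)
     SCOPE PENDING
     SHRLEVEL REFERENCE
The PARTLEVEL option expands the list to one entry per partition, so COPY produces an output image copy data set only for the partitions that are in COPY-pending or informational COPY-pending status.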
Notes:
1. Required if you specify CONCURRENT and the SYSPRINT DD statement points to a
data set.
2. Required if you specify the FILTERDDN option.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or index space
Object that is to be copied. (If you want to copy only certain data sets in a
table space, you must use the DSNUM option in the control statement.)
DB2 catalog objects
Objects in the catalog that COPY accesses. The utility records each copy in
the DB2 catalog table SYSIBM.SYSCOPY.
Output data set size: Image copies are written to sequential non-VSAM data sets.
Recommendation: Use a template for the image copy data set by specifying a
TEMPLATE statement without the SPACE keyword. When you omit this keyword,
the utility calculates the appropriate size of the data set for you.
Alternatively, you can find the approximate size of the image copy data set for a
table space, in bytes, by either executing COPY with the CHANGELIMIT
REPORTONLY option, or using the following procedure:
1. Find the high-allocated page number, either from the NACTIVEF column of
SYSIBM.SYSTABLESPACE after running the RUNSTATS utility, or from
information in the VSAM catalog data set.
2. Multiply the high-allocated page number by the page size.
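As a purely illustrative calculation, if the high-allocated page number is 10 000 and the page size is 4 KB, the image copy data set needs roughly 10 000 × 4096 bytes, or about 40 MB.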
Recommendation: Use a template for the filter data set by specifying a TEMPLATE
statement without the SPACE keyword. When you omit this keyword, the utility
calculates the appropriate size of the data set for you.
Alternatively, you can determine the approximate size of the filter data set size that
is required, in bytes, by using the following formula, where n = the number of
specified objects in the COPY control statement:
(240 + (80 × n))
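For example, for a COPY control statement that names 10 objects, the filter data set needs approximately 240 + (80 × 10) = 1040 bytes.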
JCL parameters: You can specify a block size for the output by using the BLKSIZE
parameter on the DD statement for the output data set. Valid block sizes are
multiples of 4096 bytes. You can increase the number of buffers by using the BUFNO
parameter; for example, you might specify BUFNO=30, which creates 30 buffers.
See also “Data sets that online utilities use” on page 19 for information about using
BUFNO.
Cataloging image copies: To catalog your image copy data sets, use the
DISP=(MOD,CATLG,CATLG) parameter in the DD statement or TEMPLATE that is
named by the COPYDDN option. After the image copy is taken, the DSVOLSER
column of the row that is inserted into SYSIBM.SYSCOPY contains blanks.
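The following sketch combines the two recommendations above, a TEMPLATE without the SPACE keyword and DISP (MOD,CATLG,CATLG); the data set name pattern and its PROD qualifier are assumptions that you would adjust for your site.
TEMPLATE LOCALCOPY
  DSN(PROD.&DB..&TS..D&DATE..T&TIME.)
  UNIT SYSDA
  DISP (MOD,CATLG,CATLG)
COPY TABLESPACE DSN8D91A.DSN8S91E
  COPYDDN(LOCALCOPY)
  SHRLEVEL REFERENCE
Because the TEMPLATE omits SPACE, the utility calculates the size of the image copy data set for you, and the MOD,CATLG,CATLG disposition catalogs the data set.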
Duplicate image copy data sets are not allowed. If a cataloged data set is already
recorded in SYSIBM.SYSCOPY with the same name as the new image copy data
set, the COPY utility issues a message and does not make the copy.
When RECOVER locates the SYSCOPY entry, it uses the operating system catalog
to allocate the required data set. If you have uncataloged the data set, the
allocation fails. In that case, the recovery can still go forward; RECOVER searches
for a previous image copy. But even if it finds one, RECOVER must use
correspondingly more of the log during recovery.
Recommendation: Keep the ICF catalog consistent with the information about
existing image copy data sets in the SYSIBM.SYSCOPY catalog table.
The following statement specifies that the COPY utility is to make a full image
copy of the DSN8S91E table space in database DSN8D91A:
COPY TABLESPACE DSN8D91A.DSN8S91E
The COPY utility writes pages from the table space or index space to the output
data sets. The JCL for the utility job must include DD statements or have a
template specification for the data sets. If the object consists of multiple data sets
and all are copied in one run, the copies reside in one physical sequential output
data set.
Image copies should be made either by entire page set or by partition, but not by
both.
Recommendations:
v Take a full image copy after any of the following operations:
– CREATE or LOAD operations for a new object that is populated.
– REORG operation for an existing object.
– LOAD RESUME of an existing object.
| – ALTER of a table space from NOT LOGGED to LOGGED.
v Copy the indexes over a table space whenever a full copy of the table space is
taken. More frequent index copies decrease the number of log records that need
to be applied during recovery. At a minimum, you should copy an index when
it is placed in informational COPY-pending (ICOPY) status. For more
information about the ICOPY status, see Appendix C, “Advisory or restrictive
states,” on page 895.
If you create an inline copy during LOAD or REORG, you do not need to execute
a separate COPY job for the table space. If you do not create an inline copy, and if
the LOG option is NO, the COPY-pending status is set for the table space. You
must then make a full image copy for any subsequent recovery of the data. An
incremental image copy is not allowed in this case. If the LOG option is YES, the
COPY-pending status is not set. However, your next image copy must be a full
image copy. Again, an incremental image copy is not allowed.
The COPY utility automatically takes a full image copy of a table space if you
attempt to take an incremental image copy when it is not allowed.
| If a table space changes after an image copy is taken and before the table space is
| altered from NOT LOGGED to LOGGED, the table space is marked
| COPY-pending, and a full image copy must be taken.
simultaneously; therefore, defer copying the catalog table or directories until the
other copy jobs have completed if possible. However, if you must copy other
objects while another COPY job processes catalog tables or directories, specify
SHRLEVEL (CHANGE) for the copies of the catalog and directory tables.
Copy by partition or data set: You can make an incremental image copy by
partition or data set (specified by DSNUM) in the following situations:
v A full image copy of the table space exists.
v A full image copy of the same partition or data set exists and the COPY-pending
status is not on for the table space or partition.
In addition, the full image copy must have been made after the most recent use of
CREATE, REORG or LOAD, or it must be an inline copy that was made during the
most recent use of LOAD or REORG.
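For example, assuming that these conditions are met and that a full image copy of
partition 3 of the sample table space already exists (the partition number is only
illustrative), the following statement takes an incremental image copy of that partition:
COPY TABLESPACE DSN8D91A.DSN8S91E
DSNUM 3
FULL NO
SHRLEVEL REFERENCE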
with the option RECOVERYSITE). All copies are identical, and all are produced at
the same time from one invocation of COPY. Alternatively you can use
COPYTOCOPY to create the needed image copies. See Chapter 12,
“COPYTOCOPY,” on page 155 for more information.
Remote-site recovery: For remote site recovery, DB2 assumes that the system and
application libraries and the DB2 catalog and directory are identical at the local site
and recovery site. You can regularly transport copies of archive logs and database
data sets to a safe location to keep the data for remote-site recovery current.
This information can be kept on tape until needed.
Naming the data sets for the copies: The COPYDDN option of COPY names the
output data sets that receive copies for local use. The RECOVERYDDN option
names the output data sets that receive copies that are intended for remote-site
recovery. The options have the following formats:
COPYDDN (ddname1,ddname2)
RECOVERYDDN (ddname3,ddname4)
The DD names for the primary output data sets are ddname1 and ddname3. The
ddnames for the backup output data sets are ddname2 and ddname4.
Sample control statement: The following statement makes four full image copies of
the table space DSN8S91E in database DSN8D91A. The statement uses LOCALDD1
and LOCALDD2 as DD names for the primary and backup copies that are used on
the local system and RECOVDD1 and RECOVDD2 as DD names for the primary
and backup copies for remote-site recovery:
COPY TABLESPACE DSN8D91A.DSN8S91E
COPYDDN (LOCALDD1,LOCALDD2)
RECOVERYDDN (RECOVDD1,RECOVDD2)
You do not need to make copies for local use and for remote-site recovery at the
same time. COPY allows you to use either the COPYDDN or the RECOVERYDDN
option without the other. If you make copies for local use more often than copies
for remote-site recovery, a remote-site recovery could be performed with an older
copy, and more of the log, than a local recovery; hence, the recovery would take
longer. However, in your plans for remote-site recovery, that difference might be
acceptable. You can also use MERGECOPY RECOVERYDDN to create recovery-site
full image copies, and merge local incremental copies into new recovery-site full
copies.
Conditions for making multiple incremental image copies: DB2 cannot make
incremental image copies if any of the following conditions is true:
v The incremental image copy is requested only for a site other than the current
site (the local site from which the request is made).
v Incremental image copies are requested for both sites, but the most recent full
image copy was made for only one site.
v Incremental image copies are requested for both sites and the most recent full
image copies were made for both sites, but between the most recent full image
copy and current request, incremental image copies were made for the current
site only.
If you attempt to make incremental image copies under any of these conditions,
COPY terminates with return code 8, does not take the image copy or update the
SYSIBM.SYSCOPY table, and issues the following message:
DSNU404I csect-name
LOCAL SITE AND RECOVERY SITE INCREMENTAL
IMAGE COPIES ARE NOT SYNCHRONIZED
To proceed, and still keep the two sets of data synchronized, take another full
image copy of the table space for both sites, or change your request to make an
incremental image copy only for the site at which you are working.
DB2 cannot make an incremental image copy if the object that is being copied is an
index or index space.
Maintaining copy consistency: Make full image copies for both the local and
recovery sites:
v If a table space is in COPY-pending status
v After a LOAD or REORG procedure that did not create an inline copy
v If an index is in the informational COPY-pending status
| v If a table space is in informational COPY-pending status
This action helps to ensure correct recovery for both local and recovery sites. If the
requested full image copy is for one site only, but the history shows that copies
were made previously for both sites, COPY continues to process the image copy
and issues the following warning message:
DSNU406I FULL IMAGE COPY SHOULD BE TAKEN FOR BOTH LOCAL SITE AND
RECOVERY SITE.
The COPY-pending status of a table space is not changed for the other site when
you make multiple image copies at the current site for that other site. For example,
if a table space is in COPY-pending status at the current site, and you make copies
from there for the other site only, the COPY-pending status is still on when you
bring up the system at that other site.
When you specify the PARALLEL keyword, DB2 supports parallelism for image
copies on disk or tape devices. You can control the number of tape devices to
allocate for the copy function by using TAPEUNITS with the PARALLEL keyword.
If you use JCL statements to define tape devices, the JCL controls the allocation of
the devices.
When you explicitly specify objects with the PARALLEL keyword, the objects are
not necessarily processed in the specified order. Objects that are to be written to
tape and whose file sequence numbers have been specified in the JCL are
processed in the specified order. If templates are used, you cannot specify file
sequence numbers. In the absence of overriding JCL specifications, DB2 determines
the placement and, thus, the order of processing for such objects. When only
templates are used, objects are processed according to their size, with the largest
objects processed first.
To calculate the number of threads that you need when you specify the PARALLEL
keyword, use the formula (n * 2 + 1), where n is the number of objects that are to
be processed in parallel, regardless of the total number of objects in the list. If you
do not use the PARALLEL keyword, n is 1 and COPY uses three threads for a
single-object COPY job.
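For example, if four objects are to be processed in parallel (n = 4), COPY uses:
(4 × 2 + 1) = 9 threads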
| COPY SCOPE PENDING indicates that you want to copy only those objects in
| COPY-pending or informational COPY-pending status. When the DSNUM ALL
| option is specified for partitioned objects, and one or more of the partitions are in
| COPY-pending or informational COPY-pending status, a copy will be taken of the
| entire table space or index space.
| For partitioned objects, if you only want the partitions in COPY-pending status or
| informational COPY-pending status to be copied, then a list of partitions should be
| specified. It is recommended that you do this by invoking COPY on a LISTDEF list
| built with the PARTLEVEL option. An output image copy data set will be created
| for each partition that is in COPY-pending or informational COPY-pending status.
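For example, the following sketch (the list, template, and data set name pattern are
only illustrative) builds a PARTLEVEL list for the sample table space and copies only
the partitions that are in COPY-pending or informational COPY-pending status:
LISTDEF PENDLIST INCLUDE TABLESPACE DSN8D91A.DSN8S91E PARTLEVEL
TEMPLATE PENDTMPL DSN(IMAGCOPY.&DB..&TS..P&PART..D&DATE.)
UNIT(SYSDA) DISP(NEW,CATLG,CATLG)
COPY LIST PENDLIST
COPYDDN(PENDTMPL)
SCOPE PENDING
SHRLEVEL REFERENCE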
| The LIMIT option on the TEMPLATE statement allows you to switch templates for
| output copy data sets. Template switching is most commonly needed to direct
| small data sets to DASD and large data sets to TAPE. This allows you to switch to
| templates that differ in the UNIT, DSNs, or HSM classes. See Chapter 31,
| “TEMPLATE,” on page 641 for more information about the TEMPLATE statement.
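For example, a pair of templates such as the following (the template names, data set
name pattern, and the 20 MB limit are only illustrative) sends small image copies to
DASD and switches to a tape template for larger ones:
TEMPLATE LARGET DSN(IMAGCOPY.&DB..&TS..D&DATE..T&TIME.)
UNIT CART STACK YES
TEMPLATE SMALLT DSN(IMAGCOPY.&DB..&TS..D&DATE..T&TIME.)
UNIT(SYSDA) LIMIT(20 MB,LARGET)
COPY TABLESPACE DSN8D91A.DSN8S91E COPYDDN(SMALLT)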
The following table spaces cannot be included in a list of table spaces. You must
specify each one as a single object:
v DSNDB01.SYSUTILX
v DSNDB06.SYSCOPY
v DSNDB01.SYSLGRNX
The only exceptions to this restriction are the indexes over these table spaces that
were defined with the COPY YES attribute. You can specify such indexes along
with the appropriate table space.
If a job step that contains more than one COPY statement abends, do not use
TERM UTILITY. Restart the job from the last commit point by using RESTART
instead. Terminating COPY by using TERM UTILITY in this case creates
inconsistencies between the ICF catalog and DB2 catalogs.
If a nonpartitioned table space consists of more than one data set, you can copy
several or all of the data sets independently in separate jobs. To do so, run
simultaneous COPY jobs (one job for each data set) and specify SHRLEVEL
CHANGE on each job.
However, creating copies simultaneously does not provide you with a consistent
recovery point unless you subsequently run a QUIESCE for the table space.
| When you make an image copy of a partition-by-growth table space, a partition
| might be empty as a result of REORG, SQL delete operations, or recovery to a
| prior point in time. The empty partition still has a header page and space map
| pages or system pages, and the COPY utility copies the empty partition.
| If you copy a LOB table space that has a base table space with the NOT LOGGED
| attribute, copy the base table space and the LOB table space together so that a
| RECOVER TOLASTCOPY of the entire set results in consistent data across the base
| table space and all of the associated LOB table spaces.
| To copy an XML table space with a base table space that has the NOT LOGGED
| attribute, all associated XML table spaces must also have the NOT LOGGED
| attribute. The XML table space acquires this NOT LOGGED attribute by being
| linked to the logging attribute of its associated base table space. You cannot
| independently alter the logging attribute of an XML table space.
| If the LOG column of the SYSIBM.SYSTABLESPACE record for an XML table space
| has the value "X", the logging attributes of the XML table space and its base
| table space are linked, and the logging attribute of both table spaces is NOT
| LOGGED. To break the link, alter the logging attribute of the base table space back
| to LOGGED; the logging attribute of both table spaces is then changed back to
| LOGGED.
| Copying indexes
| If you copy a COPY YES index of a table space that has the NOT LOGGED
| attribute, copy the indexes and table spaces together to ensure that the indexes and
| the table space have the same recoverable point.
| When the index has the COMPRESS YES attribute, concurrent copies of indexes are
| compressed because DFSMSdss is invoked to copy the VSAM linear data sets
| (LDS) for the index. Image copies of indexes are not compressed because the index
| pages are copied from the DB2 buffer pool. When image copies are taken without
| the concurrent option, you can choose to compress the image copies by using
| access method compression via DFSMS or by using IDRC if the image copies
| reside on tape.
REFERENCE option. If the page size does not match the control interval, you
must use the SHRLEVEL REFERENCE option for table spaces with an 8-KB,
16-KB, or 32-KB page size.
Restrictions on using DFSMSdss concurrent copy: You cannot use a copy that is
made with DFSMSdss concurrent copy with the PAGE or ERROR RANGE options
of the RECOVER utility. If you specify PAGE or ERROR RANGE, RECOVER
bypasses any concurrent copy records when searching the SYSIBM.SYSCOPY table
for a recovery point.
You can use the CONCURRENT option with SHRLEVEL CHANGE on a table
space if the page size in the table space matches the control interval for the
associated data set.
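For example, for a table space whose 4-KB page size matches the control interval size
of its data sets, a statement such as the following sketch uses DFSMSdss concurrent
copy while allowing updates during the copy:
COPY TABLESPACE DSN8D91A.DSN8S91E
COPYDDN(SYSCOPY)
CONCURRENT
SHRLEVEL CHANGE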
Also, you cannot run the following DB2 stand-alone utilities on copies that are
made by DFSMSdss concurrent copy:
DSN1COMP
DSN1COPY
DSN1PRNT
You cannot execute the CONCURRENT option from the DB2I Utilities panel or
from the DSNU TSO CLIST command.
Table space availability: If you specify COPY SHRLEVEL REFERENCE with the
CONCURRENT option, and if you want to copy all of the data sets for a list of
table spaces to the same dump data set, specify FILTERDDN in your COPY
statement to improve table space availability. If you do not specify FILTERDDN,
COPY might force DFSMSdss to process the list of table spaces sequentially, which
might limit the availability of some of the table spaces that are being copied.
You cannot use the CHANGELIMIT option for a table space or partition that is
defined with TRACKMOD NO. If you change the TRACKMOD option from NO to
YES, you must take an image copy before you can use the CHANGELIMIT option.
When you change the TRACKMOD option from NO to YES for a linear table
space, you must take a full image copy by using DSNUM ALL before you can
copy using the CHANGELIMIT option.
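For example, after you alter the sample table space to TRACKMOD YES, a statement
such as the following sketch establishes the required full image copy; CHANGELIMIT
can then be used on later copies:
COPY TABLESPACE DSN8D91A.DSN8S91E
DSNUM ALL
FULL YES
SHRLEVEL REFERENCE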
Obtaining image copy information about a table space: When you specify COPY
CHANGELIMIT REPORTONLY, COPY reports image copy information for the
table space and recommends the type of copy, if any, to take. The report includes:
v The total number of pages in the table space. This value is the number of pages
that are to be copied if a full image copy is taken.
v The number of empty pages, if the table space is segmented.
v The number of changed pages. This value is the number of pages that are to be
copied if an incremental image copy is taken.
v The percentage of changed pages.
v The type of image copy that is recommended.
Adding conditional code to your COPY job: You can add conditional code to
your jobs so that an incremental or full image copy, or some other step, is
performed depending on how much the table space has changed. For example, you
can add a conditional MERGECOPY step to create a new full image copy if your
COPY job took an incremental copy, as shown in the sketch that follows the return
codes. COPY CHANGELIMIT uses the following
return codes to indicate the degree that a table space or list of table spaces has
changed:
1 (informational)
If no CHANGELIMIT was met.
2 (informational)
If the percentage of changed pages is greater than the low CHANGELIMIT
and less than the high CHANGELIMIT value.
3 (informational)
If the percentage of changed pages is greater than or equal to the high
CHANGELIMIT value.
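The following JCL fragment is a minimal sketch of such a conditional step; the step
names, utility IDs, and data set names are only illustrative, and a SYSUT1 work data
set is included for MERGECOPY. If the CHANGELIMIT copy step ends with return
code 2 (an incremental copy was taken), the MERGECOPY step creates a new full
image copy:
//COPYSTEP EXEC DSNUPROC,UID='SAMPJOB.COPY',UTPROC='',SYSTEM=DSN
//SYSCOPY  DD DSN=IMAGCOPY.DSN8D91A.DSN8S91E.INCR,UNIT=SYSDA,
//            SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN    DD *
  COPY TABLESPACE DSN8D91A.DSN8S91E CHANGELIMIT
/*
// IF (COPYSTEP.RC = 2) THEN
//MERGSTEP EXEC DSNUPROC,UID='SAMPJOB.MERGE',UTPROC='',SYSTEM=DSN
//SYSCOPY  DD DSN=IMAGCOPY.DSN8D91A.DSN8S91E.MERGED,UNIT=SYSDA,
//            SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(15,15))
//SYSIN    DD *
  MERGECOPY TABLESPACE DSN8D91A.DSN8S91E NEWCOPY YES
/*
// ENDIF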
If you specify multiple COPY control statements in one job step, that job step
reports the highest return code from all of the embedded statements. The
statement with the highest percentage of changed pages determines the return
code and the recommended action for the entire list of COPY control statements
that are contained in that job step.
Using conditional copy with generation data groups (GDGs): When you use
generation data groups (GDGs) and need to make an incremental image copy, take
the following steps to prevent creating an empty image copy (a sketch of the first
step follows the list):
1. Include in your job a first step in which you run COPY with CHANGELIMIT
REPORTONLY. Set the SYSCOPY DD statement to DD DUMMY so that no
output data set is allocated. If you specify REPORTONLY and use a template,
DB2 does not dynamically allocate the data set.
2. Add a conditional JCL statement to examine the return code from the COPY
CHANGELIMIT REPORTONLY step.
3. Add a second COPY step without CHANGELIMIT REPORTONLY to copy the
table space or table space list, based on the return code from the COPY
CHANGELIMIT REPORTONLY step.
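The first step might look like the following sketch (the step name and utility ID are
only illustrative); because SYSCOPY is a DD DUMMY statement, no image copy data
set, and therefore no empty GDG generation, is created:
//CHKSTEP  EXEC DSNUPROC,UID='SAMPJOB.GDGCHK',UTPROC='',SYSTEM=DSN
//SYSCOPY  DD DUMMY
//SYSIN    DD *
  COPY TABLESPACE DBLT2501.TPLT2501 CHANGELIMIT REPORTONLY
/*
A later COPY step, wrapped in a JCL IF test of CHKSTEP.RC, then allocates the (+1)
generation only when a copy is recommended.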
Even if you do not periodically merge multiple image copies into one copy (for
example, because you do not have enough tape units), RECOVER TABLESPACE can
still attempt to recover the object. RECOVER dynamically allocates the full image copy and
attempts to dynamically allocate all the incremental image copy data sets. If every
incremental copy can be allocated, recovery proceeds to merge pages to table
spaces and apply the log. If a point is reached where RECOVER TABLESPACE
cannot allocate an incremental copy, the log RBA of the last successfully allocated
data set is noted. Attempts to allocate incremental copies cease, and the merge
proceeds using only the allocated data sets. The log is applied from the noted RBA,
and the incremental image copies that were not allocated are simply ignored.
For LOB data, you should quiesce and copy both the base table space and the LOB
table space at the same time to establish a point of consistency, called a recovery
point. Be aware that QUIESCE does not create a recovery point for a LOB
table space that contains LOBs that are defined with LOG NO.
Setting and clearing the informational COPY-pending status: For an index that
was defined with the COPY YES attribute, the following utilities can place the
index in the informational COPY-pending (ICOPY) status:
v REORG INDEX
v REORG TABLESPACE LOG YES or NO
v LOAD TABLE LOG YES or NO
v REBUILD INDEX
After the utility processing completes, take a full image copy of the index space so
that the RECOVER utility can recover the index space. If you need to recover an
index of which you did not take a full image copy, use the REBUILD INDEX
utility to rebuild the index from data in the table space.
| Table spaces with the NOT LOGGED attribute that have been updated since the
| last full copy will be in informational COPY-pending status. To copy the table
| spaces that have been updated, run the COPY utility with the SCOPE PENDING
| option.
Improving performance
You can merge a full image copy and subsequent incremental image copies into a
new full copy by running the MERGECOPY utility. After reorganizing a table
space, the first image copy must be a full image copy.
Do not base the decision of whether to run a full image copy or an incremental
image copy on the number of rows that are updated since the last image copy was
taken. Instead, base your decision on the percentage of pages that contain at least
one updated record (not the number of updated records). Regardless of the size of
the table, if more than 50% of the pages contain updated records, use a full image
copy (this saves the cost of a subsequent MERGECOPY). To find the percentage of
changed pages, you can execute COPY with the CHANGELIMIT REPORTONLY
option. Alternatively, you can execute COPY CHANGELIMIT to allow COPY to
determine whether to take a full or incremental image copy.
Using DB2 data compression for table spaces can improve COPY performance
because COPY does not decompress data. The performance improvement is
proportional to the amount of compression.
Attention: Do not take incremental image copies when using generation data
groups unless data pages have changed. When you use generation data groups,
taking an incremental image copy when no data pages have changed causes the
following results:
v The new image copy data set is empty.
v No SYSCOPY record is inserted for the new image copy data set.
v Your oldest image copy is deleted.
See “Using conditional copy with generation data groups (GDGs)” on page 137 for
guidance on executing COPY with the CHANGELIMIT and REPORTONLY options
to ensure that you do not create empty image copy data sets when using GDGs.
If you plan to use SMS, catalog all image copies. Never maintain cataloged and
uncataloged image copies that have the same name.
Terminating COPY
This section explains the recommended way to terminate the COPY utility.
Recommendation: Do not stop a COPY job with the TERM UTILITY command. If
you issue TERM UTILITY while COPY is in the active or stopped state, DB2
inserts an ICTYPE=T record in the SYSIBM.SYSCOPY catalog table for each object
that COPY had started processing, but not yet completed. For copies that are made
with SHRLEVEL REFERENCE, some objects in the list might not have an
ICTYPE=T record. For SHRLEVEL CHANGE, some objects might have a valid
ICTYPE=F, I, or T record, or no record at all. The COPY utility does not allow you
to take an incremental image copy if an ICTYPE=T record exists. To reset the status
in this case, you must make a full image copy.
DB2 uses the same image copy data set when you RESTART from the last commit
point. Therefore, specify DISP=(MOD,CATLG,CATLG) on your DD statements. You
cannot use RESTART(PHASE) for any COPY job. If you do specify
RESTART(PHASE), the request is treated as if you specified RESTART, also known
as RESTART(CURRENT).
Restarting COPY
If you do not use the TERM UTILITY command, you can restart a COPY job.
COPY jobs with the CONCURRENT option restart from the beginning, and other
COPY jobs restart from the last commit point. You cannot use RESTART(PHASE)
for any COPY job. If you are restarting a COPY job with uncataloged output data
sets, you must specify the appropriate volumes for the job in the JCL or on the
TEMPLATE utility statement. Doing so could impact your ability to use implicit
restart. For general instructions on restarting a utility job, see “Restarting an online
utility” on page 39.
Restarting with a new data set: If you define a new output data set for a current
restart, complete the following actions before restarting the COPY job:
1. Copy the failed COPY output to the new data set.
2. Delete the old data set.
3. Rename the new data set to use the old data set name.
Restricted states: Do not copy a table space that is in any of the following states:
v CHECK-pending
v RECOVER-pending
v REFRESH-pending
v Logical error range
v Group buffer pool RECOVER-pending
v Stopped
v STOP-pending
Claims and drains: Table 19 shows which claim classes COPY claims and drains
and any restrictive status that the utility sets on the target object.
Table 19. Claim classes of COPY operations

Target                                     SHRLEVEL REFERENCE    SHRLEVEL CHANGE
Table space, index space, or partition     DW/UTRO               CR/UTRW (1)
Legend:
v DW - Drain the write claim class - concurrent access for SQL readers
v CR - Claim the read claim class
v UTRO - Utility restrictive state, read-only access allowed
v UTRW - Utility restrictive state, read-write access allowed
Notes:
1. If the target object is a segmented table space, SHRLEVEL CHANGE does not allow you
to concurrently execute an SQL DELETE without the WHERE clause.
COPY does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Compatibility: Table 20 documents which utilities can run concurrently with COPY
on the same target object. The target object can be a table space, an index space, or
a partition of a table space or index space. If compatibility depends on particular
options of a utility, that information is also documented in the table.
Table 20. Compatibility of COPY with other utilities

                                              COPY        COPY        COPY        COPY
                                              INDEXSPACE  INDEXSPACE  TABLESPACE  TABLESPACE
                                              SHRLEVEL    SHRLEVEL    SHRLEVEL    SHRLEVEL
Action                                        REFERENCE   CHANGE      REFERENCE   CHANGE
BACKUP SYSTEM                                 Yes         Yes         Yes         Yes
CHECK DATA                                    Yes         Yes         No          No
CHECK INDEX                                   Yes         Yes         Yes         Yes
CHECK LOB                                     Yes         Yes         Yes         Yes
COPY INDEXSPACE                               No          No          Yes         Yes
COPY TABLESPACE                               Yes         Yes         No          No
COPYTOCOPY                                    No          No          No          No
DIAGNOSE                                      Yes         Yes         Yes         Yes
LOAD                                          No          No          No          No
MERGECOPY                                     No          No          No          No
MODIFY                                        No          No          No          No
QUIESCE                                       Yes         No          Yes         No
REBUILD INDEX                                 No          No          Yes         Yes
RECOVER INDEX                                 No          No          Yes         Yes
RECOVER TABLESPACE                            Yes         Yes         No          No
REORG INDEX                                   No          No          Yes         Yes
REORG TABLESPACE UNLOAD CONTINUE or PAUSE     No          No          No          No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL      Yes         Yes         Yes         Yes
REPAIR LOCATE by KEY, RID, or PAGE
  DUMP or VERIFY                              Yes         Yes         Yes         Yes
REPAIR LOCATE by KEY or RID
  DELETE or REPLACE                           No          No          No          No
To run on DSNDB01.SYSUTILX, COPY must be the only utility in the job step.
Also, if SHRLEVEL REFERENCE is specified, the COPY job of
DSNDB01.SYSUTILX must be the only utility running in the Sysplex.
COPY on SYSUTILX is an “exclusive” job; such a job can interrupt another job
between job steps, possibly causing the interrupted job to time out.
Example 1: Making a full image copy. The following control statement specifies
that the COPY utility is to make a full image copy of table space
DSN8D91A.DSN8S91E. The copy is to be written to the data set that is defined by
the SYSCOPY DD statement in the JCL; SYSCOPY is the default.
//STEP1 EXEC DSNUPROC,UID='IUJMU111.COPYTS',
// UTPROC='',
// SYSTEM='DSN'
//SYSCOPY DD DSN=COPY001F.IFDY01,UNIT=SYSDA,VOL=SER=CPY01I,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN DD *
COPY TABLESPACE DSN8D91A.DSN8S91E
/*
Instead of defining the data sets in the JCL, you can use templates. In the
following example, the preceding job is modified to use a template. In this
example, the name of the template is LOCALDDN. The LOCALDDN template is
identified in the COPY statement by the COPYDDN option.
//STEP1 EXEC DSNUPROC,UID='IUJMU111.COPYTS',
// UTPROC='',
// SYSTEM='DSN'
//SYSIN DD *
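The control statements for the SYSIN data set are not shown in the preceding JCL. A
minimal sketch, assuming a template that allocates a data set like the one in the
previous example (the SPACE keyword is omitted so that the utility calculates the
size), might be:
TEMPLATE LOCALDDN UNIT(SYSDA) DSN(COPY001F.IFDY01)
DISP(NEW,CATLG,CATLG)
COPY TABLESPACE DSN8D91A.DSN8S91E COPYDDN(LOCALDDN)
/*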
Recommendation: When possible, use templates to allocate data sets. For more
information about templates, see Chapter 31, “TEMPLATE,” on page 641.
Example 2: Making full image copies for local site and recovery site. The following
COPY control statement specifies that COPY is to make primary and backup full
image copies of table space DSN8D91P.DSN8S91C at both the local site and the
recovery site. The COPYDDN option specifies the output data sets for the local
site, and the RECOVERYDDN option specifies the output data sets for the recovery
site. The PARALLEL option indicates that up to 2 objects are to be processed in
parallel.
The OPTIONS statement at the beginning indicates that if COPY encounters any
errors (return code 8) while making the requested copies, DB2 ignores that
particular item. COPY skips that item and moves on to the next item. For example,
if DB2 encounters an error copying the specified data set to the COPY1 data set,
DB2 ignores the error and tries to copy the table space to the COPY2 data set.
OPTIONS EVENT(ITEMERROR,SKIP)
COPY TABLESPACE DSN8D91P.DSN8S91C
COPYDDN(COPY1,COPY2)
RECOVERYDDN(COPY3,COPY4)
PARALLEL(2)
Example 3: Making full image copies of a list of objects. The control statement in
Figure 20 on page 145 specifies that COPY is to make local and recovery full image
copies (both primary and backup) of the following objects:
v Table space DSN8D91A.DSN8S91D, and its indexes:
– DSN8910.XDEPT1
– DSN8910.XDEPT2
– DSN8910.XDEPT3
v Table space DSN8D91A.DSN8S91E, and its indexes:
– DSN8910.XEMP1
– DSN8910.XEMP2
These copies are to be written to the data sets that are identified by the COPYDDN
and RECOVERYDDN options for each object. The COPYDDN option specifies the
data sets for the copies at the local site, and the RECOVERYDDN option specifies
the data sets for the copies at the recovery site. The first parameter of each of these
options specifies the data set for the primary copy, and the second parameter
specifies the data set for the backup copy. For example, the primary copy of table
space DSN8D91A.DSN8S91D at the recovery site is to be written to the data set
that is identified by the COPY3 DD statement.
SHRLEVEL REFERENCE specifies that no updates are allowed during the COPY
job. This option is the default and is recommended to ensure the integrity of the
data in the image copy.
Figure 20. Example of making full image copies of multiple objects (Part 1 of 2)
//COPY16 DD DSN=C81A.S00004.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY17 DD DSN=C81A.S00005.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY18 DD DSN=C81A.S00005.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY19 DD DSN=C81A.S00005.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY20 DD DSN=C81A.S00005.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY21 DD DSN=C81A.S00006.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY22 DD DSN=C81A.S00006.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY23 DD DSN=C81A.S00006.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY24 DD DSN=C81A.S00006.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY25 DD DSN=C81A.S00007.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY26 DD DSN=C81A.S00007.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY27 DD DSN=C81A.S00007.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY28 DD DSN=C81A.S00007.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN DD *
COPY
TABLESPACE DSN8D91A.DSN8S91D
COPYDDN (COPY1,COPY2)
RECOVERYDDN (COPY3,COPY4)
INDEX DSN8910.XDEPT1
COPYDDN (COPY5,COPY6)
RECOVERYDDN (COPY7,COPY8)
INDEX DSN8910.XDEPT2
COPYDDN (COPY9,COPY10)
RECOVERYDDN (COPY11,COPY12)
INDEX DSN8910.XDEPT3
COPYDDN (COPY13,COPY14)
RECOVERYDDN (COPY15,COPY16)
TABLESPACE DSN8D91A.DSN8S91E
COPYDDN (COPY17,COPY18)
RECOVERYDDN (COPY19,COPY20)
INDEX DSN8910.XEMP1
COPYDDN (COPY21,COPY22)
RECOVERYDDN (COPY23,COPY24)
INDEX DSN8910.XEMP2
COPYDDN (COPY25,COPY26)
RECOVERYDDN (COPY27,COPY28)
PARALLEL(4)
SHRLEVEL REFERENCE
/*
Figure 20. Example of making full image copies of multiple objects (Part 2 of 2)
| You can also write this COPY job so that it uses lists and templates, as shown in
| Figure 21 on page 147. In this example, the name of the template is T1. Note that
| this TEMPLATE statement does not contain any space specifications for the
| dynamically allocated data sets. Instead, DB2 determines the space requirements.
| The T1 template is identified in the COPY statement by the COPYDDN and
| RECOVERYDDN options. The name of the list is COPYLIST. This list is identified
| in the COPY control statement by the LIST option.
Figure 21. Example of using a list and template to make full image copies of multiple objects
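A minimal sketch of the kind of statements that Figure 21 describes, using the objects
from Figure 20 (the template DSN pattern is only illustrative), follows:
TEMPLATE T1
UNIT(SYSDA) DISP(NEW,CATLG,CATLG)
DSN(&DB..&SN..D&DATE..T&TIME..&LOCREM.&PB.)
LISTDEF COPYLIST
INCLUDE TABLESPACE DSN8D91A.DSN8S91D
INCLUDE INDEX DSN8910.XDEPT1
INCLUDE INDEX DSN8910.XDEPT2
INCLUDE INDEX DSN8910.XDEPT3
INCLUDE TABLESPACE DSN8D91A.DSN8S91E
INCLUDE INDEX DSN8910.XEMP1
INCLUDE INDEX DSN8910.XEMP2
COPY LIST COPYLIST
COPYDDN(T1,T1)
RECOVERYDDN(T1,T1)
PARALLEL(4)
SHRLEVEL REFERENCE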
| Note that the DSN option of the TEMPLATE statement identifies the names of the
data sets to which the copies are to be written. These names are similar to the data
set names in the JCL in Figure 20 on page 145. For more information about using
variable notation for data set names in TEMPLATE statements, see “Creating data
set names” on page 655.
Each of the preceding COPY jobs creates a point of consistency for the table spaces
and their indexes. You can subsequently use the RECOVER utility with the
TOLOGPOINT option to recover all of these objects; see page 417 for an example.
Example 5: Making full image copies of a list of objects in parallel on tape. The
following COPY control statement specifies that COPY is to make image copies of
the specified table spaces and their associated index spaces in parallel and stack
the copies on different tape devices.
The TEMPLATE utility control statements define the templates A1 and A2. For
more information about TEMPLATE control statements, see “Syntax and options of
the TEMPLATE control statement ” on page 641 in the TEMPLATE chapter.
//COPY2A EXEC DSNUPROC,SYSTEM=DSN
//SYSIN DD *
TEMPLATE A1 DSN(&DB..&SP..COPY1) UNIT CART STACK YES
TEMPLATE A2 DSN(&DB..&SP..COPY2) UNIT CART STACK YES
COPY PARALLEL 2 TAPEUNITS 2
TABLESPACE DSN8D81A.DSN8S81D COPYDDN(A1)
INDEXSPACE DSN8810.XDEPT COPYDDN(A1)
TABLESPACE DSN8D81A.DSN8S81E COPYDDN(A2)
INDEXSPACE DSN8810.YDEPT COPYDDN(A2)
Although use of templates is recommended, you can also define the output data
sets by coding JCL DD statements, as in Figure 22 on page 149. This COPY control
statement also specifies a list of objects to be processed in parallel, but in this case,
the data sets are defined by DD statements. In each DD statement, notice the
parameters for the VOLUME option. These values show that the data sets are
defined on three different tape devices as follows:
v The first tape device contains data sets that are defined by DD statements DD1
and DD4. (For DD4, the VOLUME option has a value of *.DD1 for the REF
parameter.)
v A second tape device contains data sets that are defined by DD statements DD2
and DD3. (For DD3, the VOLUME option has a value of *.DD2 for the REF
parameter.)
v A third tape device contains the data set that is defined by DD statement DD5.
The following table spaces are to be processed in parallel on two different tape
devices:
v DSN8D81A.DSN8S81D on the device that is defined by the DD1 DD statement
and the device that is defined by the DD5 DD statement
v DSN8D81A.DSN8S81E on the device that is defined by the DD2 DD statement
Copying of the following table spaces must wait until processing has completed
for DSN8D81A.DSN8S81D and DSN8D81A.DSN8S81E:
v DSN8D81A.DSN8S81F on the device that is defined by the DD2 DD statement
after DSN8D81A.DSN8S81E completes processing
v DSN8D81A.DSN8S81G on the device that is defined by the DD1 DD statement
after DSN8D81A.DSN8S81D completes processing
Figure 22. Example of making full image copies of a list of objects in parallel on tape
Example 6: Using both JCL-defined and template-defined data sets to copy a list
of objects on tape: The example in Figure 23 on page 150 uses both JCL DD
statements and utility templates to define four data sets for the image copies. The
JCL defines two data sets (DB1.TS1.CLP and DB2.TS2.CLB.BACKUP), and the
TEMPLATE utility control statements define two data sets that are to be
dynamically allocated (&DB..&SP..COPY1 and &DB..&SP..COPY2). For more
information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement ” on page 641 in the TEMPLATE chapter.
The COPYDDN options in the COPY control statement specify the data sets that
are to be used for the local primary and backup image copies of the specified table
spaces. For example, the primary copy of table space DSN8D81A.DSN8S81D is to
be written to the data set that is defined by the DD1 DD statement (DB1.TS1.CLP),
and the primary copy of table space DSN8D81A.DSN8S81E is to be written to the
data set that is defined by the A1 template (&DB..&SP..COPY1).
Four tape devices are allocated for this COPY job: the JCL allocates two tape
drives, and the TAPEUNITS 2 option in the COPY statement indicates that two
tape devices are to be dynamically allocated. Note that the TAPEUNITS option
applies only to those tape devices that are dynamically allocated by the
TEMPLATE statement.
Recommendation: Although this example shows how to use both templates and
DD statements, use only templates, if possible.
In the preceding example, the utility determines the number of tape streams to use
by dividing the value for TAPEUNITS (8) by the number of output data sets for each
object (2), which yields four tape streams in this example. For each tape stream, the
utility attaches one subtask.
The list of objects is sorted by size and processed in descending order. The first
subtask to finish processes the next object in the list. In this example, the
PARALLEL(10) option limits the number of objects to be processed in parallel to 10
and attaches four subtasks. Each subtask copies the objects in the list in parallel to
two tape drives, one for the primary and one for the recovery output data sets.
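A sketch of the kind of statement that this paragraph describes (the list, template,
unit, and object names are only illustrative) is:
TEMPLATE LCL DSN(&DB..&SN..LP) UNIT CART STACK YES
TEMPLATE RMT DSN(&DB..&SN..RP) UNIT CART STACK YES
LISTDEF BIGLIST INCLUDE TABLESPACE DSN8D91A.*
COPY LIST BIGLIST PARALLEL(10) TAPEUNITS(8)
COPYDDN(LCL) RECOVERYDDN(RMT)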
For more information about LISTDEF control statements, see “Syntax and options
of the LISTDEF control statement” on page 185 in the LISTDEF chapter. For more
information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement ” on page 641 in the TEMPLATE chapter.
not fail; COPY takes a full image copy of the index space instead. However, if a
COPY FULL NO statement identifies only an index that is not part of a list, the
COPY job fails.
All specified copies (local primary and backup copies and remote primary and
backup copies) are written to data sets that are dynamically allocated according to
the specifications of the COPYDS template. This template is defined in the
preceding TEMPLATE utility control statement. For more information about
templates, see Chapter 31, “TEMPLATE,” on page 641.
Example 10: Reporting image copy information for a table space. The
REPORTONLY option in the following control statement specifies that image copy
information is to be displayed only; no image copies are to be made. The
CHANGELIMIT(10,40) option specifies that the following information is to be
displayed:
v Recommendation that a full image copy be made if the percentage of changed
pages is equal to or greater than 40%.
v Recommendation that an incremental image copy be made if the percentage of
changed pages is greater than 10% and less than 40%.
v Recommendation that no image copy be made if the percentage of changed
pages is 10% or less.
COPY TABLESPACE DSN8D91P.DSN8S91C CHANGELIMIT(10,40) REPORTONLY
Figure 24. Example of invoking DFSMSdss concurrent copy with the COPY utility
Example 12: Invoking DFSMSdss concurrent copy and using a filter data set. The
control statement in Figure 25 specifies that DFSMSdss concurrent copy is to make
full image copies of the objects in the TSLIST list (table spaces TS1, TS2, and TS3).
The FILTERDDN option specifies that COPY is to use the filter data set that is
defined by the FILT template. All output is sent to the SYSCOPY data set, as
indicated by the COPYDDN(SYSCOPY) option. SYSCOPY is the default. This data
set is defined in the preceding TEMPLATE control statement.
LISTDEF TSLIST
INCLUDE TABLESPACE TS1
INCLUDE TABLESPACE TS2
INCLUDE TABLESPACE TS3
TEMPLATE SYSCOPY DSN &DB..&TS..COPY&IC.&LR.&PB..D&DATE..T&TIME.
UNIT(SYSDA) DISP (MOD,CATLG,CATLG)
TEMPLATE FILT DSN FILT.TEST1.&SN..D&DATE.
UNIT(SYSDA) DISP (MOD,CATLG,DELETE)
COPY LIST TSLIST
FILTERDDN(FILT)
COPYDDN(SYSCOPY)
CONCURRENT
SHRLEVEL REFERENCE
Figure 25. Example of invoking DFSMSdss concurrent copy with the COPY utility and using a
filter data set
Example 13: Copying LOB table spaces together with related objects. Assume that
table space TPIQUD01 is a base table space and that table spaces TLIQUDA1,
TLIQUDA2, TLIQUDA3, and TLIQUDA4 are LOB table spaces. The control
statement in Figure 26 on page 153 specifies that COPY is to take the following
actions:
v Take a full image copy of each specified table space if the percentage of changed
pages is equal to or greater than the highest decimal percentage value for the
CHANGELIMIT option for that table space. For example, if the percentage of
changed pages for table space TPIQUD01 is equal to or greater than 6.7%, COPY
is to take a full image copy.
v Take an incremental image copy of each specified table space if the percentage of
changed pages falls in the range between the specified decimal percentage
values for the CHANGELIMIT option for that table space. For example, if the
percentage of changed pages for table space TLIQUDA1 is greater than 7.9% and
less than 25.3%, COPY is to take an incremental image copy.
v Take no image copy of a specified table space if the percentage of changed
pages is equal to or less than the lowest decimal percentage value for the
CHANGELIMIT option for that table space. For example, if the percentage of
changed pages for table space TLIQUDA2 is equal to or less than 2.2%, COPY
does not take an image copy.
COPY
TABLESPACE DBIQUD01.TPIQUD01 DSNUM ALL CHANGELIMIT(3.3,6.7)
COPYDDN(COPYTB1)
TABLESPACE DBIQUD01.TLIQUDA1 DSNUM ALL CHANGELIMIT(7.9,25.3)
COPYDDN(COPYTA1)
TABLESPACE DBIQUD01.TLIQUDA2 DSNUM ALL CHANGELIMIT(2.2,4.3)
COPYDDN(COPYTA2)
TABLESPACE DBIQUD01.TLIQUDA3 DSNUM ALL CHANGELIMIT(1.2,9.3)
COPYDDN(COPYTA3)
TABLESPACE DBIQUD01.TLIQUDA4 DSNUM ALL CHANGELIMIT(2.2,4.0)
COPYDDN(COPYTA4)
INDEXSPACE DBIQUD01.IPIQUD01 DSNUM ALL
COPYDDN(COPYIX1)
INDEXSPACE DBIQUD01.IXIQUD02 DSNUM ALL
COPYDDN(COPYIX2)
INDEXSPACE DBIQUD01.IUIQUD03 DSNUM ALL
COPYDDN(COPYIX3)
INDEXSPACE DBIQUD01.IXIQUDA1 DSNUM ALL
COPYDDN(COPYIXA1)
INDEXSPACE DBIQUD01.IXIQUDA2 DSNUM ALL
COPYDDN(COPYIXA2)
INDEXSPACE DBIQUD01.IXIQUDA3 DSNUM ALL
COPYDDN(COPYIXA3)
INDEXSPACE DBIQUD01.IXIQUDA4 DSNUM ALL
COPYDDN(COPYIXA4)
SHRLEVEL REFERENCE
Figure 26. Example of copying LOB table spaces together with related objects
Example 14: Using GDGs to make a full image copy. The following control
statement specifies that the COPY utility is to make a full image copy of table
space DBLT2501.TPLT2501. The local copies are to be written to data sets that are
dynamically allocated according to the COPYTEM1 template. The remote copies
are to be written to data sets that are dynamically allocated according to the
COPYTEM2 template. For both of these templates, the DSN option indicates the
name of generation data group JULTU225 and the generation number of +1. (If a
GDG base does not already exist, DB2 creates one.) Both of these output data sets
are to be modeled after the JULTU225.MODEL data set (as indicated by the
MODELDCB option in the TEMPLATE statements).
//***********************************************************
//* COMMENT: MAKE A FULL IMAGE COPY OF THE TABLESPACE.
//* USE A TEMPLATE FOR THE GDG.
//***********************************************************
//STEP2 EXEC DSNUPROC,UID='JULTU225.COPY',
// UTPROC='',
// SYSTEM='SSTR'
//SYSIN DD *
TEMPLATE COPYTEM1
UNIT SYSDA
DSN 'JULTU225.GDG.LOCAL.&PB.(+1)'
MODELDCB JULTU225.MODEL
TEMPLATE COPYTEM2
UNIT SYSDA
DSN 'JULTU225.GDG.REMOTE.&PB.(+1)'
MODELDCB JULTU225.MODEL
COPY TABLESPACE DBLT2501.TPLT2501
FULL YES
COPYDDN (COPYTEM1,COPYTEM1)
RECOVERYDDN (COPYTEM2,COPYTEM2)
SHRLEVEL REFERENCE
| Example 15: Copying clone table data. The following control statement indicates
| that COPY is to copy only clone table data in the specified table spaces or indexes.
| COPY SHRLEVEL REFERENCE CLONE
| TABLESPACE DBIQUD01.TPIQUD01 DSNUM ALL CHANGELIMIT(3.3,6.7)
| COPYDDN(COPYTB1)
| TABLESPACE DBIQUD01.TLIQUDA1 DSNUM ALL CHANGELIMIT(7.9,25.3)
| COPYDDN(COPYTA1)
| TABLESPACE DBIQUD01.TLIQUDA2 DSNUM ALL CHANGELIMIT(2.2,4.3)
| COPYDDN(COPYTA2)
| TABLESPACE DBIQUD01.TLIQUDA3 DSNUM ALL CHANGELIMIT(1.2,9.3)
| COPYDDN(COPYTA3)
| TABLESPACE DBIQUD01.TLIQUDA4 DSNUM ALL CHANGELIMIT(2.2,4.0)
| COPYDDN(COPYTA4)
| INDEXSPACE DBIQUD01.IPIQUD01 DSNUM ALL
| COPYDDN(COPYIX1)
| Example 16: Copying updated table space data. The following control statement
| indicates that COPY is to copy only the objects that have been updated. SCOPE
| PENDING indicates that you want to copy only those objects in COPY-pending or
| informational COPY-pending status.
| COPY SHRLEVEL REFERENCE
| TABLESPACE DBIQUD01.TPIQUD01 DSNUM ALL CHANGELIMIT(3.3,6.7)
| COPYDDN(COPYTB1)
| TABLESPACE DBIQUD01.TLIQUDA1 DSNUM ALL CHANGELIMIT(7.9,25.3)
| COPYDDN(COPYTA1)
| TABLESPACE DBIQUD01.TLIQUDA2 DSNUM ALL CHANGELIMIT(2.2,4.3)
| COPYDDN(COPYTA2)
| TABLESPACE DBIQUD01.TLIQUDA3 DSNUM ALL CHANGELIMIT(1.2,9.3)
| COPYDDN(COPYTA3)
| TABLESPACE DBIQUD01.TLIQUDA4 DSNUM ALL CHANGELIMIT(2.2,4.0)
| COPYDDN(COPYTA4)
| INDEXSPACE DBIQUD01.IPIQUD01 DSNUM ALL
| COPYDDN(COPYIX1)
| PARALLEL(4)
| SCOPE PENDING
| /*
The RECOVER utility uses the copies when recovering a table space or index space
to the most recent time or to a previous time. These copies can also be used by
MERGECOPY, UNLOAD, and possibly a subsequent COPYTOCOPY execution.
The entries for SYSCOPY columns remain the same as the original entries in the
SYSCOPY row when the COPY utility recorded them. The COPYTOCOPY job
inserts values in the columns DSNAME, GROUP_MEMBER, JOBNAME, AUTHID,
DSVOLSER, and DEVTYPE.
Restrictions: COPYTOCOPY does not support the following catalog and directory
objects:
v DSNDB01.SYSUTILX, and its indexes
v DSNDB01.DBD01, and its indexes
v DSNDB06.SYSCOPY, and its indexes
An image copy from a COPY job with the CONCURRENT option cannot be
processed by COPYTOCOPY.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
Syntax diagram

The following summaries show the syntax fragments; brackets indicate optional items,
and a vertical bar separates alternatives.

ts-num-spec:

TABLESPACE [database-name.]table-space-name [DSNUM ALL | DSNUM integer]
(DSNUM ALL is the default.)

index-name-spec:

INDEXSPACE [database-name.]index-space-name (1)  [DSNUM ALL | DSNUM integer (2)]
INDEX [creator-id.]index-name  [DSNUM ALL | DSNUM integer (2)]

Notes:
1 INDEXSPACE is the preferred specification.
2 Not valid for nonpartitioning indexes.

from-copy-spec:

FROMLASTCOPY (the default)
FROMLASTFULLCOPY
FROMLASTINCRCOPY (1)
FROMCOPY dsn (2)  [FROMVOLUME {CATALOG | volser [FROMSEQNO n]}]

Notes:
1 Not valid with the INDEXSPACE or INDEX keyword.
2 Not valid with the LIST keyword.

data-set-spec:

COPYDDN(ddname1 | ddname1,ddname2 | ,ddname2) (1) (2)  [RECOVERYDDN(ddname3 | ddname3,ddname4 | ,ddname4)]
RECOVERYDDN(ddname3 | ddname3,ddname4 | ,ddname4)

Notes:
1 Use this option if you want to make a local site primary copy from one of the recovery site
copies.
2 You can specify up to three DD names for both the COPYDDN and RECOVERYDDN options
combined.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The
utility allows one LIST keyword for each COPYTOCOPY control
statement. Do not specify LIST with either the INDEX or
TABLESPACE keywords. DB2 invokes COPYTOCOPY once for the
entire list. This utility will only process clone data if the CLONE
keyword is specified. The use of CLONED YES on the LISTDEF
statement is not sufficient. For more information about LISTDEF
specifications, see Chapter 15, “LISTDEF,” on page 185.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database to which it
belongs) that is to be copied.
database-name is the name of the database that the table space
belongs to. The default is DSNDB04.
table-space-name is the name of the table space that is to be copied.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space that is to be copied;
the name is obtained from the SYSIBM.SYSINDEXES table. Define
the index space with the COPY YES attribute.
database-name optionally specifies the name of the database that the
index space belongs to. The default is DSNDB04.
index-space-name specifies the name of the index space that is to be
copied.
INDEX creator-id.index-name
Specifies the index that is to be copied. Enclose the index name in
quotation marks if the name contains a blank.
creator-id optionally specifies the creator of the index. The default
is the user identifier for the utility.
index-name specifies the name of the index that is to be copied.
DSNUM Identifies a partition or data set, within the table space or the index
In this format:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
spacename Is the table space or index space name.
y Is I or J.
| z Is 1 or 2.
nnn Is the data set integer.
If the image copy data set is a generation data set, then supply a
fully qualified data set name, including the absolute generation
and version number. If the image copy data set is not a generation
data set and more than one image copy data set have the same
data set name, use the FROMVOLUME option to identify the data
set exactly.
FROMVOLUME
Identifies the image copy data set.
CATALOG
Identifies the data set as cataloged. Use this option only for an
image copy that was created as a cataloged data set. (Its
volume serial is not recorded in SYSIBM.SYSCOPY.)
COPYTOCOPY refers to the SYSIBM.SYSCOPY catalog table
during execution. If you use FROMVOLUME CATALOG, the
data set must be cataloged. If you remove the data set from the
catalog after creating it, you must catalog the data set again to
make it consistent with the record that appears in
SYSIBM.SYSCOPY for this copy.
vol-ser
Identifies the data set by an alphanumeric volume serial
identifier of its first volume. Use this option only for an image
copy that was created as a noncataloged data set. Specify the
first vol-ser in the SYSCOPY record to locate a data set that is
stored on multiple tape volumes. If an individual volume serial
number contains leading zeros, it must be enclosed in single
quotation marks.
FROMSEQNO n
Identifies the image copy data set by its file sequence number.
n is the file sequence number.
COPYDDN (ddname1,ddname2)
Specifies a DD name (ddname) or a TEMPLATE name for the
primary (ddname1) and backup (ddname2) copied data sets for the
image copy at the local site. If ddname2 is specified by itself,
COPYTOCOPY expects the local site primary image copy to exist.
If it does not exist, error message DSNU1401 is issued and the
process for the object is terminated.
Recommendation: Catalog all of your image copy data sets.
You cannot have duplicate image copy data sets. If the DD
statement identifies a noncataloged data set with the same name,
volume serial, and file sequence number as one that is already
recorded in SYSIBM.SYSCOPY, COPYTOCOPY issues a message
and no copy is made. If the DD statement identifies a cataloged
data set with only the same name, no copy is made. For cataloged
image copy data sets, you must specify CATLG for the normal
termination disposition in the DD statement; for example,
DISP=(MOD,CATLG,CATLG). The DSVOLSER field of the
SYSCOPY entry is blank.
When the image copy data set is going to a tape volume, specify
the VOL=SER parameter in the DD statement.
The COPYDDN keyword specifies either a DD name or a
TEMPLATE name specification from a previous TEMPLATE control
statement.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or Index space
Object that is to be copied. (If you want to copy only certain partitions in a
partitioned table space, use the DSNUM option in the control statement.)
DB2 catalog objects
Objects in the catalog that COPYTOCOPY accesses. The utility records each
copy in the DB2 catalog table SYSIBM.SYSCOPY.
Input image copy data set
This information is accessed through the DB2 catalog. However, if you
want to preallocate your image copy data sets by using DD statements, see
“Retaining tape mounts” on page 165 for more information.
COPYTOCOPY retains all tape mounts for you.
Output data set size: Image copies are written to sequential non-VSAM data sets.
Recommendation: Use a template for the image copy data set for a table space by
specifying a TEMPLATE statement without the SPACE keyword. When you omit
this keyword, the utility calculates the appropriate size of the data set for you.
Alternatively, you can find the approximate size, in bytes, of the image copy data
set for a table space by using the following procedure:
1. Find the high-allocated page number from the COPYPAGESF column of
SYSIBM.SYSCOPY or from information in the VSAM catalog data set.
2. Multiply the high-allocated page number by the page size.
JCL parameters: You can specify a block size for the output by using the BLKSIZE
parameter on the DD statement for the output data set. Valid block sizes are
| multiples of 4096 bytes. It is recommended that the BLKSIZE parameter be
| omitted. The TAPEBLKSZLIM parameter of the DEVSUPxx member of
| SYS1.PARMLIB controls the block size limit for tapes. See the z/OS MVS
| Initialization and Tuning Guide for more details.
Cataloging image copies: To catalog your image copy data sets, use the
DISP=(NEW,CATLG,CATLG) parameter in the DD statement or TEMPLATE that is
named by the COPYDDN or RECOVERYDDN option. After the image copy is
taken, the DSVOLSER column of the row that is inserted into SYSIBM.SYSCOPY
contains blanks.
Duplicate image copy data sets are not allowed. If a cataloged data set is already
recorded in SYSIBM.SYSCOPY with the same name as the new image copy data
set, a message is issued and the copy is not made.
When RECOVER locates the entry in SYSIBM.SYSCOPY, it uses the ICF catalog to
allocate the required data set. If you have uncataloged the data set, the allocation
fails. In that case, the recovery can still go forward; RECOVER searches for a
previous image copy. But even if RECOVER finds one, it must use correspondingly
more of the log to recover. You are responsible for keeping the z/OS catalog
consistent with SYSIBM.SYSCOPY with regard to existing image copy data sets.
The COPYTOCOPY utility makes a copy from an existing image copy and writes
pages from the image copy to the output data sets. The JCL for the utility job must
include DD statements or a template for the output data sets. If the object consists
of multiple data sets and all are copied in one job, the copies reside in one physical
sequential output data set.
If a job step that contains more than one COPYTOCOPY statement abnormally
terminates, do not use TERM UTILITY. Restart the job from the last commit point
by using RESTART instead. Terminating COPYTOCOPY in this case might cause
inconsistencies between the ICF catalog and DB2 catalogs if generation data sets
are used.
If you specify the FROMCOPY keyword and the specified data set is not found in
SYSIBM.SYSCOPY, COPYTOCOPY issues message DSNU1401I. Processing for the
object then terminates.
The values in the SYSCOPY row that COPYTOCOPY inserts are the same as those
of the original entry that the COPY utility recorded, except for the columns
GROUP_MEMBER, JOBNAME, AUTHID, DSNAME, DEVTYPE, and DSVOLSER,
which reflect the COPYTOCOPY job. When
COPYTOCOPY is invoked at the partition level (DSNUM n) and the input data set
is an inline copy that was created by the REORG of a range of partitions,
COPYTOCOPY inserts zeros in the HIGHDSNUM and LOWDSNUM columns of
the SYSCOPY record.
If you use the FROMCOPY keyword, only the specified data set is used as the
input to the COPYTOCOPY job.
If you plan to use SMS, catalog all image copies. Never maintain cataloged and
uncataloged image copies with the same name.
If input data sets to be copied are stacked on tape and output data sets are defined
by a template, the utility sorts the list of objects by the file sequence numbers
(FSN) of the input data sets and processes the objects serially.
For example, image copies of the following table spaces with their FSNs are
stacked on TAPE1:
v DB1.TS1 FSN=1
v DB1.TS2 FSN=2
v DB1.TS3 FSN=3
v DB1.TS4 FSN=4
In the following statements, COPYTOCOPY uses a template for the output data
set:
//C2CSTEP EXEC DSNUPROC,SYSTEM=V71A
//SYSIN DD *
TEMPLATE A1 DSN(&DB..&SP..COPY1) UNIT CART STACK YES
COPYTOCOPY
TABLESPACE DB1.TS4
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS1
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS2
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS3
FROMLASTFULLCOPY
RECOVERYDDN(A1)
As a result, the utility sorts the objects by FSN and processes them in the following
order:
v DB1.TS1
v DB1.TS2
v DB1.TS3
v DB1.TS4
If the output data sets are defined by JCL, the utility gives stacking preference to
the output data sets over input data sets. If the input data sets are not stacked, the
utility sorts the objects by size in descending order.
Terminating COPYTOCOPY
You can use the TERM utility command to terminate a COPYTOCOPY job. For
instructions on terminating an online utility, see “Terminating an online utility with
the TERM UTILITY command” on page 38.
Restarting COPYTOCOPY
For instructions on restarting a utility job, see “Restarting an online utility” on
page 39.
Claims: Table 22 shows which claim classes COPYTOCOPY claims on the target
object.
Table 22. Claim classes of COPYTOCOPY operations

Target                                                    COPYTOCOPY
Table space or partition, or index space or partition     UTRW
Legend:
v UTRW - Utility restrictive state - read-write access allowed
Example 2: Copying the most recent copy. The following control statement specifies
that COPYTOCOPY is to make a local site backup copy, a recovery site primary
copy, and a recovery site backup copy of table space DBA90102.TPA9012C. The
COPYDDN and RECOVERYDDN options also indicate the data sets to which these
copies should be written. For example, the recovery site primary copy is to be
written to the COPY3 data set. The FROMLASTCOPY option specifies that the
most recent full image copy or incremental image copy is to be used as the input
copy data set. This option is the default and is therefore not required.
COPYTOCOPY TABLESPACE DBA90102.TPA9012C
FROMLASTCOPY COPYDDN(,COPY2)
RECOVERYDDN(COPY3,COPY4)
Example 3: Copying the most recent full image copy. The following control
statement specifies that COPYTOCOPY is to make primary and backup copies at
the recovery site of table space DBA90201.TPA9021C. The FROMLASTFULLCOPY
option specifies that the most recent full image copy is to be used as the input
copy data set.
COPYTOCOPY TABLESPACE DBA90201.TPA9021C
FROMLASTFULLCOPY
RECOVERYDDN(COPY3,COPY4)
Example 4: Specifying a copy data set for input. The following control statement
specifies that COPYTOCOPY is to make a local site backup copy, a recovery site
primary copy, and a recovery site backup copy from data set
DH109003.COPY1.STEP1.COPY3. This input data set is specified by the
FROMCOPY option. The output data sets (COPY2, COPY3, and COPY4) are
specified by the COPYDDN and RECOVERYDDN options.
COPYTOCOPY TABLESPACE DBA90301.TPA9031C
FROMCOPY DH109003.COPY1.STEP1.COPY3
COPYDDN(,COPY2)
RECOVERYDDN(COPY3,COPY4)
Example 5: Identifying a cataloged image copy data set. The following control
statement specifies that COPYTOCOPY is to make a local site backup copy from a
cataloged data set that is named DH109003.COPY1.STEP1.COPY4. This data set is
identified by the FROMCOPY and FROMVOLUME options. The FROMCOPY
option specifies the input data set name, and the FROMVOLUME CATALOG
option indicates that the input data set is cataloged. Use the FROMVOLUME
option to distinguish a data set from other data sets that have the same name.
COPYTOCOPY TABLESPACE DBA90302.TLA9032A
FROMCOPY DH109003.COPY1.STEP1.COPY4
FROMVOLUME CATALOG
COPYDDN(,COPY2)
For more information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement” on page 641 in the TEMPLATE chapter.
TEMPLATE C2C1_T1
DSN(JUKQU2BP.C2C1.LB.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
TEMPLATE C2C1_T2
DSN(JUKQU2BP.C2C1.RP.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
TEMPLATE C2C1_T3
DSN(JUKQU2BP.C2C1.RB.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
| The LIMIT option in the T3 template specifies that the utility is to switch to the T4
| template if the output data set size is bigger than the specified limit value of 5 MB.
| The T3 template defines the naming convention for the output data sets that are to be
| dynamically allocated.
The OPTIONS PREVIEW statement before the LISTDEF statement is used to force
the CPY1 list contents to be included in the output. For long lists, using this
statement is not recommended, because it might cause the output to be too long.
The OPTIONS OFF statement ends the PREVIEW mode processing, so that the
following TEMPLATE and COPYTOCOPY jobs run normally.
| OPTIONS PREVIEW
| LISTDEF CPY1 INCLUDE TABLESPACES TABLESPACE DBA906*.T*A906*
| INCLUDE INDEXSPACES COPY YES INDEXSPACE ADMF001.I?A906*
| OPTIONS OFF
| TEMPLATE T4 UNIT(3B0)
| DSN(T4.&SN..T&TI..COPY&IC.&LOCREM.)
| TEMPLATE T3 UNIT(SYSDA) SPACE CYL
| DSN(T3.&SN..T&TI..COPY&IC.&LOCREM.)
| LIMIT(5 MB,T4)
| COPYTOCOPY LIST CPY1 COPYDDN(T3,T3)
For more information about LISTDEF control statements, see “Syntax and options
of the LISTDEF control statement” on page 185 in the LISTDEF chapter. For more
information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement ” on page 641 in the TEMPLATE chapter. For more
information about OPTIONS control statements, see “Syntax and options of the
OPTIONS control statement” on page 337 in the OPTIONS chapter.
| Example 8: Using LISTDEF and TEMPLATE with the CLONE option. The
| following COPYTOCOPY control statement specifies that the utility is to copy the
| list of objects that are included in the C2C1_LIST list, which is defined by the
| LISTDEF control statement. The CLONE option indicates that COPYTOCOPY is to
| process only image copy data sets that were taken against clone objects.
| LISTDEF C2C1_LIST
| INCLUDE TABLESPACES TABLESPACE DBKQBS01.TPKQBS01
| INCLUDE INDEXSPACES INDEXSPACE DBKQBS01.IPKQBS11
| INCLUDE INDEXSPACES INDEXSPACE DBKQBS01.IXKQBS12
| INCLUDE TABLESPACES TABLESPACE DBKQBS02.TSKQBS02
| INCLUDE INDEXSPACES INDEXSPACE DBKQBS02.IXKQBS21
| INCLUDE INDEXSPACES INDEXSPACE DBKQBS02.IXKQBS22
|
| TEMPLATE C2C1_T1
| DSN(JUKQU2BS.C2C1.LB.&SN.)
| DISP(NEW,CATLG,CATLG)
| UNIT(SYSDA)
|
| TEMPLATE C2C1_T2
| DSN(JUKQU2BS.C2C1.RP.&SN.)
| DISP(NEW,CATLG,CATLG)
| UNIT(SYSDA)
|
| TEMPLATE C2C1_T3
| DSN(JUKQU2BS.C2C1.RB.&SN.)
| DISP(NEW,CATLG,CATLG)
| UNIT(SYSDA)
|
| COPYTOCOPY LIST C2C1_LIST
| FROMLASTFULLCOPY
| COPYDDN(,C2C1_T1)
| RECOVERYDDN(C2C1_T2,C2C1_T3)
| CLONE
Interpreting output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2 problems, you might need to refer to
licensed documentation to interpret output from this utility.
Authorization required: To execute this utility for options which access relational
data, you must use a privilege set that includes one of the following
authorizations:
v REPAIR privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
Syntax diagram
(The DIAGNOSE syntax diagram is not reproduced here. It shows the optional
TYPE(integer, ...), ALLDUMPS, and NODUMPS keywords; ALLDUMPS and NODUMPS each accept
an optional list of X'abend-code' values. These keywords can be followed by display,
wait, and abend statements, each of which is defined by its own fragment diagram.)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
TYPE(integer, ...)
Specifies one or more types of diagnoses that you want to perform.
integer is the number that identifies a type of diagnosis. The maximum number of
types that you can specify is 32. IBM Software Support defines the types as needed
to diagnose problems with IBM utilities.
ALLDUMPS(X'abend-code', ...)
Forces a dump to be taken in response to any utility abend code.
X'abend-code' is a member of a list of abend codes to which the scope of
ALLDUMPS is limited.
abend-code is a hexadecimal value.
NODUMPS(X'abend-code', ...)
Suppresses the dump for any utility abend code.
X'abend-code' is a member of a list of abend codes to which the scope of
NODUMPS is limited.
abend-code is a hexadecimal value.
DISPLAY
Formats the specified database items using SYSPRINT.
OBD database-name.table-space-name
Formats the object descriptor (OBD) of the table space.
database-name is the name of the database in which the table space
belongs.
table-space-name is the name of the table space whose OBD is to be
formatted.
ALL Formats all OBDs of the table space. The OBD of any object
that is associated with the table space is also formatted.
TABLES
Formats the OBDs of all tables in the specified table spaces.
INDEXES
Formats the OBDs of all indexes in the specified table spaces.
SYSUTIL
Formats every record from SYSIBM.SYSUTIL. This directory table
stores information about all utility jobs.
MEPL
Dumps the module entry point lists (MEPLs) to SYSPRINT.
AVAILABLE
| Displays the utilities that are installed on this subsystem in both
| bitmap and readable format. The presence or absence of the utility
| product 5655-N97 (IBM DB2 Utilities Suite for z/OS) affects the
| results of this display. See message DSNU862I for the output of this
| display.
DBET
Dumps the contents of a database exception table (DBET) to
SYSPRINT.
DATABASE database-name
Dumps the DBET entry that is associated with the specified
database.
database-name is the name of the database.
TABLESPACE database-name.table-space-name
Dumps the DBET entry that is associated with the specified
table space.
database-name is the name of the database.
table-space-name is the name of the table space.
INDEX creator-name.index-name
Dumps the DBET entry that is associated with the specified
index.
creator-name is the ID of the creator of the index.
index-name is the name of the index.
Enclose the index name in quotation marks if the name
contains a blank.
| CLONE
| Indicates that DIAGNOSE is to display information for only the
| specified objects that are clone tables, table spaces that contain clone
| tables, indexes on clone tables, or index spaces that contain indexes on
| clone tables.
WAIT
Suspends utility execution when it encounters the specified utility message
or utility trace ID. DIAGNOSE issues a message to the console, and utility
execution does not resume until the operator replies to that message, the
utility job times out, or the utility job is canceled. This waiting period
gives you the opportunity to time or synchronize events while you are
diagnosing concurrency problems.
If neither the utility message nor the trace ID is encountered, processing
continues.
ABEND
Forces an abend during utility execution if the specified utility message or
utility trace ID is issued.
If neither the utility message nor the trace ID is encountered, processing
continues.
NODUMP
Suppresses the dump that is generated by an abend of DIAGNOSE.
MESSAGE message-id
Specifies a DSNUxxx or DSNUxxxx message that causes a wait or an abend
to occur when that message is issued. For information about the valid
message IDs, see Part 2 of DB2 Messages.
message-id is the message, in the form of Uxxx or Uxxxx.
INSTANCE integer
Specifies that a wait or an abend is to occur when the MESSAGE
option message has been encountered a specified number of times.
If INSTANCE is not specified, a wait or abend occurs each time
that the message is encountered.
integer is the number of times that a message is to be encountered
before a wait or an abend occurs.
TRACEID trace-id
Specifies a trace ID that causes a wait or an abend to occur when the ID is
encountered. You can find valid trace IDs in data set
prefix.SDSNSAMP(DSNWEIDS).
trace-id is a trace ID that is associated with the utility trace (RMID21). You
can specify trace-id in either decimal (integer) or hexadecimal (X'trace-id')
format.
INSTANCE integer
Specifies that a wait or an abend is to occur when the TRACEID
option has been encountered a specified number of times. If
INSTANCE is not specified, a wait or abend occurs each time that
the trace ID is encountered.
integer is the number of times that a trace ID is to be encountered
before a wait or an abend occurs.
END Ends DIAGNOSE processing.
4. Run DIAGNOSE by using one of the methods that are described in Chapter 3,
“Invoking DB2 online utilities,” on page 17.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Database
Database about which DIAGNOSE is to gather diagnosis information.
Table space
Table space about which DIAGNOSE is to gather diagnosis information.
Index space
Index space about which DIAGNOSE is to gather diagnosis information.
DIAGNOSE can force a utility to abend when a specific message is issued. To force
an abend when unique-index or referential-constraint violations are detected, you
must specify the message that is issued when the error is encountered. Specify this
message by using the MESSAGE option of the ABEND statement.
Instead of using a message, you can force an abend by using the TRACEID option
of the ABEND statement to specify a trace IFCID that is associated with the
utility.
Use the INSTANCE keyword to specify the number of times that the specified
message or trace record is to be generated before the utility abends.
You can restart a DIAGNOSE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
The following control statement forces a dump for any utility abend that occurs
during the execution of the specified COPY job. The DIAGNOSE END option ends
DIAGNOSE processing.
DIAGNOSE
ALLDUMPS
COPY TABLESPACE DSNDB06.SYSDBASE
DIAGNOSE END
The following control statement forces an abend of the specified LOAD job when
message DSNU311 is issued for the fifth time. The NODUMP option indicates that
the DIAGNOSE utility is not to generate a dump in this situation.
DIAGNOSE
ABEND MESSAGE U311 INSTANCE 5 NODUMP
LOAD DATA RESUME NO
INTO TABLE TABLE1
(NAME POSITION(1) CHAR(20))
DIAGNOSE END
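A WAIT statement can be used in the same way. The following sketch, which is patterned
on the preceding example and reuses its LOAD job and message ID, suspends the LOAD job
the first time that message DSNU311 is issued and waits for an operator reply:
DIAGNOSE
WAIT MESSAGE U311 INSTANCE 1
LOAD DATA RESUME NO
INTO TABLE TABLE1
(NAME POSITION(1) CHAR(20))
DIAGNOSE END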
| Example 6: Displaying only CLONE data. The control statement indicates that the
| DIAGNOSE utility is to display information for only the specified objects that
| are clone tables, table spaces that contain clone tables, indexes on clone tables, or
| index spaces that contain indexes on clone tables.
| DIAGNOSE DISPLAY DBET
| DATABASE DBNI0501
| CLONE
Output: The EXEC SQL control statement produces a result table when you specify
a cursor.
Execution phases of EXEC SQL: The EXEC SQL control statement executes entirely
in the EXEC phase. You can restart the EXEC phase if necessary.
Syntax diagram
(The EXEC SQL syntax diagram and its declare-cursor-spec fragment are not reproduced
here. The statement has the form EXEC SQL, followed by either a declare-cursor-spec of
the form DECLARE cursor-name CURSOR FOR select-statement or a non-select dynamic SQL
statement, and ends with ENDEXEC.)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
cursor-name Specifies the cursor name. The name must not identify a cursor
that is already declared within the same input stream. When using
the DB2 cross-loader function to load data from a remote server,
you must identify the cursor with a three-part name. Cursor names
that are specified with the EXEC SQL utility cannot be longer than
eight characters.
select-statement Specifies the result table for the cursor. This statement can be any
valid SQL SELECT statement, including joins, unions, conversions,
| aggregations, special registers, and user-defined functions. The
| result table cannot include XML columns. See DB2 SQL Reference
for a description of the SELECT statement.
non-select dynamic SQL statement
Specifies a dynamic SQL statement that is to be used as input to
EXECUTE IMMEDIATE. You can specify the following dynamic
SQL statements in a utility statement:
ALTER
COMMENT ON
COMMIT
CREATE
DELETE
DROP
EXPLAIN
GRANT
INSERT
LABEL ON
LOCK TABLE
RENAME
REVOKE
ROLLBACK
| SET CURRENT DECFLOAT ROUNDING MODE
SET CURRENT DEGREE
SET CURRENT LOCALE LC_CTYPE
SET CURRENT OPTIMIZATION HINT
SET CURRENT PRECISION
SET CURRENT RULES
SET CURRENT SQLID
SET PATH
UPDATE
You can restart an EXEC SQL utility job, but it starts from the beginning again. If
you are restarting this utility as part of a larger job in which EXEC SQL completed
successfully, but a later utility failed, do not change the EXEC SQL utility control
statement, if possible. If you must change the EXEC SQL utility control statement,
use caution; any changes can cause the restart processing to fail. For guidance in
restarting online utilities, see “Restarting an online utility” on page 39.
Example 1: Creating a table: The following control statement specifies that DB2 is
to create table MYEMP with the same rows and columns as sample table EMP.
EXEC SQL
CREATE TABLE MYEMP LIKE DSN8810.EMP CCSID EBCDIC
ENDEXEC
This type of statement can be used to create a mapping table. For an example of
creating and using a mapping table, see “Sample REORG TABLESPACE control
statements” on page 521 in the REORG TABLESPACE chapter.
Example 2: Inserting rows into a table: The following control statement specifies
that DB2 is to insert all rows from sample table EMP into table MYEMP.
EXEC SQL
INSERT INTO MYEMP SELECT * FROM DSN8810.EMP
ENDEXEC
You can use a declared cursor with the DB2 cross-loader function to load data from
a local server or from any DRDA-compliant remote server as part of the DB2
cross-loader function. For more information about using the cross-loader function,
see “Loading data by using the cross-loader function” on page 268.
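For illustration, the following sketch combines the two preceding examples with the
cross-loader function: the declared cursor selects the rows of sample table
DSN8810.EMP, and a subsequent LOAD statement names that cursor with the INCURSOR
option to load the rows into MYEMP, the table that Example 1 created. The RESUME YES
keyword is an assumption for this sketch; any LOAD options that are compatible with
INCURSOR could be used instead.
EXEC SQL
DECLARE C1 CURSOR FOR SELECT * FROM DSN8810.EMP
ENDEXEC
LOAD DATA INCURSOR C1 RESUME YES
INTO TABLE MYEMP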
End of General-use Programming Interface
You can use LISTDEF to standardize object lists and the utility control statements
that refer to them. This standardization reduces the need to customize or alter
utility job streams.
If you do not use lists and you want to run a utility on multiple objects, you must
run the utility multiple times or specify an itemized list of objects in the utility
control statement.
Restriction: Objects that are created with the DEFINE NO attribute are excluded
from all LISTDEF lists.
Output: Output from the LISTDEF control statement consists of a list with a name.
Authorization required: To execute the LISTDEF utility, you must have SELECT
authority on SYSIBM.SYSINDEXES, SYSIBM.SYSTABLES, and
SYSIBM.SYSTABLESPACE.
Additionally, you must have the authority to execute the utility that is used to
process the list, as currently documented in the “Authorization required” section of
each utility in this book.
Syntax diagram
(The LISTDEF syntax diagram is not reproduced here. The statement has the form
LISTDEF list-name, followed by one or more INCLUDE or EXCLUDE clauses. Each clause
specifies either LIST referenced-list or an initial-object-spec, optionally preceded
by a type-spec, and can be followed by CLONED YES or CLONED NO, RI, and one of ALL,
BASE, LOB, or XML.)
Notes:
1 You must specify type-spec if you specify DATABASE.
type-spec: TABLESPACES, or INDEXSPACES optionally followed by COPY NO or COPY YES.
initial-object-spec: DATABASE database-name, a table-space-spec or index-space-spec
optionally followed by PARTLEVEL or PARTLEVEL(n), a table-spec, or an index-spec.
table-space-spec: TABLESPACE database-name.table-space-name (the database-name.
qualifier is optional)
index-space-spec: INDEXSPACE database-name.index-space-name (the database-name.
qualifier is optional)
table-spec: TABLE creator-id.table-name (the creator-id. qualifier is optional)
index-spec: INDEX creator-id.index-name (the creator-id. qualifier is optional)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LISTDEF list-name
Defines a list of DB2 objects and assigns a name to the list. The list
name makes the list available for subsequent execution as the
object of a utility control statement or as an element of another
LISTDEF statement.
list-name is the name (up to 18 alphanumeric characters in length)
of the defined list.
You can put LISTDEF statements either in a separate LISTDEF
library data set or before a DB2 utility control statement that refers
to the list-name.
INCLUDE Specifies that the list of objects that results from the expression that
follows is to be added to the list. You must first specify an
INCLUDE clause. You can then specify subsequent INCLUDE or
EXCLUDE clauses in any order to add objects to or delete objects from the
existing list.
For detailed information about the order of INCLUDE and
EXCLUDE processing, see “Including objects in a list” on page 194.
EXCLUDE Specifies, after the initial INCLUDE clause, that the list of objects
that results from the expression that follows is to be excluded from
the list if the objects are in the list. If the objects are not in the list,
they are ignored, and DB2 proceeds to the next INCLUDE or
EXCLUDE clause.
INDEXSPACES
Specifies that the INCLUDE or EXCLUDE object expression is to
create a list of related index spaces.
INDEXSPACES is the default type for lists that use an index space
or an index for the initial search. For more information about
specifying these objects, see the descriptions of the INDEXSPACE
and INDEX options.
No default type value exists for lists that use other lists for the
initial search. The list that is referred to by the LIST option is used
unless you specify TABLESPACES or INDEXSPACES. Likewise, no
type default value exists for lists that use databases for the initial
search. If you specify the DATABASE option, you must specify either
TABLESPACES or INDEXSPACES.
LOB indicator keywords: Use one of three LOB indicator keywords to direct
LISTDEF processing to follow auxiliary relationships to include related LOB objects
in the list. The auxiliary relationship can be followed in either direction. LOB
objects include the LOB table spaces, auxiliary tables, indexes on auxiliary tables,
and their containing index spaces.
No default LOB indicator keyword exists. If you do not specify BASE, LOB, or
ALL, DB2 does not follow the auxiliary relationships and does not filter LOB from
base objects in the enumerated list.
ALL
| Specifies that BASE, LOB, and XML objects are to be included in the list.
| Auxiliary relationships are to be followed from all objects that result from the
| initial object lookup, and BASE, LOB, and XML objects are to remain in the
| final enumerated list. CLONED objects are not included.
BASE
Specifies that only base table spaces (non-LOB) and index spaces are to be
included in this element of the list.
If the result of the initial search for the object is a base object, auxiliary
relationships are not followed. If the result of the initial search for the object is
a LOB object, the auxiliary relationship is applied to the base table space or
index space, and only those objects become part of the resulting list.
LOB
Specifies that only LOB table spaces and related index spaces that contain
indexes on auxiliary tables are to be included in this element of the list.
If the result of the initial search for the object is a LOB object, auxiliary
relationships are not followed. If the result of the initial search for the object is
a base object, the auxiliary relationship is applied to the LOB table space or
index space, and only those objects become part of the resulting list.
| XML
| Specifies that only XML objects are to be included in this element of the list.
For a description of the elements that must be included in each INCLUDE and
EXCLUDE clause, see “Specifying objects to include or exclude.”
DB2 constructs the list, one clause at a time, by adding objects to or removing
objects from the list. If an EXCLUDE clause attempts to remove an object that is
not yet in the list, DB2 ignores the EXCLUDE clause of that object and proceeds to
the next INCLUDE or EXCLUDE clause. Be aware that a subsequent INCLUDE
can return a previously excluded object to the list.
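The following sketch illustrates this behavior with hypothetical object names. The
first INCLUDE clause adds every table space in database DBX to the list, the EXCLUDE
clause removes DBX.TSTEMP, and the final INCLUDE clause returns DBX.TSTEMP to the
list:
LISTDEF SKETCH1 INCLUDE TABLESPACE DBX.*
EXCLUDE TABLESPACE DBX.TSTEMP
INCLUDE TABLESPACE DBX.TSTEMP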
You must include the following elements in each INCLUDE or EXCLUDE clause:
v The object that is to be used in the initial catalog lookup for each INCLUDE or
EXCLUDE clause. The search for objects can begin with databases, table spaces,
index spaces, tables, indexes, or other lists. You can explicitly specify the names
of these objects or, with the exception of other lists, use a pattern matching
expression. The resulting list contains only table spaces, only index spaces, or
both.
v The type of objects that the list contains, either TABLESPACES or
INDEXSPACES. You must explicitly specify the list type only when you specify
a database as the initial object by using the keyword DATABASE. Otherwise,
LISTDEF uses the default list type values shown in Table 27. These values
depend on the type of object that you specified for the INCLUDE or EXCLUDE
clause.
Table 27. Default list type values that LISTDEF uses
Specified object    Default list type value
TABLESPACE          TABLESPACES
TABLE               TABLESPACES
INDEXSPACE          INDEXSPACES
INDEX               INDEXSPACES
LIST                Existing type value of the list
For example, the following INCLUDE clause specifies that table space
DBLT0301.TLLT031A is to be added to the LIST:
INCLUDE TABLESPACE DBLT0301.TLLT031A
In contrast, the following clause specifies that all index spaces over all tables in
table space DBLT0301.TLLT031A are to be added to the list:
INCLUDE INDEXSPACES TABLESPACE DBLT0301.TLLT031A
Optionally, you can add related objects to the list by specifying keywords that
indicate a relationship, such as referentially related objects or auxiliary related
objects. Valid specifications include the following keywords:
v BASE (non-LOB and non-XML objects)
v LOB (LOB objects)
| v XML (XML objects)
| v ALL (BASE, LOB, and XML objects)
v TABLESPACES (related table spaces)
v INDEXSPACES (related index spaces)
v RI (related by referential constraints, including informational referential
constraints)
The preceding keywords perform two functions: they determine which objects are
related, and they then filter the contents of the list. The behavior of these keywords
varies depending on the type of object that you specify. For example, if your initial
object is a LOB object, the LOB keyword is ignored. If, however, the initial object is
not a LOB object, the LOB keyword determines which LOB objects are related, and
DB2 excludes non-LOB objects from the list. For more information about the
keywords that can be used to indicate relationships, see “Option descriptions” on
page 187.
DB2 processes each INCLUDE and EXCLUDE clause in the following order:
1. Perform the initial search for the object that is based on the specified
pattern-matching expression, including PARTLEVEL specification, if specified.
2. Add or remove related objects and filter the list elements based on the specified
list type, either TABLESPACES or INDEXSPACES (COPY YES or COPY NO).
3. Add or remove related objects depending on the presence or absence of the RI,
| BASE, LOB, XML, and ALL keywords.
For example, to generate a list of all table spaces in the ACCOUNT database but
exclude all LOB table spaces, you can specify the following LISTDEF statement:
LISTDEF ACCNT INCLUDE TABLESPACES DATABASE ACCOUNT BASE
In the preceding example, the name of the list is ACCNT. The TABLESPACES
keyword indicates that the list is to include table spaces that are associated with
the specified object. In this case, the table spaces to be included are those table
spaces in database ACCOUNT. Finally, the BASE keyword limits the objects to only
base table spaces.
If you want a list of only LOB index spaces in the ACCOUNT database, you can
specify the following LISTDEF statement:
LISTDEF ACLOBIX INCLUDE INDEXSPACES DATABASE ACCOUNT LOB
In the preceding example, the INDEXSPACES and LOB keywords indicate that the
INCLUDE clause is to add only LOB index spaces to the ACLOBIX list.
v TABLESPACE DSNDB01.SYSUTILX
v TABLE SYSIBM.SYSUTILX
v TABLE SYSIBM.SYSUTIL
v INDEXSPACE DSNDB01.DSNLUX01
v INDEXSPACE DSNDB01.DSNLUX02
v INDEX SYSIBM.DSNLUX01
v INDEX SYSIBM.DSNLUX02
Although DB2 catalog and directory objects can appear in LISTDEF lists, these
objects might be invalid for a utility and result in an error message.
The following valid INCLUDE clauses contain catalog and directory objects:
v INCLUDE TABLESPACE DSNDB06.SYSDBASE
v INCLUDE TABLESPACES TABLESPACE DSNDB06.SYSDBASE
v INCLUDE INDEXSPACE DSNDB06.DSNDXX01
v INCLUDE INDEXSPACES INDEXSPACE DSNDB06.DSNDXX01
All LISTDEF lists automatically exclude work file databases, which consist of
DSNDB07 objects and user-defined work file objects, because DB2 utilities do not
process these objects.
Any data sets that are identified as part of a LISTDEF library must contain only
LISTDEF statements.
In the utility job that references those LISTDEF statements, include an OPTIONS
statement before the utility statement. In the OPTIONS statement, specify the DD
name of the LISTDEF library as LISTDEFDD ddname.
DB2 uses this LISTDEF library for any subsequent utility control statements, until
either the end of input or until you specify another OPTIONS LISTDEFDD ddname.
The default DD name for the LISTDEF definition library is SYSLISTD.
When DB2 encounters a reference to a list, DB2 first searches SYSIN. If DB2 does
not find the definition of the referenced list, DB2 searches the specified LISTDEF
library.
Any LISTDEF statement that is defined within the SYSIN DD statement overrides
another LISTDEF definition of the same name found in a LISTDEF library data set.
In general, utilities process the objects in the list in the order in which they are
specified. However, some utilities alter the list order for optimal processing as
follows:
v CHECK INDEX, REBUILD INDEX, and RUNSTATS INDEX process all index
spaces that are related to a given table space at one time, regardless of list order.
v UNLOAD processes all specified partitions of a given table space at one time
regardless of list order.
The LIST keyword is supported by the utilities that are listed in Table 29. When
possible, utility processing optimizes the order of list processing as indicated in the
table.
Table 29. How specific utilities process lists
Utility              Order of list processing
CHECK INDEX          Items are grouped by related table space.
COPY                 Items are processed in the specified order on a single call to
                     COPY; the PARALLEL keyword is supported.
COPYTOCOPY           Items are processed in the specified order on a single call to
                     COPYTOCOPY.
MERGECOPY            Items are processed in the specified order.
MODIFY RECOVERY      Items are processed in the specified order.
MODIFY STATISTICS    Items are processed in the specified order.
QUIESCE              All items are processed in the specified order on a single call
                     to QUIESCE.
Some utilities, such as COPY and RECOVER, can process a LIST without a
specified object type. Object types are determined from the list contents. Other
utilities, such as REPORT, RUNSTATS, and REORG INDEX, must know the object
type that is to be processed before processing can begin. These utilities require that
you specify an object type in addition to the LIST keyword (for example: REPORT
RECOVERY TABLESPACE LIST, RUNSTATS INDEX LIST, and REORG INDEX
LIST). See the syntax diagrams for an individual utility for details.
In some cases you can use traditional JCL DD statements with LISTDEF lists, but
this method is usually not practical unless you are processing small lists one object
at a time.
You can restart a LISTDEF utility job, but it starts from the beginning again. Use
caution when changing LISTDEF lists prior to a restart. When DB2 restarts list
processing, it uses a saved copy of the list. Modifying the LISTDEF list that is
referred to by the stopped utility has no effect. Only control statements that follow
the stopped utility are affected. For guidance in restarting online utilities, see
“Restarting an online utility” on page 39.
List processing limitations: Although DB2 does not limit the number of objects
that a list can contain, be aware that if your list is too large, the utility might fail
with an error or abend in either DB2 or another program. These errors or abends
can be caused by storage limitations, limitations of the operating system, or other
restrictions imposed by either DB2 or non-DB2 programs. Whether such a failure
occurs depends on many factors, including, but not limited to, the following items:
v The amount of available storage in both the utility batch and DBM1 address
spaces
v The utility that is running.
v The type and number of other utilities that are running at the same time.
v The specific combination of keywords and operands of all the utilities that are
running
Recommendation: If you receive a failure that you suspect is caused by running a
utility on a list that is too large, divide your list into smaller lists and run the
utility or utilities in separate job steps on the smaller lists until they run
successfully.
Assume that three table spaces qualify. Of these table spaces, two are partitioned
table spaces (PAY2.DEPTA and PAY2.DEPTF) that each have three partitions and
one is a nonpartitioned table space (PAY1.COMP). In this case, the EXAMPLE4 list
includes the following items:
v PAY2.DEPTA partition 1
v PAY2.DEPTA partition 2
v PAY2.DEPTA partition 3
v PAY2.DEPTF partition 1
v PAY2.DEPTF partition 2
v PAY2.DEPTF partition 3
v PAY1.COMP
Example 5: Defining a list of COPY YES indexes. The following control statement
defines a list (EXAMPLE5) that includes related index spaces from the referenced
list (EXAMPLE4) that have been defined or altered to COPY YES.
LISTDEF EXAMPLE5 INCLUDE LIST EXAMPLE4 INDEXSPACES COPY YES
Example 6: Defining a list that includes all table space partitions except for one.
The following control statement defines a list (EXAMPLE6) that includes all
partitions of table space X, except for partition 12. The INCLUDE clause adds an
entry for each partition, and the EXCLUDE clause removes the entry for partition
12.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X PARTLEVEL
EXCLUDE TABLESPACE X PARTLEVEL(12)
Note that if the PARTLEVEL keyword is not specified in both clauses, as in the
following two sample statements, the INCLUDE and EXCLUDE items do not
intersect. For example, in the following statement, table space X is included in the
list in its entirety, not at the partition level. Therefore, partition 12
cannot be excluded.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X
EXCLUDE TABLESPACE X PARTLEVEL(12)
In the following sample statement, the list includes only partition 12 of table space
X, so table space X in its entirety cannot be excluded.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X PARTLEVEL(12)
EXCLUDE TABLESPACE X
The LISTLIB DD statement (in the JCL for the QUIESCE job) defines a LISTDEF
library. When you define a LISTDEF library, you give a name to a group of data
sets that contain LISTDEF statements. In this case, the library is to include the
following data sets:
v The sequential data set JULTU103.TCASE.DATA2 (which includes the NAME1
list)
v The MEM1 member of the partitioned data set JULTU103.TCASE.DATA3 (which
includes the NAME2 list).
Defining such a library enables you to subsequently refer to a group of LISTDEF
statements with a single reference.
The OPTIONS utility control statement in this example specifies that the library
that is identified by the LISTLIB DD statement is to be used as the default
LISTDEF definition library. This declaration means that for any referenced lists,
DB2 is to first search SYSIN for the list definition. If DB2 does not find the list
definition in SYSIN, it is to search any data sets that are included in the LISTLIB
LISTDEF library.
The last LISTDEF statement defines the NAME3 list. This list includes all objects in
the NAME1 and NAME2 lists, except for three table spaces (TSLT032B, TSLT031B,
TSLT032C). Because the NAME1 and NAME2 lists are not included in SYSIN, DB2
searches the default LISTDEF library (LISTLIB) to find them.
Finally, the QUIESCE utility control statement specifies this list of objects (NAME3)
for which DB2 is to establish a quiesce point.
Figure 31. Example of building a LISTDEF library and then running the QUIESCE utility
(Parts 1 and 2)
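The following sketch outlines what the QUIESCE job step that Figure 31 describes might
look like. The data set names and list names come from the preceding description; the
job step name, utility ID, SYSTEM value, and the DBLT03 database qualifier for the
excluded table spaces are assumptions for this sketch.
//STEP1 EXEC DSNUPROC,UID='JULTU103.QUIESCE',
// UTPROC='',SYSTEM='SSTR'
//LISTLIB DD DSN=JULTU103.TCASE.DATA2,DISP=SHR
//        DD DSN=JULTU103.TCASE.DATA3(MEM1),DISP=SHR
//SYSIN DD *
OPTIONS LISTDEFDD LISTLIB
LISTDEF NAME3 INCLUDE LIST NAME1
              INCLUDE LIST NAME2
              EXCLUDE TABLESPACE DBLT03.TSLT032B
              EXCLUDE TABLESPACE DBLT03.TSLT031B
              EXCLUDE TABLESPACE DBLT03.TSLT032C
QUIESCE LIST NAME3
/*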
Example 8: Defining a list that includes related objects. The following LISTDEF
control statement defines a list (EXAMPLE8) that includes table space
DBLT0101.TPLT011C and all objects that are referentially related to it. Only base
table spaces are included in the list. The subsequent RECOVER utility control
statement specifies that all objects in the EXAMPLE8 list are to be recovered.
//STEP2 EXEC DSNUPROC,UID='JULTU101.RECOVE5',
// UTPROC='',SYSTEM='SSTR'
//SYSIN DD *
LISTDEF EXAMPLE8 INCLUDE TABLESPACE DBLT0101.TPLT011C RI BASE
RECOVER LIST EXAMPLE8
/*
For a diagram of LOAD syntax and a description of available options, see “Syntax
and options of the LOAD control statement” on page 207. For detailed guidance on
running this utility, see “Instructions for running LOAD” on page 249.
Output: LOAD DATA generates one or more of the following forms of output:
v A loaded table space or partition.
v A discard file of rejected records.
v A summary report of errors that were encountered during processing; this report
is generated only if you specify ENFORCE CONSTRAINTS or if the LOAD
involves unique indexes.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorizations:
v Ownership of the table
v LOAD privilege for the database
v SYSCTRL or SYSADM authority
v STATS privilege for the database (required if the STATISTICS keyword is specified)
LOAD operates on a table space level, so you must have authority for all tables in
the table space when you perform LOAD.
To run LOAD STATISTICS, the privilege set must include STATS authority on the
database. To run LOAD STATISTICS REPORT YES, the privilege set must also
include the SELECT privilege on the required tables.
If you use RACF access control with multilevel security and LOAD is to process a
table space that contains a table that has multilevel security with row-level
granularity, you must be identified to RACF and have an accessible valid security
label. You must also meet the following authorization requirements:
v To replace an entire table space with LOAD REPLACE, you must have the
write-down privilege unless write-down rules are not in effect.
v You must have the write-down privilege to specify values for the security label
columns, unless write-down rules are not in effect. If these rules are in effect and
you do not have write-down privilege, DB2 assigns your security label as the
value for the security label column for the rows that you are loading.
For more information about multilevel security and security labels, see Part 3 of
DB2 Administration Guide.
Execution phases of LOAD: The LOAD utility operates in the phases that are listed
in Table 30 on page 206.
A subtask is started at the beginning of the RELOAD phase to sort the keys.
The sort subtask initializes and waits for the main RELOAD phase to pass its
keys to SORT. RELOAD loads the data, extracts the keys, and passes them in
memory for sorting. At the end of the RELOAD phase, the last key is passed
to SORT, and record sorting completes.
Note that load partition parallelism starts subtasks. PREFORMAT for table
spaces occurs at the end of the RELOAD phase.
SORT Sorts temporary file records before creating indexes or validating referential
constraints, if indexes or foreign keys exist. The SORT phase is skipped if all
the following conditions apply for the data that is processed during the
RELOAD phase:
v Each table has no more than one key.
v All keys are the same type (index key only, indexed foreign key, or foreign
key only).
v The data that is being loaded or reloaded is in key order (if a key exists). If
the key is an index key only and the index is a data-partitioned secondary
index, the data is considered to be in order if the data is grouped by
partition and ordered within partition by key value. If the key in question
is an indexed foreign key and the index is a data-partitioned secondary
index, the data is never considered to be in order.
v The data that is being loaded or reloaded is grouped by table, and each
input record is loaded into one table only.
SORT passes the sorted keys in memory to the BUILD phase, which builds
the indexes.
BUILD Creates indexes from temporary file records for all indexes that are defined on
the loaded tables. Build also detects duplicate keys. PREFORMAT for indexes
occurs at the end of the BUILD phase.
SORTBLD Performs all activities that normally occur in both the SORT and BUILD
phases, if you specify a parallel index build.
INDEXVAL Corrects unique index violations or index evaluation errors from the
information in SYSERR, if any exist.
ENFORCE Checks referential constraints, except informational referential constraints, and
corrects violations. Information about violations of referential constraints is
stored in SYSERR.
DISCARD Copies records that cause errors from the input data set to the discard data
set.
REPORT Generates a summary report, if you specified ENFORCE CONSTRAINTS or if
load index validation is performed. The report is sent to SYSPRINT.
UTILTERM Performs cleanup.
Syntax diagram
(The LOAD syntax diagram is not reproduced here. The portion of the main diagram that
appears on this page shows the LOG YES | LOG NO, SORTKEYS, KEEPDICTIONARY, REUSE,
NOCOPYPEND, FLOAT(S390) | FLOAT(IEEE), EBCDIC | ASCII | UNICODE, CCSID(integer, ...),
NOSUBS, DISCARDS integer, SORTDEVT device-type, SORTNUM integer,
CONTINUEIF(start:end)=X'byte-string' | 'character-string', and DECFLOAT_ROUNDMODE
keywords, together with the workddn-spec, copy-spec, statistics-spec,
correlation-stats-spec, format-spec, and INTO-TABLE-spec fragments. The defaults that
appear above the main path are LOG YES, SORTKEYS 0, FLOAT(S390), EBCDIC, and
DISCARDS 0. The DECFLOAT_ROUNDMODE values are ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR,
ROUND_HALF_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP, and ROUND_UP.)
Notes:
1 The SORTKEYS default is 0 if the input is on tape, a cursor, a PDS member, or
SYSREC DD *. For sequential data sets, LOAD computes the default based upon the input
data set size.
workddn-spec:
WORKDDN(ddname1,ddname2). The defaults are SYSUT1 for ddname1 and SORTOUT for ddname2.
copy-spec:
COPYDDN(ddname1,ddname2) and RECOVERYDDN(ddname3,ddname4). The default for ddname1 is
SYSCOPY.
statistics-spec:
STATISTICS, optionally followed by TABLE(ALL) SAMPLE integer or by a list of
TABLE(table-name) SAMPLE integer COLUMN(ALL | column-name, ...) clauses, and by the
UPDATE ALL | ACCESSPATH | SPACE | NONE, HISTORY ALL | ACCESSPATH | SPACE | NONE, and
FORCEROLLUP YES | NO keywords. The INDEX, KEYCARD, FREQVAL, and REPORT options that
are described below are also part of the statistics specification.
correlation-stats-spec:
(This fragment is described under the KEYCARD, FREQVAL, NUMCOLS, and COUNT option
descriptions.)
format-spec:
FORMAT UNLOAD, FORMAT SQL/DS, or FORMAT DELIMITED with the COLDEL coldel, CHARDEL
chardel, and DECPT decpt options. The delimiter defaults are COLDEL ',', CHARDEL '"',
and DECPT '.'.
INTO-TABLE-spec:
For the syntax diagram and the option descriptions of the into-table specification,
see “INTO-TABLE-spec” on page 226.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
DATA Specifies that data is to be loaded. This keyword is optional and is used for
clarity only.
INDDN ddname
Specifies the data definition (DD) statement or template that identifies the
input data set for the partition. The record format for the input data set
must be fixed-length or variable-length. The data set must be readable by
the basic sequential access method (BSAM).
ddname is the DD name of the input data set. The default is SYSREC.
INCURSOR cursor-name
Specifies the cursor for the input data set. You must declare the cursor
before it is used by the LOAD utility. Use the EXEC SQL utility control
statement to define the cursor. You cannot load data into the same table on
| which you defined the cursor. You cannot load data into a table that is a
| parent in a RI relationship with the dependent table on which the cursor is
| defined.
The specified cursor can be used with the DB2 UDB family cross-loader
function, which enables you to load data from any DRDA-compliant
remote server. For more information about using the cross-loader function,
see “Loading data by using the cross-loader function” on page 268.
cursor-name is the cursor name. Cursor names that are specified with the
LOAD utility cannot be longer than eight characters.
You cannot use the INCURSOR option with the following options:
v SHRLEVEL CHANGE
v NOSUBS
v FORMAT UNLOAD
v FORMAT SQL/DS™
v CONTINUEIF
v WHEN
In addition, you cannot specify field specifications or use discard
processing with the INCURSOR option.
PREFORMAT
Specifies that the remaining pages are preformatted up to the
high-allocated RBA in the table space and index spaces that are associated
with the table that is specified in table-name. The preformatting occurs after
the data has been loaded and the indexes are built.
PREFORMAT can operate on an entire table space and its index spaces, or
on a partition of a partitioned table space and on the corresponding
partitions of partitioned indexes, if any exist. Specifying LOAD
PREFORMAT (rather than PART integer PREFORMAT) tells LOAD to
serialize at the table space level, which can inhibit concurrent processing of
separate partitions. If you want to serialize at the partition level, specify
PART integer PREFORMAT. See “Option descriptions for INTO TABLE” on
page 229 for information about specifying PREFORMAT at the partition
level.
| The PREFORMAT keyword does not apply to auxiliary table spaces.
RESUME
Indicates whether records are to be loaded into an empty or non-empty
table space. For nonsegmented table spaces, space is not reused for rows
that have been marked as deleted or for rows of dropped tables.
Important: Specifying LOAD RESUME (rather than PART integer RESUME)
tells LOAD to serialize on the entire table space, which can inhibit
concurrent processing of separate partitions. If you want to process other
partitions concurrently, use “INTO-TABLE-spec” on page 226 to specify
PART integer RESUME.
NO
Loads records into an empty table space. If the table space is not
empty, and you have not used REPLACE, a message is issued and the
utility job step terminates with a job step condition code of 8.
For nonsegmented table spaces that contain deleted rows or rows of
dropped tables, using the REPLACE keyword provides increased
efficiency.
The default is NO, unless you override it with PART integer RESUME
YES.
YES
Loads records into a non-empty table space. If the table space is empty,
a warning message is issued, but the table space is loaded. Loading
begins at the current end of data in the table space. Space is not reused
for rows that are marked as deleted or for rows of dropped tables.
SHRLEVEL
Specifies the extent to which applications can concurrently access the table
space or partition during the LOAD utility job. The following parameter
values are listed in order of increasing extent of allowed concurrent access.
NONE
Specifies that applications have no concurrent access to the table space
or partition.
The default is NONE.
CHANGE
Specifies that applications can concurrently read from and write to the
table space or partition into which LOAD is loading data. If you
specify SHRLEVEL CHANGE, you cannot specify the following
parameters: INCURSOR, RESUME NO, REPLACE,
KEEPDICTIONARY, LOG NO, ENFORCE NO, STATISTICS,
COPYDDN, RECOVERYDDN, PREFORMAT, REUSE, or PART integer
REPLACE.
For a partition-directed LOAD, if you specify SHRLEVEL CHANGE,
only RESUME YES can be specified or inherited from the LOAD
statement.
LOAD SHRLEVEL CHANGE does not perform the SORT, BUILD,
SORTBLD, INDEXVAL, ENFORCE, or REPORT phases, and the
compatibility and concurrency considerations differ.
A LOAD SHRLEVEL CHANGE job functions like a mass INSERT. Whereas a
regular LOAD job drains the entire table space, LOAD SHRLEVEL
CHANGE uses claims when accessing an object, just as an INSERT
statement does.
Normally, a LOAD RESUME YES job loads the records at the end of
the already existing records. However, for a LOAD RESUME YES job
with the SHRLEVEL CHANGE option, the utility tries to insert the
new records in available free space as close to the clustering order as
possible. This LOAD job does not create any additional free pages. If
you insert a lot of records, these records are likely to be stored out of
clustering order. In this case, you should run the REORG
TABLESPACE utility after loading the records.
Recommendation: If you have loaded a lot of records, run RUNSTATS
SHRLEVEL CHANGE UPDATE SPACE and then a conditional
REORG.
Log records that DB2 creates during LOAD SHRLEVEL CHANGE can
be used by DB2 DataPropagator™, if the tables that are being loaded
are defined with DATA CAPTURE CHANGES.
Note that before and after row triggers are activated for SHRLEVEL
CHANGE but not for SHRLEVEL NONE. Statement triggers for each
row are also activated for SHRLEVEL CHANGE but not for
SHRLEVEL NONE.
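As a sketch of this behavior (the table and column names are hypothetical, and the
input records are assumed to come from the default SYSREC DD statement), the following
control statement loads additional records while other applications continue to read
from and write to the table space:
LOAD DATA RESUME YES SHRLEVEL CHANGE
INTO TABLE MYSCHEMA.MYTABLE
(COL1 POSITION(1) CHAR(10),
COL2 POSITION(11) CHAR(20))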
REPLACE
Indicates whether the table space and all its indexes need to be reset to
empty before records are loaded. With this option, the newly loaded rows
replace all existing rows of all tables in the table space, not just those of
the table that you are loading. For DB2 STOGROUP-defined data sets, the
data set is deleted and redefined with this option, unless you also specified
the REUSE option. You must have LOAD authority for all tables in the
table space where you perform LOAD REPLACE. If you attempt a LOAD
REPLACE without this authority, you get an error message.
You cannot use REPLACE with the PART integer REPLACE option of INTO
TABLE; you must either replace an entire table space by using the
REPLACE option or replace a single partition by using the PART integer
REPLACE option of INTO TABLE.
Specifying LOAD REPLACE (rather than PART integer REPLACE) tells
LOAD to serialize at the table space level. If you want to serialize at the
partition level, specify PART integer REPLACE. See the information about
specifying REPLACE at the partition level under the keyword descriptions
for INTO TABLE.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the primary (ddname1) and backup
(ddname2) copy data sets for the image copy.
ddname is the DD name.
The default is SYSCOPY for the primary copy. No default exists for the
backup copy.
The COPYDDN keyword can be specified only with REPLACE. A full
image copy data set (SHRLEVEL REFERENCE) is created for the table or
partitions that are specified when LOAD executes. The table space or
partition for which an image copy is produced is not placed in
COPY-pending status.
Image copies that are taken during LOAD REPLACE are not recommended
for use with RECOVER TOCOPY because these image copies might
contain unique index violations, referential constraint violations, or index
evaluation errors.
| When using COPYDDN for XML data, an inline copy is taken only of the
| base table space, not the XML table space.
Using COPYDDN when loading a table with LOB columns does not create
a copy of any index, LOB table space, or XML table space. You must
perform these tasks separately.
The COPYDDN keyword specifies either a DD name or a TEMPLATE
name specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
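The following sketch shows one way to combine COPYDDN with a template. The template
name, the MYHLQ data set qualifier, and the table name are assumptions for this
sketch; the &DB. and &SN. variables follow the TEMPLATE examples that appear earlier
in this book.
TEMPLATE LCOPY
DSN(MYHLQ.&DB..&SN..LCOPY)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
LOAD DATA REPLACE COPYDDN(LCOPY)
INTO TABLE MYSCHEMA.MYTABLE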
RECOVERYDDN ddname3,ddname4
Specifies the DD statements for the primary (ddname3) and backup
(ddname4) copy data sets for the image copy at the recovery site.
ddname is the DD name.
You cannot have duplicate image copy data sets. The same rules apply for
RECOVERYDDN and COPYDDN.
The RECOVERYDDN keyword specifies either a DD name or a
TEMPLATE name specification from a previous TEMPLATE control
statement. If utility processing detects that the specified name is both a DD
name in the current job step and a TEMPLATE name, the utility uses the DD name.
INDEX
Specifies indexes for which information is to be gathered. Column
information is gathered for the first column of the index. All the indexes
must be associated with the same table space, which must be the table
space that is specified in the TABLESPACE option.
(ALL)
Specifies that the column information is to be gathered for all indexes
that are defined on tables in the table space.
(index-name)
Specifies the indexes for which information is to be gathered. Enclose
the index name in quotation marks if the name contains a blank.
KEYCARD
Requests the collection of all distinct values in all of the 1 to n key column
combinations for the specified indexes. n is the number of columns in the
index.
FREQVAL
Controls the collection of frequent-value statistics. If you specify
FREQVAL, it must be followed by two additional keywords:
NUMCOLS
Indicates the number of key columns that are to be concatenated
together when collecting frequent values from the specified index.
Specifying '3' means that frequent values are to be collected on the
concatenation of the first three key columns. The default is 1, which
means that DB2 collects frequent values on the first key column of the
index.
COUNT
Indicates the number of frequent values that are to be collected.
Specifying ’15’ means that DB2 collects 15 frequent values from the
specified key columns. The default is 10.
REPORT
Indicates whether a set of messages is to be generated to report the
collected statistics.
NO
Indicates that the set of messages is not sent to SYSPRINT as output.
The default is NO.
YES
Indicates that the set of messages is sent to SYSPRINT as output. The
generated messages are dependent on the combination of keywords
(such as TABLESPACE, INDEX, TABLE, and COLUMN) that are
specified with the RUNSTATS utility. However, these messages are not
dependent on the specification of the UPDATE option. REPORT YES
always generates a report of SPACE and ACCESSPATH statistics.
UPDATE
Indicates whether the collected statistics are to be inserted into the catalog
tables. UPDATE also allows you to select statistics that are used for access
path selection or statistics that are used by database administrators.
ALL Indicates that all collected statistics are to be updated in the
catalog.
The default is ALL.
ACCESSPATH
Indicates that updates are to be made only to the catalog table
columns that provide statistics that are used for access path
selection.
SPACE
Indicates that updates are to be made only to the catalog table
columns that provide statistics to help database administrators
assess the status of a particular table space or index.
NONE
Indicates that no catalog tables are to be updated with the collected
statistics. This option is valid only when REPORT YES is specified.
HISTORY
Records all catalog table inserts or updates to the catalog history tables.
The default is supplied by the value that is specified in STATISTICS
HISTORY on panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that updates are to be made only to the catalog history
table columns that provide statistics that are used for access path
selection.
SPACE
Indicates that only space-related catalog statistics are to be updated
in catalog history tables.
NONE
Indicates that no catalog history tables are to be updated with the
collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even if some parts are empty. This keyword
enables the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some parts might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all parts.
If data is not available for all parts, message DSNU623I is issued if
the installation value for STATISTICS ROLLUP on panel DSNTIPO
is set to NO.
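As a sketch of how these keywords fit together (the table name is a placeholder, the
input is assumed to come from the default SYSREC DD statement, and the keyword order
follows the option descriptions above), the following statement loads the data,
collects inline statistics on all tables and indexes, reports them, and updates all
catalog statistics:
LOAD DATA REPLACE
STATISTICS TABLE(ALL) INDEX(ALL)
REPORT YES UPDATE ALL
INTO TABLE MYSCHEMA.MYTABLE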
KEEPDICTIONARY
Prevents the LOAD utility from building a new compression dictionary.
LOAD retains the current compression dictionary and uses it for
compressing the input data. This option eliminates the cost that is
associated with building a new dictionary.
| The KEEPDICTIONARY keyword is ignored for XML table spaces. If you
| specify REPLACE, any existing dictionary for the XML table space or
| partition is deleted. If you do not specify REPLACE, any existing
| dictionary for the XML table space or partition is saved.
This keyword is valid only if the table space that is being loaded has the
COMPRESS YES attribute.
If the table space or partition is empty, DB2 performs one of these actions:
v DB2 builds a dictionary if a compression dictionary does not exist.
v DB2 keeps the dictionary if a compression dictionary exists.
If the table space or partition is not empty and RESUME YES is specified,
DB2 performs one of these actions:
v DB2 does not build a dictionary if a compression dictionary does not
exist.
v DB2 keeps the dictionary if a compression dictionary exists.
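For illustration (the table name is hypothetical, and the input is assumed to come
from the default SYSREC DD statement), the following statement replaces the data in a
compressed table space while keeping the existing compression dictionary:
LOAD DATA REPLACE KEEPDICTIONARY
INTO TABLE MYSCHEMA.MYTABLE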
| NO
| Indicates that the SORTKEYS default is to be turned off. (NO is an option
| of the SORTKEYS keyword.)
For sequential data sets, LOAD computes an estimate based upon the
input data set size.
For more information about sorting keys, see “Improved performance with
SORTKEYS” on page 271 and “Building indexes in parallel for LOAD” on
page 275.
FORMAT
Identifies the format of the input record. If you use FORMAT UNLOAD or
FORMAT SQL/DS, it uniquely determines the format of the input, and no
field specifications are allowed in an INTO TABLE option.
If you omit FORMAT, the format of the input data is determined by the
rules for field specifications that are described in “Option descriptions for
INTO TABLE” on page 229. If you specify FORMAT DELIMITED, the
format of the input data is determined by the rules that are described in
Appendix F, “Delimited file format,” on page 931.
UNLOAD
Specifies that the input record format is compatible with the DB2
unload format. (The DB2 unload format is the result of REORG
with the UNLOAD ONLY option.)
Input records that were unloaded by the REORG utility are loaded
into the tables from which they were unloaded, if an INTO TABLE
option specifies each table. Do not add columns or change column
definitions of tables between the time you run REORG UNLOAD
ONLY and LOAD FORMAT UNLOAD.
Any WHEN clause on the LOAD FORMAT UNLOAD statement is
ignored; DB2 reloads the records into the same tables from which
they were unloaded. Not allowing a WHEN clause with the
FORMAT UNLOAD clause ensures that the input records are
loaded into the proper tables. Input records that cannot be loaded
are discarded.
If the DCB RECFM parameter is specified on the DD statement for
the input data set, and the data set format has not been modified
since the REORG UNLOAD (ONLY) operation, the record format
must be variable (RECFM=V).
SQL/DS
Specifies that the input record format is compatible with the
SQL/DS unload format. The data type of a column in the table
that is to be loaded must be the same as the data type of the
corresponding column in the SQL/DS table.
If the SQL/DS input contains rows for more than one table, the
WHEN clause of the INTO TABLE option indicates which input
records are to be loaded into which DB2 table.
For information about the correct DCB parameters to specify on
the DD statement for the input data set, refer to DB2 Server for VM:
DBS Utility.
LOAD cannot load SQL/DS strings that are longer than the DB2
limit. For information about DB2 limits, see Appendix A, “Limits in
DB2 for z/OS,” on page 851.
SQL/DS data that has been unloaded to disk under DB2 Server for
VSE & VM resides in a simulated z/OS-type data set with a record
format of VBS. Consider this format when transferring the data to
another system that is to be loaded into a DB2 table (for example,
the DB2 Server for VSE & VM FILEDEF must define it as a
z/OS-type data set). Processing the data set as a standard CMS file
puts the SQL/DS record type field at the wrong offset within the
records; LOAD is unable to recognize them as valid SQL/DS input.
DELIMITED
Specifies that the input data file is in a delimited format. When
data is in a delimited format, all fields in the input data set are
character strings or external numeric values. In addition, each
column in a delimited file is separated from the next column by a
column delimiter character.
For each of the delimiter types that you can specify, you must
ensure that the delimiter character is specified in the code page of
the source data. The delimiter character can be specified as either a
character or hexadecimal constant. For example, to specify ’#’ as
the delimiter, you can specify either COLDEL ’#’ or COLDEL X'23'.
If the utility statement is coded in a character type that is different
from the input file, such as a utility statement that is coded in
EBCDIC and input data that is in Unicode, you should specify the
delimiter character in the utility statement as a hexadecimal
constant, or the result can be unpredictable.
You cannot specify the same character for more than one type of
delimiter (COLDEL, CHARDEL, and DECPT). For more
information about delimiter restrictions, see “Loading delimited
files” on page 261.
Unicode input data for FORMAT DELIMITED must be UTF-8,
CCSID 1208.
If you specify the FORMAT DELIMITED option, you cannot use
any of the following options:
v CONTINUEIF
v INCURSOR
v Multiple INTO TABLE statements
v WHEN
Also, LOAD ignores any specified POSITION statements within
the LOAD utility control statement.
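For example, a control statement similar to the following sketch loads a
comma-delimited input file in which character strings are enclosed in double
quotation marks; the table name MYDB.MYTBL and the input DD name are
illustrative only:
  LOAD DATA INDDN SYSREC
    FORMAT DELIMITED COLDEL ',' CHARDEL '"' DECPT '.'
    INTO TABLE MYDB.MYTBL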
CCSID(integer, ...)
Specifies up to three coded character set identifiers (CCSIDs) for the
input data. The first value specifies the CCSID for SBCS data, the
second value specifies the CCSID for mixed data, and the third value
specifies the CCSID for DBCS data. If any of these values is
specified as 0 or omitted, the CCSID of the corresponding data type in the
input file is assumed to be the same as the installation default CCSID. If
the input data is EBCDIC, the omitted CCSIDs are assumed to be the
EBCDIC CCSIDs that are specified at installation, and if the input data is
ASCII, the omitted CCSIDs are assumed to be the ASCII CCSIDs that are
specified at installation. If the CCSIDs of the input data file do not match
the CCSIDs of the table that is being loaded, the input data is converted to
the table CCSIDs before being loaded.
integer is any valid CCSID specification.
If the input data is Unicode, the default CCSID values are the Unicode
CCSIDs that are specified at system installation.
NOSUBS
Specifies that LOAD is not to accept substitution characters in a string.
A substitution character is sometimes placed in a string when that string is
being converted from ASCII to EBCDIC, or when the string is converted
from one CCSID to another. For example, this substitution occurs when a
character (sometimes referred to as a code point) that exists in the source
CCSID (code page) does not exist in the target CCSID (code page).
When you specify the NOSUBS option and the LOAD utility determines
that a substitution character has been placed in a string as a result of a
conversion, it performs one of the following actions:
v If discard processing is active: DB2 issues message DSNU310I and
places the record in the discard file.
v If discard processing is not active: DB2 issues message DSNU334I, and
the utility abnormally terminates.
ENFORCE
Specifies whether LOAD is to enforce check constraints and referential
constraints, except informational referential constraints, which are not
enforced.
CONSTRAINTS
Indicates that constraints are to be enforced. If LOAD detects a
violation, it deletes the errant row and issues a message to identify
it. If you specify this option and referential constraints exist, sort
input and sort output data sets must be defined.
The default is CONSTRAINTS.
NO Indicates that constraints are not to be enforced. This option places
the target table space in the CHECK-pending status if at least one
referential constraint or check constraint is defined for the table.
ERRDDN ddname
Specifies the DD statement for a work data set that is being used during
error processing. Information about errors that are encountered during
processing is stored in this data set. A SYSERR data set is required if you
request discard processing.
ddname is the DD name. The default is SYSERR.
The ERRDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
MAPDDN ddname
Specifies the DD statement for a work data set that is to be used during
error processing. The work data set is used to correlate the identifier of a
table row with the input record that caused an error. A SYSMAP data set is
required if you specify ENFORCE CONSTRAINTS and the tables have a
referential relationship, or if you request discard processing when loading
one or more tables that contain unique indexes or extended indexes.
ddname is the DD name. The default is SYSMAP.
The MAPDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
DISCARDDN ddname
Specifies the DD statement for a discard data set that is to hold copies of
records that are not loaded (for example, if they contain conversion errors).
The discard data set also holds copies of records that are loaded and then
removed (because of unique index errors, referential or check constraint
violations, or index evaluation errors). Input records are flagged for
discarding during the RELOAD, INDEXVAL, and ENFORCE phases. However, the
discard data set is not written until the DISCARD phase, when the flagged
records are copied from the input data set to the discard data set. The
discard data set must be a sequential data set that can be written to by
BSAM, with the same record format, record length, and block size as the
input data set.
ddname is the DD name. The default is SYSDISC.
If you omit the DISCARDDN option, the utility application program saves
discarded records only if a SYSDISC DD statement is in the JCL input.
The DISCARDDN keyword specifies either a DD name or a TEMPLATE
name specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
DISCARDS integer
Specifies the maximum number of source records that are to be written on
the discard data set. integer can range from 0 to 2147483647. If the discard
maximum is reached, LOAD abnormally terminates, the discard data set is
empty, and you cannot see which records were discarded. You can either
restart the job with a larger limit, or terminate the utility.
DISCARDS 0 specifies that you do not want to set a maximum value. The
entire input data set can be discarded.
The default is DISCARDS 0.
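For example, a control statement similar to the following sketch enforces
constraints and requests discard processing with a limit of 100 discarded
records; the table name and the limit are illustrative only, and the sort work
data sets that referential constraints require are not shown:
  LOAD DATA INDDN SYSREC
    ENFORCE CONSTRAINTS
    ERRDDN SYSERR MAPDDN SYSMAP
    DISCARDDN SYSDISC DISCARDS 100
    INTO TABLE MYDB.MYTBL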
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be dynamically
allocated by DFSORT. You can specify any device type that is acceptable to
the DYNALLOC parameter of the SORT or OPTION options for DFSORT.
If you omit SORTDEVT and a sort is required, you must provide the DD
statements that the sort application program needs for the temporary data
sets.
A TEMPLATE specification does not dynamically allocate sort work data
sets. The SORTDEVT keyword controls dynamic allocation of these data
sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be dynamically
allocated by the sort application program.
integer is the number of temporary data sets that can range from 2 to 255.
If you omit SORTDEVT, SORTNUM is ignored. If you use SORTDEVT and
omit SORTNUM, no value is passed to DFSORT. In this case, DFSORT uses
its own default.
| You need at least two sort work data sets for each sort. The SORTNUM
| value applies to each sort invocation in the utility. For example, if three
| indexes exist, SORTKEYS is specified, no constraints limit
| parallelism, and SORTNUM is specified as 8, a total of 24 sort work data
| sets are allocated for the job.
| Each sort work data set consumes both above-the-line and below-the-line
| virtual storage. If you specify a value for SORTNUM that is too high,
| the utility might decrease the degree of parallelism due to virtual storage
| constraints, possibly down to one, which means no parallelism.
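For example, a statement fragment similar to the following sketch asks
DFSORT to dynamically allocate four sort work data sets on SYSDA devices;
the device type, SORTNUM value, and table name are illustrative only:
  LOAD DATA INDDN SYSREC
    SORTDEVT SYSDA SORTNUM 4
    INTO TABLE MYDB.MYTBL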
CONTINUEIF
Indicates that you want to be able to treat each input record as a portion of
a larger record. After CONTINUEIF, write a condition in one of the
following forms:
(start:end) = X’byte-string’
(start:end) = ’character-string’
If the condition is true in any record, the next record is concatenated with
it before loading takes place. You can concatenate any number of records
into a larger record, up to a maximum size of 32767 bytes.
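For example, a condition similar to the following sketch concatenates an
input record with the next record whenever column 72 contains the
character X; the continuation column and table name are illustrative only:
  LOAD DATA INDDN SYSREC
    CONTINUEIF(72:72)='X'
    INTO TABLE MYDB.MYTBL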
| the result coefficient is .05 and the rightmost digit is odd, the result
| coefficient should be incremented by 1 (rounded up).
| ROUND_HALF_UP
| Round to nearest. If equidistant, round up. If the discarded digits are
| greater than or equal to 0.5, the result coefficient should be
| incremented by 1 (rounded up). Otherwise the discarded digits are
| ignored.
| ROUND_UP
| Round away from 0. If all of the discarded digits are 0, the result is
| unchanged. Otherwise, the result coefficient should be incremented by
| 1 (rounded up).
INTO-TABLE-spec
More than one table or partition for each table space can be loaded with a single
invocation of the LOAD utility. At least one INTO TABLE statement is required for
each table that is to be loaded. Each INTO TABLE statement:
v Identifies the table that is to be loaded
v Describes fields within the input record
v Defines the format of the input data set
All tables that are specified by INTO TABLE statements must belong to the same
table space.
INTO-TABLE-spec:
| IGNOREFIELDS NO
INTO TABLE table-name
IDENTITYOVERRIDE IGNOREFIELDS YES
INDDN SYSREC
PART integer resume-spec
PREFORMAT INDDN ddname
DISCARDDN ddname
INCURSOR cursor-name
WHEN SQL/DS=’table-name’ ,
field selection criterion
( field specification )
resume-spec:
RESUME NO
REPLACE KEEPDICTIONARY
REUSE copy-spec
RESUME YES
field-name = X’byte-string’
(start ) ’character-string’
:end G’graphic-string’
N’graphic-string’
field specification:
| field-name
POSITION(start ) CHAR
:end BIT (length) strip-spec
MIXED strip-spec
BLOBF
CLOBF
MIXED
DBCLOBF
VARCHAR strip-spec
BIT
MIXED
BLOBF
CLOBF
MIXED
DBCLOBF
GRAPHIC strip-spec
EXTERNAL (length)
VARGRAPHIC strip-spec
SMALLINT
INTEGER
EXTERNAL
(length)
BIGINT
BINARY strip-spec
(length)
VARBINARY strip-spec
BINARY VARYING
decimal-spec
FLOAT
EXTERNAL (length)
DATE EXTERNAL
(length)
TIME EXTERNAL
(length)
TIMESTAMP EXTERNAL
(length)
ROWID
BLOB
CLOB
MIXED
DBCLOB
(34)
DECFLOAT
(16)
EXTERNAL
(length)
XML
WHITESPACE
PRESERVE
NULLIF field selection criterion
DEFAULTIF field selection criterion
strip spec:
BOTH TRUNCATE
STRIP
TRAILING (1)
LEADING 'strip-char'
X'strip-char'
Notes:
| 1 If you specify GRAPHIC, BINARY, VARBINARY, or VARGRAPHIC, you cannot specify
| 'strip-char'. You can specify only X'strip-char'.
decimal spec:
PACKED
DECIMAL
ZONED
EXTERNAL
,0
(length )
,scale
IDENTITYOVERRIDE
Allows unloaded data to be reloaded into a GENERATED ALWAYS
identity column of the same table using LOAD REPLACE. When this
option is used and input field specifications are coded, the identity column
must be specified, and NULLIF or DEFAULTIF cannot be used for it.
Specifying this option also allows LOAD INTO TABLE PART when the
identity column is part of the partitioning index.
IGNOREFIELDS
Indicates whether LOAD is to skip fields in the input data set that do not
correspond to columns in the target table. Examples of fields that do not
correspond to table columns are the DSN_NULL_IND_nnnnn,
| DSN_ROWID, DSN_IDENTITY, and DSN_RCTIMESTAMP fields that are
generated by the REORG utility.
NO
| Specifies that the LOAD process is not to skip any fields. The default
| is NO.
YES
Specifies that LOAD is to skip fields in the input data set that do not
correspond to columns in the target table.
Specifying YES can be useful if each input record contains a
variable-length field, followed by some variable-length data that you
do not want to load and then some data that you want to load.
Because of the variable-length field, you cannot use the POSITION
keyword to skip over the variable-length data that you do not want to
load. By specifying IGNOREFIELDS, you can give a field specification
for the variable-length data that you do not want to load; and by
giving it a name that is not one of the table column names, LOAD
skips the field without loading it.
Use this option with care, because it also causes fields to be skipped if
you intend to load a column but have misspelled the name.
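For example, in a sketch similar to the following one, FILLER is not a
column of the target table, so LOAD skips that portion of each input record;
all names and positions are illustrative only:
  LOAD DATA INDDN SYSREC
    INTO TABLE MYDB.MYTBL IGNOREFIELDS YES
    (EMPNO  POSITION(1:6)   CHAR(6),
     FILLER POSITION(7:9)   CHAR(3),
     SALARY POSITION(10:18) DECIMAL EXTERNAL(9,2))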
PART integer
Specifies that data is to be loaded into a partition of a partitioned table
| space. This option is valid only for partitioned table spaces, not including
| partition-by-growth table spaces.
integer is the number of the partition into which records are to be loaded.
The same partition number cannot be specified more than once if partition
parallelism has been requested. Any data that is outside the range of the
specified partition is not loaded. The maximum is 4096.
| LOAD INTO PART integer is not allowed if:
v An identity column is part of the partitioning index, unless
IDENTITYOVERRIDE is specified for the identity column GENERATED
ALWAYS
v A row ID is part of the partitioning index
v The table space is partitioned by growth
PREFORMAT
Specifies that the remaining pages are to be preformatted up to the
high-allocated RBA in the partition and its corresponding partitioning
index space. The preformatting occurs after the data is loaded and the
indexes are built.
RESUME
Specifies whether records are to be loaded into an empty or non-empty
partition. For nonsegmented table spaces, space that is occupied by rows that
are marked as deleted or by rows of dropped tables is not reused. If
the RESUME option is specified at the table space level, the RESUME
option is not allowed in the PART clause.
If you want the RESUME option to apply to the entire table space, use the
LOAD RESUME option. If you want the RESUME option to apply to a
particular partition, specify it by using PART integer RESUME.
NO
Loads records into an empty partition. If the partition is not empty,
and you have not used REPLACE, a message is issued, and the utility
job step terminates with a job step condition code of 8.
For nonsegmented table spaces that contain deleted rows or rows of
dropped tables, using the REPLACE keyword provides increased
efficiency.
The default is NO.
YES
Loads records into a non-empty partition. If the partition is empty, a
warning message is issued, but the partition is loaded.
REPLACE
Indicates that you want to replace only the contents of the partition that is
cited by the PART option, rather than the entire table space.
You cannot use LOAD REPLACE with the PART integer REPLACE option
of INTO TABLE. If you specify the REPLACE option, you must either
replace an entire table space, using LOAD REPLACE, or a single partition,
using the PART integer REPLACE option of INTO TABLE. You can,
however, use PART integer REPLACE with LOAD RESUME YES.
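For example, a statement similar to the following sketch replaces only the
data in partition 1 and leaves the other partitions intact; the table name and
partition number are illustrative only:
  LOAD DATA INDDN SYSREC
    INTO TABLE MYDB.MYTBL PART 1 REPLACE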
REUSE
Specifies, when used with the REPLACE option, that LOAD should
logically reset and reuse DB2-managed data sets without deleting and
redefining them. If you do not specify REUSE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If you specify REUSE with REPLACE on the PART specification (and not
for LOAD at the table space level), only the specified partitions are
logically reset. If you specify REUSE for the table space and REPLACE for
the partition, data sets for the replaced parts are logically reset.
KEEPDICTIONARY
Specifies that the LOAD utility is not to build a new dictionary. LOAD
retains the current dictionary and uses it for compressing the input data.
This option eliminates the cost that is associated with building a new
dictionary.
This keyword is valid only if a dictionary exists and the partition that is
being loaded has the COMPRESS YES attribute.
If the partition has the COMPRESS YES attribute, but no dictionary exists,
one is built and an error message is issued.
INDDN ddname
Specifies the data definition (DD) statement or template that identifies the
input data set for the partition. The record format for the input data set
must be fixed or variable. The data set must be readable by the basic
sequential access method (BSAM).
The ddname is the name of the input data set. The default is SYSREC.
INDDN can be a template name.
When loading LOB data using file reference variables, this input data set
should include the names of the sequential files that contain the LOB
column values. Each file can be a PDS member, a PDSE member, or a
separate HFS file.
If you specify INDDN, with or without DISCARDDN, in one INTO TABLE
PART specification and you supply more than one INTO TABLE PART
clause, you must specify INDDN in all INTO TABLE PART specifications.
Specifying INDDN at the partition level and supplying multiple PART
clauses, each with their own INDDN, enables load partition parallelism,
which can significantly improve performance. Loading all partitions in a
single job with load partition parallelism is recommended instead of
concurrent separate jobs whenever one or more nonpartitioned secondary
indexes are on the table space.
The field specifications apply separately to each input file. Therefore, if
multiple INTO TABLE PART INDDN clauses are used, field specifications
are required on each one.
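For example, a statement similar to the following sketch loads two
partitions in parallel from separate input data sets; the table name, DD
names, and field layout are illustrative only:
  LOAD DATA
    INTO TABLE MYDB.MYTBL PART 1 REPLACE INDDN PART1IN
    (EMPNO POSITION(1:6) CHAR(6),
     DEPT  POSITION(7:9) CHAR(3))
    INTO TABLE MYDB.MYTBL PART 2 REPLACE INDDN PART2IN
    (EMPNO POSITION(1:6) CHAR(6),
     DEPT  POSITION(7:9) CHAR(3))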
DISCARDDN ddname
Specifies the DD statement for a discard data set for the partition. The
discard data set holds copies of records that are not loaded (for example, if
they contain conversion errors). The discard data set also holds copies of
records that were loaded and then removed (due to unique index errors, or
referential or check constraint violations).
Input records are flagged for discarding during the RELOAD, INDEXVAL, and
ENFORCE phases. However, the utility does not write the discard data set
until the DISCARD phase, when it copies the flagged records from
the input data set to the discard data set.
The discard data set must be a sequential data set, and it must be
write-accessible by BSAM, with the same record format, record length, and
block size as the input data set.
The ddname is the name of the discard data set. DISCARDDN can be a
template name.
If you omit the DISCARDDN option, LOAD does not save discarded
records.
INCURSOR cursor-name
Specifies the cursor for the input data set. You must declare the cursor
before it is used by the LOAD utility. Use the EXEC SQL utility control
statement to define the cursor. You cannot load data into the same table on
which you defined the cursor.
The specified cursor can be used as part of the DB2 UDB family cross
loader function, which enables you to load data from any DRDA-compliant
remote server. For more information about using the cross loader function,
see “Loading data by using the cross-loader function” on page 268.
cursor-name is the cursor name. Cursor names that are specified with the
LOAD utility cannot be longer than eight characters.
You cannot use the INCURSOR option with the following options:
v SHRLEVEL CHANGE
v NOSUBS
v FORMAT UNLOAD
v FORMAT SQL/DS
v CONTINUEIF
v WHEN.
In addition, you cannot specify field specifications with the INCURSOR
option.
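For example, a sketch similar to the following statements declares a cursor
with the EXEC SQL utility control statement and then uses it as the LOAD
input; the cursor name, source table, and target table are illustrative only,
and the target table must not be the table on which the cursor is defined:
  EXEC SQL
    DECLARE C1 CURSOR FOR
    SELECT EMPNO, LASTNAME, SALARY FROM MYDB.EMP
  ENDEXEC
  LOAD DATA INCURSOR(C1) REPLACE
    INTO TABLE MYDB.EMPCOPY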
WHEN
Indicates which records in the input data set are to be loaded. If no WHEN
clause is specified (and if FORMAT UNLOAD was not used in the LOAD
statement), all records in the input data set are loaded into the specified
tables or partitions. (Data that is beyond the range of the specified
partition is not loaded.)
The option following WHEN describes a condition; input records that
satisfy the condition are loaded. Input records that do not satisfy any
WHEN clause of any INTO TABLE statement are written to the discard
data set, if one is being used.
| Character-string constants should be specified in LOAD utility control
| statements in the character set that matches the input data record. Specify
| EBCDIC constants in the LOAD control statement if your data is in
| EBCDIC and specify UNICODE constants if your data is in UNICODE.
| You may also code the WHEN condition using the hexadecimal form. For
| example, use (1:1)=X’31’ rather than (1:1)=’1’.
| SQL/DS='table-name'
| Is valid only when the FORMAT SQL/DS option is used on the LOAD
| statement.
| table-name is the name of a table that has been unloaded into the
| unload data set. The table name after INTO TABLE tells which DB2
| table the SQL/DS table is loaded into. Enclose the table name in
| quotation marks if the name contains a blank.
| If no WHEN clause is specified, input records from every SQL/DS
| table are loaded into the table that is specified after INTO TABLE.
| field-selection-criterion
| Describes a field and a character constant. Only those records in which
| the field contains the specified constant are to be loaded into the table
| that is specified after INTO TABLE.
| A field in a selection criterion must:
| v Contain a character or graphic string. No data type conversions are
| performed when the contents of the field in the input record are
| compared to a string constant.
| v Start at the same byte offset in each assembled input record. If any
| record contains varying-length strings, which are stored with length
| fields, that precede the selection field, they must be padded so that
| the start of the selection field is always at the same offset.
| The field and the constant do not need to be the same length. If they
| are not, the shorter of the two is padded before a comparison is made.
| Character and graphic strings are padded with blanks. Hexadecimal
| strings are padded with zeros.
| field-name
| Specifies the name of a field that is defined by a field-specification. If
| field-name is used, the start and end positions of the field are given
| by the POSITION option of the field specification.
| (start:end)
| Identifies column numbers in the assembled load record; the first
| column of the record is column 1. The two numbers indicate the
| starting and ending columns of a selection field in the load record.
| If :end is not used, the field is assumed to have the same length as
| the constant.
| X'byte-string'
| Identifies the constant as a string of hexadecimal characters. For
| example, the following WHEN clause specifies that a record is to
| be loaded if it has the value X'FFFF' in columns 33 through 34.
| WHEN (33:34) = X’FFFF’
| 'character-string'
| Identifies the constant as a string of characters. For example, the
| following WHEN clause specifies that a record is to be loaded if
| the field DEPTNO has the value D11.
| WHEN DEPTNO = ’D11’
| G'graphic-string'
| Identifies the constant as a string of double-byte characters. For
| example, the following WHEN clause specifies that a record is to
| be loaded if it has the specified value in columns 33 through 36.
| WHEN (33:36) = G’<**>’
v ROWID fields are varying length, and must contain a valid 2-byte binary
length field preceding the data; no intervening gaps are allowed between
ROWID fields and the fields that follow.
v LOB fields are varying length, and require a valid 4-byte binary length
field preceding the data; no intervening gaps are allowed between them
and the LOB fields that follow.
v Numeric data is assumed to be in the appropriate internal DB2 number
representation.
v The NULLIF or DEFAULTIF options cannot be used.
If any column in the output table does not have a field specification and is
defined as NOT NULL, with no default, the utility job step is terminated.
| Identity columns or row change timestamp columns can appear in the field
| specification only if you defined them with the GENERATED BY
| DEFAULT attribute.
field-name
Specifies the name of a field, which can be a name of your choice. If the
field is to be loaded, the name must be the name of a column in the table
that is named after INTO TABLE unless IGNOREFIELDS is specified. You
can use the field name as a vehicle to specify the range of incoming data.
See “Example 4: Loading data of different data types” on page 294 for an
example of loading selected records into an empty table space.
The starting location of the field is given by the POSITION option. If
POSITION is not used, the starting location is one column after the end of
the previous field.
LOAD determines the length of the field in one of the following ways, in
the order listed:
| 1. If the field has data type VARCHAR, VARGRAPHIC, VARBINARY,
ROWID, or XML, the length is assumed to be contained in a 2-byte
| binary field that precedes the data. For VARCHAR, VARBINARY, and
XML fields, the length is in bytes; for VARGRAPHIC fields, the length
field identifies the number of double-byte characters.
If the field has data type CLOB, BLOB, or DBCLOB, the length is
assumed to be contained in a 4-byte binary field that precedes the data.
For BLOB and CLOB fields, the length is in bytes; for DBCLOB fields,
| the length field identifies the number of double-byte characters.
2. If :end is used in the POSITION option, the length is calculated from
start and end. In that case, any length attribute after the CHAR,
| GRAPHIC, INTEGER, DECIMAL, FLOAT, or DECFLOAT specifications
is ignored.
3. The length attribute on the CHAR, GRAPHIC, INTEGER, DECIMAL,
| FLOAT, or DECFLOAT specifications is used as the length.
4. The length is taken from the DB2 field description in the table
definition, or it is assigned a default value according to the data type.
For DATE and TIME fields, the length is defined during installation.
For variable-length fields, the length is defined from the column in the DB2 table definition.
If a data type is not given for a field, its data type is assumed to be the
same as that of the column into which it is loaded, as given in the DB2
table definition.
POSITION(start:end)
Indicates where a field is in the assembled load record.
start and end are the locations of the first and last columns of the field; the
first column of the record is column 1. The option can be omitted.
Column locations can be specified as:
v An integer n, meaning an actual column number
v *, meaning one column after the end of the previous field
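For example, a field list similar to the following sketch mixes explicit start
and end positions, a start-only position, and * positions; all names,
positions, and lengths are illustrative only:
  (EMPNO    POSITION(1:6)  CHAR(6),
   HIREDATE POSITION(*)    DATE EXTERNAL(10),
   SALARY   POSITION(20)   DECIMAL EXTERNAL(9,2),
   COMM     POSITION(*)    DECIMAL PACKED)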
Data types in a field specification: The data type of the field can be specified by
any of the keywords that follow. Except for graphic fields, length is the length in
bytes of the input field.
All numbers that are designated EXTERNAL are in the same format in the input
records.
CHAR(length)
Specifies a fixed-length character string. If you do not specify length, the length
of the string is determined from the POSITION specification. If you do not
specify length or POSITION, LOAD uses the default length for CHAR, which is
determined from the length of the column in the table. See Table 31 on page
236 for more information on the default length for CHAR. You can also specify
CHARACTER and CHARACTER(length).
BIT
Specifies that the input field contains BIT data. If BIT is specified, LOAD
bypasses any CCSID conversions for the input data. If the target column
has the BIT data type attribute, LOAD bypasses any code page translation
for the input data.
MIXED
Specifies that the input field contains mixed SBCS and DBCS data. If
MIXED is specified, any required CCSID conversions use the mixed CCSID
for the input data. If MIXED is not specified, any such conversions use the
SBCS CCSID for the input data.
| BLOBF
| Indicates that the input field contains the name of a BLOB file that is
| to be loaded into a specified BLOB/XML column.
| CLOBF
| Indicates that the input field contains the name of a CLOB file that is
| to be loaded into a specified CLOB/XML column.
| DBCLOBF
| Indicates that the input field contains the name of a DBCLOB file that
| is to be loaded into a specified DBCLOB/XML column.
STRIP
Specifies that LOAD is to remove blanks (the default) or the specified
characters from the beginning, the end, or both ends of the data. LOAD
pads the CHAR field, so that it fills the rest of the column.
LOAD applies the strip operation before performing any character code
conversion or padding.
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning and end of the data. The default is
BOTH.
TRAILING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the end of the data.
LEADING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning of the data.
'strip-char'
Specifies a single-byte or double-byte character that LOAD is to strip
from the data.
Specify this character value in EBCDIC. Depending on the input
encoding scheme, LOAD applies SBCS CCSID conversion to the
strip-char value before it is used in the strip operation.
If the subtype of the column to be loaded is BIT or you want to specify
a strip-char value in an encoding scheme other than EBCDIC, use the
hexadecimal form (X'strip-char'). LOAD does not perform any CCSID
conversion if the hexadecimal form is used.
X'strip-char'
Specifies in hexadecimal form a single-byte or double-byte character
that LOAD is to strip from the data. For single-byte characters, specify
this value in the form X'hh', where hh is two hexadecimal characters.
For double-byte characters, specify this value in the form X'hhhh',
where hhhh is four hexadecimal characters.
Use the hexadecimal form to specify a character in an encoding scheme
other than EBCDIC. When you specify the character value in
hexadecimal form, LOAD does not perform any CCSID conversion.
If you specify a strip character in the hexadecimal format, you must
specify the character in the input encoding scheme.
TRUNCATE
Indicates that LOAD is to truncate the input character string from the right
if the string does not fit in the target column. LOAD performs the
truncation operation after any CCSID translation.
If the input data is BIT data, LOAD truncates the data at a byte boundary.
If the input data is SBCS or MIXED data, LOAD truncates the data at a
character boundary. (Double-byte characters are not split.) If a MIXED field
is truncated to fit a column, the truncated string can be shorter than the
specified column size. In this case, the string is padded on the right with
blanks in the output CCSID. If MIXED data is in EBCDIC, truncation preserves the SO
(shift-out) and SI (shift-in) characters around a DBCS string.
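For example, a field specification similar to the following sketch strips
trailing asterisks from the input value and truncates the result if it is still
longer than the target column; the field name and positions are illustrative
only:
  (ITEMCODE POSITION(1:12) CHAR STRIP TRAILING '*' TRUNCATE)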
VARCHAR
Specifies a character field of varying length. The length in bytes must be
specified in a 2-byte binary field preceding the data. (The length does not
include the 2-byte field itself.) The length field must start in the column that is
specified as start in the POSITION option. If :end is used, it is ignored.
BIT
Specifies that the input field contains BIT data. If BIT is specified, LOAD
bypasses any CCSID conversions for the input data. If the target column
has the BIT data type attribute, LOAD bypasses any code page translation
for the input data.
MIXED
Specifies that the input field contains mixed DBCS data. If MIXED is
specified, any required CCSID conversions use the mixed CCSID for the
input data. If MIXED is not specified, any such conversions use the SBCS
CCSID for the input data.
| BLOBF
| Indicates that the input field contains the name of a BLOB file that is
| to be loaded into a specified BLOB/XML column.
| CLOBF
| Indicates that the input field contains the name of a CLOB file that is
| to be loaded into a specified CLOB/XML column.
| DBCLOBF
| Indicates that the input field contains the name of a DBCLOB file that
| is to be loaded into a specified DBCLOB/XML column.
STRIP
Specifies that LOAD is to remove blanks (the default) or the specified
characters from the beginning, the end, or both ends of the data. LOAD
adjusts the VARCHAR length field to the length of the stripped data.
LOAD applies the strip operation before performing any character code
conversion or padding.
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning and end of the data. The default is
BOTH.
TRAILING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the end of the data.
LEADING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning of the data.
'strip-char'
Specifies a single-byte or double-byte character that LOAD is to strip
from the data.
Specify this character value in EBCDIC. Depending on the input
encoding scheme, LOAD applies SBCS CCSID conversion to the
strip-char value before it is used in the strip operation.
If the subtype of the column to be loaded is BIT or you want to specify
a strip-char value in an encoding scheme other than EBCDIC, use the
hexadecimal form (X'strip-char'). LOAD does not perform any CCSID
conversion if the hexadecimal form is used.
X'strip-char'
Specifies in hexadecimal form a single-byte or double-byte character
that LOAD is to strip from the data. For single-byte characters, specify
this value in the form X'hh', where hh is two hexadecimal characters.
For double-byte characters, specify this value in the form X'hhhh',
where hhhh is four hexadecimal characters.
Use the hexadecimal form to specify a character in an encoding scheme
other than EBCDIC. When you specify the character value in
hexadecimal form, LOAD does not perform any CCSID conversion.
If you specify a strip character in the hexadecimal format, you must
specify the character in the input encoding scheme.
TRUNCATE
Indicates that LOAD is to truncate the input character string from the right
if the string does not fit in the target column. LOAD performs the
truncation operation after any CCSID translation.
If the input data is BIT data, LOAD truncates the data at a byte boundary.
If the input data is character type data, LOAD truncates the data at a
character boundary. If mixed-character data is truncated to fit a
column of fixed size, the truncated string can be shorter than the specified
column size. In this case, the string is padded on the right with blanks in
the output CCSID.
GRAPHIC(length)
Specifies a fixed-length graphic type. You can specify both start and end for the
field specification.
If you use GRAPHIC, the input data must not contain shift characters. start
and end must indicate the starting and ending positions of the data itself.
length is the number of double-byte characters. The length of the field in bytes
is twice the value of length. If you do not specify length, the number of
double-byte characters is determined from the POSITION specification. If you
do not specify length or POSITION, LOAD uses the default length for
GRAPHIC, which is determined from the length of the column in the table. See
Table 31 on page 236 for more information on the default length for GRAPHIC.
For example, let *** represent three double-byte characters. Then, to describe
***, specify either POS(1:6) GRAPHIC or POS(1) GRAPHIC(3). A GRAPHIC field
that is described in this way cannot be specified in a field selection criterion.
STRIP
Specifies that LOAD is to remove blanks (the default) or the specified
characters from the beginning, the end, or both ends of the data.
LOAD applies the strip operation before performing any character code
conversion or padding.
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning and end of the data. The default is
BOTH.
TRAILING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the end of the data.
LEADING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning of the data.
X'strip-char'
Specifies the hexadecimal form of the double-byte character that LOAD
is to strip from the data. Specify this value in the form X'hhhh', where
hhhh is four hexadecimal characters.
You must specify the character in the input encoding scheme.
TRUNCATE
Indicates that LOAD is to truncate the input character string from the right
if the string does not fit in the target column. LOAD performs the
truncation operation after any CCSID translation.
LOAD truncates the data at a character boundary. Double-byte characters
are not split.
GRAPHIC EXTERNAL(length)
Specifies a fixed-length field of the graphic type with the external format. You
can specify both start and end for the field specification.
If you use GRAPHIC EXTERNAL, the input data must contain a shift-out
character in the starting position, and a shift-in character in the ending
position. Other than the shift characters, this field must have an even number
of bytes. The first byte of any pair must not be a shift character.
length is the number of double-byte characters. length for GRAPHIC
EXTERNAL does not include the number of bytes that are represented by shift
characters. The length of the field in bytes is twice the value of length. If you
do not specify length, the number of double-byte characters is determined from
the POSITION specification. If you do not specify length or POSITION, LOAD
uses the default length for GRAPHIC, which is determined from the length of
the column in the table. See Table 31 on page 236 for more information on the
default length for GRAPHIC.
For example, let *** represent three double-byte characters, and let < and >
represent shift-out and shift-in characters. Then, to describe <***>, specify
either POS(1:8) GRAPHIC EXTERNAL or POS(1) GRAPHIC EXTERNAL(3).
STRIP
Specifies that LOAD is to remove blanks (the default) or the specified
characters from the beginning, the end, or both ends of the data.
LOAD applies the strip operation before performing any character code
conversion or padding.
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning and end of the data. The default is
BOTH.
TRAILING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the end of the data.
LEADING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning of the data.
X'strip-char'
Specifies the hexadecimal form of the double-byte character that LOAD
is to strip from the data. Specify this value in the form X'hhhh', where
hhhh is four hexadecimal characters.
You must specify the character in the input encoding scheme.
TRUNCATE
Indicates that LOAD is to truncate the input character string from the right
if the string does not fit in the target column. LOAD performs the
truncation operation after any CCSID translation.
LOAD truncates the data at a character boundary. Double-byte characters
are not split.
VARGRAPHIC
Identifies a graphic field of varying length. The length, in double-byte
characters, must be specified in a 2-byte binary field preceding the data. (The
length does not include the 2-byte field itself.) The length field must start in
the column that is specified as start in the POSITION option. :end, if used, is
ignored.
VARGRAPHIC input data must not contain shift characters.
STRIP
Specifies that LOAD is to remove blanks (the default) or the specified
characters from the beginning, the end, or both ends of the data. LOAD
adjusts the VARGRAPHIC length field to the length of the stripped data
(the number of DBCS characters).
LOAD applies the strip operation before performing any character code
conversion or padding.
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning and end of the data. The default is
BOTH.
TRAILING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the end of the data.
LEADING
Indicates that LOAD is to remove occurrences of blank or the specified
strip character from the beginning of the data.
X'strip-char'
Specifies the hexadecimal form of the double-byte character that LOAD
is to strip from the data. Specify this value in the form X'hhhh', where
hhhh is four hexadecimal characters.
You must specify the character in the input encoding scheme.
TRUNCATE
Indicates that LOAD is to truncate the input character string from the right
if the string does not fit in the target column. LOAD performs the
truncation operation after any CCSID translation.
LOAD truncates the data at a character boundary. Double-byte characters
are not split.
SMALLINT
Specifies a 2-byte binary number. Negative numbers are in two’s complement
notation.
INTEGER
Specifies a 4-byte binary number. Negative numbers are in two’s complement
notation. You can also specify INT.
INTEGER EXTERNAL(length)
A string of characters that represent a number. The format is that of an SQL
numeric constant, as described in Chapter 2 of DB2 SQL Reference. If you do
not specify length, the length of the string is determined from the POSITION
specification. If you do not specify length or POSITION, LOAD uses the default
length for INTEGER, which is 4 bytes. See Table 31 on page 236 for more
information on the default length for INTEGER. You can also specify INT
EXTERNAL.
| BIGINT
| Specifies an 8-byte binary number. Negative numbers are in two’s complement
| notation.
| BINARY(length)
| Specifies a fixed-length binary string. If you do not specify length, the length of
| the string is determined from the POSITION specification. If you do not
| specify length or POSITION, LOAD uses the default length for BINARY, which
| is determined from the length of the column in the table. The default for
| X'strip-char' is hexadecimal zero (X'00'). No data conversion is applied to the
| field.
| STRIP
| Specifies that LOAD is to remove binary zeros (the default) or the specified
| X'strip-char' from the beginning, the end, or both ends of the data. LOAD
| pads the BINARY field, so that it fills the rest of the column.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function.
| BOTH
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the beginning and end of the data. The
| default is BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the beginning of the data.
| X'strip-char'
| Specifies, in hexadecimal form, a single-byte or double-byte character
| that LOAD is to strip from the data. For single-byte characters, specify
| this value in the form X'hh', where hh is two hexadecimal characters.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right
| if the string does not fit in the target column.
| LOAD truncates the data at a character boundary.
| VARBINARY
| Specifies a varying length binary string. The length in bytes must be specified
| in a 2-byte binary field preceding the data (the length does not include the
| 2-byte field itself). The length field must start in the column that is specified as
| start in the POSITION option. If :end is used, it is ignored. The default for
| X'strip-char' is hexadecimal zero (X'00'). No data conversion is applied to the
| field.
| STRIP
| Specifies that LOAD is to remove binary zeros (the default) or the specified
| characters from the beginning, the end, or both ends of the data. LOAD
| adjusts the VARBINARY length field to the length of the stripped data.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function. For details, see Chapter 5 of DB2 SQL Reference.
| BOTH
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the beginning and end of the data. The
| default is BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of binary zeros or the
| specified strip character from the beginning of the data.
| X'strip-char'
| Specifies, in hexadecimal form, a single-byte character that LOAD is to
| strip from the data. For single-byte characters, specify this value in the
| form X'hh', where hh is two hexadecimal characters.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right
| if the string does not fit in the target column.
| LOAD truncates the data at a character boundary.
DECIMAL PACKED
Specifies a number of the form ddd...ds, where d is a decimal digit that is
represented by four bits, and s is a 4-bit sign value. The plus sign (+) is
represented by A, C, E, or F, and the minus sign (-) is represented by B or D.
The maximum number of ds is the same as the maximum number of digits
that are allowed in the SQL definition. You can also specify DECIMAL, DEC,
or DEC PACKED.
DECIMAL ZONED
Specifies a number in the form znznzn...z/sn, where z, n, and s have the
following values:
n A decimal digit represented by the right 4 bits of a byte (called the
numeric bits)
z That digit’s zone, represented by the left 4 bits
s The right-most byte of the decimal operand; s can be treated as a zone
or as the sign value for that digit
The plus sign (+) is represented by A, C, E, or F, and the minus sign (-) is
represented by B or D. The maximum number of zns is the same as the
maximum number of digits that are allowed in the SQL definition. You can
also specify DEC ZONED.
DECIMAL EXTERNAL(length,scale)
Specifies a string of characters that represent a number. The format is that of
an SQL numeric constant, as described in Chapter 2 of DB2 SQL Reference.
length
Overall length of the input field, in bytes. If you do not specify length, the
length of the input field is determined from the POSITION specification. If
you do not specify length or POSITION, LOAD uses the default length for
DECIMAL EXTERNAL, which is determined by using decimal precision.
See Table 31 on page 236 for more information on the default length for
DECIMAL EXTERNAL.
scale
Specifies the number of digits to the right of the decimal point. scale must
be an integer greater than or equal to 0, and it can be greater than length.
The default is 0.
If scale is greater than length, or if the number of provided digits is less than
the specified scale, the input number is padded on the left with zeros until the
decimal point position is reached. If scale is greater than the target scale, the
source scale locates the implied decimal position. All fractional digits greater
than the target scale are truncated. If scale is specified and the target column
has a data type of small integer or integer, the decimal portion of the input
number is ignored. If a decimal point is present, its position overrides the field
specification of scale.
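For example, with a field that is defined as DECIMAL EXTERNAL(7,2)
(the length of 7 is illustrative only), an input value of ' 123.45' is loaded as
123.45 because the decimal point in the data overrides the scale, and an
input value of '  12345', which contains no decimal point, is also loaded as
123.45 because the scale of 2 places the implied decimal point two digits
from the right.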
FLOAT(length)
Specifies either a 64-bit floating-point number or a 32-bit floating-point
number. If length is between 1 and 21 inclusive, the number is 32 bits in the
S/390 (HFP) format:
Bit 0 Represents a sign (0 for plus and 1 for minus)
Bits 1-7 Represent an exponent
Bits 8-31 Represent a mantissa
If length is between 1 and 24 inclusive, the number is 32 bits in the IEEE (BFP)
format:
Bit 0 Represents a sign (0 for plus and 1 for minus)
Bits 1-8 Represent an exponent
Bits 9-31 Represent a mantissa
You can also specify REAL for single-precision floating-point numbers and
DOUBLE PRECISION for double-precision floating-point numbers.
FLOAT EXTERNAL(length)
Specifies a string of characters that represent a number. The format is that of
an SQL floating-point constant, as described in Chapter 2 of DB2 SQL Reference.
A specification of FLOAT(IEEE) or FLOAT(S390) does not apply for this format
(string of characters) of floating-point numbers.
If you do not specify length, the length of the string is determined from the
POSITION specification. If you do not specify length or POSITION, LOAD uses
the default length for FLOAT, which is 4 bytes for single precision and 8 bytes
for double precision. See Table 31 on page 236 for more information on the
default length for FLOAT.
DATE EXTERNAL(length)
Specifies a character string representation of a date. The length, if unspecified,
is the specified length on the LOCAL DATE LENGTH install option, or, if none
was provided, the default is 10 bytes. If you specify a length, it must be within
the range of 8 to 254 bytes.
Dates can be in any of the following formats. You can omit leading zeros for
month and day. You can include trailing blanks, but no leading blanks are
allowed.
v dd.mm.yyyy
v mm/dd/yyyy
v yyyy-mm-dd
v Any local format that your site defined at the time DB2 was installed
TIME EXTERNAL(length)
Specifies a character string representation of a time. The length, if unspecified,
is the specified length on the LOCAL TIME LENGTH install option, or, if none
was provided, the default is 8 bytes. If you specify a length, it must be within
the range of 4 to 254 bytes.
Times can be in any of the following formats:
v hh.mm.ss
v hh:mm AM
v hh:mm PM
v hh:mm:ss
v Any local format that your site defined at the time DB2 was installed
You can omit the mm portion of the hh:mm AM and hh:mm PM formats if mm is
equal to 00. For example, 5 PM is a valid time, and can be used instead of 5:00
PM.
TIMESTAMP EXTERNAL(length)
Specifies a character string representation of a timestamp. The default for length is 26
bytes. If you specify a length, it must be within the range of 19 to 26 bytes.
Timestamps can be in any of the following formats. Note that nnnnnn
represents the number of microseconds, and can be from 0 to 6 digits. You can
omit leading zeros from the month, day, or hour parts of the timestamp; you
can omit trailing zeros from the microseconds part of the timestamp.
v yyyy-mm-dd-hh.mm.ss
v yyyy-mm-dd-hh.mm.ss.nnnnnn
v yyyy-mm-dd hh:mm:ss.nnnnnn
See Chapter 2 of DB2 SQL Reference for more information about the timestamp
data type.
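For example, a field list similar to the following sketch loads date, time,
and timestamp values from character positions in the input record; the
names, positions, and lengths are illustrative only:
  (HIREDATE POSITION(1:10)  DATE EXTERNAL,
   STARTTM  POSITION(12:19) TIME EXTERNAL,
   CREATED  POSITION(21:46) TIMESTAMP EXTERNAL)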
ROWID
Specifies a row ID. The input data must be a valid value for a row ID; DB2
does not perform any conversions.
A field specification for a row ID column is not allowed if the row ID column
was created with the GENERATED ALWAYS option.
If the row ID column is part of the partitioning key, LOAD INTO TABLE PART
is not allowed; specify LOAD INTO TABLE instead.
BLOB
Specifies a BLOB field. You must specify the length in bytes in a 4-byte binary
field that precedes the data. (The length does not include the 4-byte field
itself.) The length field must start in the column that is specified as start in the
POSITION option. If :end is used, it is ignored.
CLOB
Specifies a CLOB field. You must specify the length in bytes in a 4-byte binary
field that precedes the data. (The length does not include the 4-byte field
itself.) The length field must start in the column that is specified as start in the
POSITION option. If :end is used, it is ignored.
MIXED
Specifies that the input field contains mixed SBCS and DBCS data. If
MIXED is specified, any required CCSID conversions use the mixed CCSID
for the input data; if MIXED is not specified, any such conversions use the
SBCS CCSID for the input data.
DBCLOB
Specifies a DBCLOB field. You must specify the length in double-byte
characters in a 4-byte binary field that precedes the data. (The length does not
include the 4-byte field itself.) The length field must start in the column that is
specified as start in the POSITION option. If :end is used, it is ignored.
| DECFLOAT (length)
| Specifies either a 128-bit decimal floating-point number or a 64-bit decimal
| floating-point number. The value of length must be either 16 or 34. If the
| length is 16, the number is in 64-bit decimal floating-point format. If
| the length is 34, the number is in 128-bit decimal floating-point format. If the
| length is not specified, the number is in 128-bit decimal floating-point format.
| DECFLOAT EXTERNAL (length)
| Specifies a string of characters that represent a number. The format is an SQL
| numeric constant. If you do not specify a length, the length of the string is
| determined from the POSITION specification. If you do not specify a length or
| POSITION, LOAD uses the default length for DECFLOAT. See Table 31 on
| page 236 for more information on the default length for DECFLOAT.
| XML
| Specifies that the input field type is XML. A field of type XML can be loaded
| only into an XML column. Specify XML when loading the XML value directly
| from the input record. If the format of the input record is nondelimited, a
| 2-byte length field must precede the actual data value.
| PRESERVE
| Specifies that the white space in the XML column is preserved. The default is
| not to preserve the white space.
| WHITESPACE
| Clarifies that white space is to be preserved.
DEFAULTIF field-selection-criterion
Describes a condition that causes the DB2 column to be loaded with its default
value. You can write the field-selection-criterion with the same options as
described under “field-selection-criterion” on page 233. If the contents of the
DEFAULTIF field match the provided character constant, the field that is
specified in field-specification is loaded with its default value.
If the DEFAULTIF field is defined by the name of a VARCHAR or
VARGRAPHIC field, DB2 takes the length of the field from the 2-byte binary
field that appears before the data portion of the VARCHAR or VARGRAPHIC
field.
| Character-string constants should be specified in LOAD utility control
| statements in the character set that matches the input data record. Specify
| EBCDIC constants in the LOAD control statement if your data is in EBCDIC
| and specify UNICODE constants if your data is in UNICODE. You may also
| code the DEFAULTIF condition using the hexadecimal form. For example, if
| the input data is in EBCDIC and the control statement is in UTF-8, use
| (1:1)=X’31’ in the condition rather than (1:1)=’1’.
You can use the DEFAULTIF attribute with the ROWID keyword. If the
condition is met, the column is loaded with a value that DB2 generates.
| You cannot specify the DEFAULTIF option for XML columns.
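For example, a field specification similar to the following sketch loads the
default value of the BONUS column whenever position 16 of the input
record contains the character D; the names and positions are illustrative
only:
  (EMPNO POSITION(1:6)  CHAR(6),
   BONUS POSITION(7:15) DECIMAL EXTERNAL(9,2) DEFAULTIF(16:16)='D')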
NULLIF field-selection-criterion
Describes a condition that causes the DB2 column to be loaded with NULL.
You can write the field-selection-criterion with the same options as described
under “field-selection-criterion” on page 233. If the contents of the NULLIF
field match the provided character constant, the field that is specified in
field-specification is loaded with NULL.
If the NULLIF field is defined by the name of a VARCHAR or VARGRAPHIC
field, DB2 takes the length of the field from the 2-byte binary field that appears
before the data portion of the VARCHAR or VARGRAPHIC field.
To load a null value into a BLOBF, CLOBF, or DBCLOBF field, use a null input
file name.
| Character-string constants should be specified in LOAD utility control
| statements in the character set that matches the input data record. Specify
| EBCDIC constants in the LOAD control statement if your data is in EBCDIC
| and specify UNICODE constants if your data is in UNICODE. You may also
| code the NULLIF condition using the hexadecimal form. For example, if the
| input data is in EBCDIC and the control statement is in UTF-8, use
| (1:1)=X’31’ in the condition rather than (1:1)=’1’.
The fact that a field in the output table is loaded with NULL does not change
the format or function of the corresponding field in the input record. The input
field can still be used in a field selection criterion. For example, assume that a
LOAD statement has the following field specification:
(FIELD1 POSITION(*) CHAR(4)
FIELD2 POSITION(*) CHAR(3) NULLIF(FIELD1=’SKIP’)
FIELD3 POSITION(*) CHAR(5))
You cannot use the NULLIF parameter with the ROWID keyword because row
ID columns cannot be null.
Field selection criterion
Describes a condition that causes the DB2 column to be loaded with NULL or
with its default value.
| If you are using LOAD for a partition-by-growth table space, you can load data
| only at the table space level, not at the partition level.
When loading data into a segmented table space, sort your data by table to ensure
that the data is loaded in the best physical organization.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. As an alternative to specifying an input data set, you can specify a cursor with the
INCURSOR option. For more information about cursors, see “Loading data by using
the cross-loader function” on page 268.
3. Required if referential constraints exist and ENFORCE(CONSTRAINTS) is specified
(This option is the default).
4. Used for tables with indexes.
5. Required for discard processing when loading one or more tables that have unique
indexes.
6. Required if a sort is done.
7. If you omit the DD statement for this data set, LOAD creates the data set with the
same record format, record length, and block size as the input data set.
8. Required for inline copies.
9. Required if any indexes are to be built or if a sort is required for processing errors.
10. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate
the data set. Otherwise, DFSORT dynamically allocates the temporary data set.
| 11. It is recommended that you use dynamic allocation by specifying SORTDEVT in the
| utility statement because dynamic allocation reduces the maintenance required of the
| utility job JCL.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table Table that is to be loaded. (If you want to load only one partition of a
table, you must use the PART option in the control statement.)
Defining work data sets: Use the formulas and instructions in Table 34 to calculate
the size of work data sets for LOAD. Each row in the table lists the DD name that
is used to identify the data set and either formulas or instructions that you should
use to determine the size of the data set. The key for the formulas is located at the
bottom of the table.
Table 34. Size of work data sets for LOAD jobs
Work data set  Size
SORTOUT        max(f,e)
ST01WKnn       2 × (maximum record length × numcols × (count + 2) × number of indexes)
SYSDISC        Same size as input data set
SYSERR         e
SYSMAP         v Simple table space for discard processing: m
               v Partitioned or segmented table space without discard processing: max(m,e)
SYSUT1         v Simple table space: max(k,e)
               v Partitioned or segmented table space: max(k,e,m)
               If you specify an estimate of the number of keys with the SORTKEYS option:
               v max(f,e) for a simple table space
               v max(f,e,m) for a partitioned or segmented table space
Note:
variable  meaning
k         Key calculation
f         Foreign key calculation
m         Map calculation
e         Error calculation
max()     Maximum value of the specified calculations
numcols   Number of key columns to concatenate when you collect frequent values from the
          specified index
count     Number of frequent values that DB2 is to collect
maximum record length
          Maximum record length of the SYSCOLDISTSTATS record that is processed when
          collecting frequency statistics (You can obtain this value from the RECLENGTH
          column in SYSTABLES.)
a. Count 0 for the first relationship in which the foreign key participates if
the index is not a data-partitioned secondary index. Count 1 if the index
is a data-partitioned secondary index.
b. Count 1 for subsequent relationships in which the foreign key
participates (if any).
4. Multiply count by the number of rows that are to be loaded.
Calculating the foreign key: f
If a mix of data-partitioned secondary indexes and nonpartitioned indexes
exists on the table that is being loaded or a foreign key exists that is exactly
indexed by a data-partitioned secondary index, use this formula:
max(longest foreign key + 15) × (number of extracted keys)
Otherwise, use this formula:
max(longest foreign key + 13) × (number of extracted keys)
Calculating the map: m
The data set must be large enough to accommodate one map entry (length = 21
bytes) per table row that is produced by the LOAD job.
Calculating the error: e
The data set must be large enough to accommodate one error entry (length =
560 bytes) per defect that is detected by LOAD (for example, conversion errors,
unique index violations, violations of referential constraints).
Calculating the number of possible defects:
– For discard processing, if the discard limit is specified, the number of
possible defects is equal to the discard limit.
If the discard limit is the maximum, calculate the number of possible defects
by using the following formula (a worked example follows this list):
number of input records +
(number of unique indexes × number of extracted keys) +
(number of relationships × number of extracted foreign keys)
– For nondiscard processing, the data set is not required.
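As a worked example (the input values are assumptions, not figures from this book),
suppose that the discard limit is the maximum, 1,000,000 records are loaded, the table
has 2 unique indexes with one key extracted per record, and 1 relationship with one
foreign key extracted per record. Then:
  number of possible defects = 1,000,000 + (2 × 1,000,000) + (1 × 1,000,000) = 4,000,000
  e = 4,000,000 × 560 bytes = 2,240,000,000 bytes (approximately 2.1 GB)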
Allocating twice the space that is used by the input data sets is usually adequate
for the sort work data sets. Two or three large SORTWKnn data sets are preferable
to several small ones. For more information, see DFSORT Application Programming
Guide.
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
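For example, a minimal JCL sketch that allocates three sort work data sets follows. The
unit name and space quantities are assumptions; adjust them so that the total space is
roughly 1.2 times the amount of data to be sorted:
  //SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(120,60))
  //SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(120,60))
  //SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(120,60))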
For example, assume that you have a variable-length column that contains
X'42C142C142C2', which might be interpreted as either six single-byte characters or
three double-byte characters. With the two-byte length field, use:
v X'0006'X'42C142C142C2' to signify six single-byte characters in a VARCHAR
column
v X'0003'X'42C142C142C2' to signify three double-byte characters in a
VARGRAPHIC column
Because rows with duplicate key values for unique indexes fail to be loaded, any
records that are dependent on such rows either:
v Fail to be loaded because they would cause referential integrity violations (if you
specify ENFORCE CONSTRAINTS)
v Are loaded without regard to referential integrity violations (if you specify
ENFORCE NO)
As a result, violations of referential integrity might occur. Such violations can be
detected by LOAD (without the ENFORCE(NO) option) or by CHECK DATA.
When you run a LOAD job with the REPLACE option but without the REUSE
option and the data set that contains the data is not user-managed, DB2 deletes
this data set before the LOAD and redefines a new data set with a control interval
that matches the page size.
| When you run LOAD REPLACE on a table space or partition that is in basic row
| format, LOAD REPLACE converts the table space or partition to reordered row
| format. If a table in the table space has an EDITPROC or a VALIDPROC, the
| table space or partition remains in basic row format after the LOAD REPLACE. To
| build a new dictionary for the new format, do not specify KEEPDICTIONARY
| when you convert from basic row format to reordered row format. This conversion
| by LOAD REPLACE applies only in new-function mode.
Using LOAD REPLACE with LOG YES: The LOAD REPLACE or PART REPLACE
with LOG YES option logs only the reset and not each deleted row. If you need to
see what rows are being deleted, use the SQL DELETE statement.
LOAD DATA
REPLACE
INTO TABLE DSN8910.DEPT
( DEPTNO POSITION (1) CHAR(3),
DEPTNAME POSITION (5) VARCHAR,
MGRNO POSITION (37) CHAR(6),
ADMRDEPT POSITION (44) CHAR(3),
LOCATION POSITION (48) CHAR(16) )
ENFORCE NO
Figure 32. Example of using LOAD to replace one table in a single-table table space
Replacing one table in a multiple-table table space: When using LOAD REPLACE
on a multiple-table table space, you must be careful because LOAD works on an
entire table space at a time. Thus, to replace all rows in a multiple-table table
space, you must work with one table at a time, by using the RESUME YES option
on all but the first table. For example, if you have two tables in a table space, you
need to do the following steps:
1. Use LOAD REPLACE on the first table as shown in the control statement in
Figure 33. This option removes data from the table space and replaces just the
data for the first table.
Figure 33. Example of using LOAD REPLACE on the first table in a multiple-table table
space
2. Use LOAD with RESUME YES on the second table as shown in the control
statement in Figure 34. This option adds the records for the second table
without destroying the data in the first table.
Figure 34. Example of using LOAD with RESUME YES on the second table in a
multiple-table table space
If you need to replace just one table in a multiple-table table space, you need to
delete all the rows in the table, and then use LOAD with RESUME YES. For
example, assume that you want to replace all the data in DSN8910.TDSPTXT
without changing any data in DSN8910.TOPTVAL. To do this, follow these steps:
1. Delete all the rows from DSN8910.TDSPTXT by using the following SQL
DELETE statement:
EXEC SQL
DELETE FROM DSN8910.TDSPTXT
ENDEXEC
Hint: The mass delete works most quickly on a segmented table space.
2. Use the LOAD job that is shown in Figure 35 to replace the rows in that table (see the sketch below).
Figure 35. Example of using LOAD with RESUME YES to replace one table in a
multiple-table table space
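Figure 35 is not reproduced here; the following is only a minimal sketch of such a job.
The field names and positions are placeholders, not the actual layout of
DSN8910.TDSPTXT:
  LOAD DATA INDDN SYSREC
    RESUME YES
    INTO TABLE DSN8910.TDSPTXT
    ( TXTCOL1 POSITION (1) CHAR(8),
      TXTCOL2 POSITION (9) CHAR(72) )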
| To include the data from the ROWID, identity, or row change timestamp column
| when you load the unloaded data into a table, define the ROWID, identity, or row
| change timestamp column with GENERATED BY DEFAULT. To use the generated
| LOAD statement, remove the IGNOREFIELDS keyword and change the dummy
| field names to the corresponding column names in the target table.
| To load the unloaded data into a compatible table that has identity columns that
| are defined as GENERATED ALWAYS, use one of the following techniques:
| v Use the combination of IGNOREFIELDS and the dummy DSN_IDENTITY
| field; LOAD then generates the identity column data (see the sketch after this
| list).
| v To load the unloaded identity column data, add the IDENTITYOVERRIDE
| keyword to the LOAD control statement. Change the dummy field name,
| DSN_IDENTITY, to the corresponding identity column name in the target table.
| v To load the unloaded data into a compatible table that has identity columns or
| ROWID columns that are defined as GENERATED BY DEFAULT, remove the
| IGNOREFIELDS keyword and change the dummy field names to the
| corresponding column names in the target table.
| v To load the unloaded data into a compatible table that has ROWID columns that
| are defined as GENERATED ALWAYS, using the combination of
| IGNOREFIELDS and the dummy DSN_ROWID field, load will generate the
| ROWID column data.
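The following sketch illustrates the first technique. The table name, the field names
other than DSN_IDENTITY, and the positions are placeholders. Because IGNOREFIELDS
YES is specified and DSN_IDENTITY does not name a column in the target table, LOAD
skips that field and generates the identity column values:
  LOAD DATA INDDN SYSREC RESUME YES
    INTO TABLE MYSCHEMA.ORDERS IGNOREFIELDS YES
    ( DSN_IDENTITY POSITION (1:4) INTEGER,
      ORDERDESC POSITION (5:34) CHAR(30) )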
If RESUME NO is specified and the target table is not empty, no data is loaded.
If RESUME YES is specified and the target table is empty, data is loaded.
LOAD always adds rows to the end of the existing rows, but index entries are
placed in key sequence.
To delete all the data in a table space, specify the input data set in the JCL as DD
DUMMY. LOAD REPLACE replaces all tables in the table space.
Loading partitions
If you use the PART clause of the INTO TABLE option, only the specified
partitions of a partitioned table are loaded. If you omit PART, the entire table is
loaded.
You can specify the REPLACE and RESUME options separately by partition. The
control statement in Figure 36 specifies that DB2 is to load data into the first and
second partitions of the employee table. Records with '0' in column 1 replace the
contents of partition 1; records with '1' in column 1 are added to partition 2; all
other records are ignored. (The following example control statement, which is
simplified to illustrate the point, does not list field specifications for all columns of
the table.)
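A minimal sketch that is consistent with that description follows. Field specifications are
omitted here, and the WHEN positions assume that the partitioning indicator is in
column 1 of the input records:
  LOAD DATA INDDN SYSREC
    INTO TABLE DSN8910.EMP PART 1 REPLACE
      WHEN (1:1) = ’0’
    INTO TABLE DSN8910.EMP PART 2 RESUME YES
      WHEN (1:1) = ’1’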
To load columns in an order that is different than the order of the columns in the
CREATE TABLE statement, you must code field specifications for each INTO
TABLE statement.
The following example assumes that your data is in separate input data sets. That
data is already sorted by partition, so you do not need to use the WHEN clause of
INTO TABLE. Placing the RESUME YES option before the PART option inhibits
concurrent partition processing while the utility is running.
LOAD DATA INDDN EMPLDS1 CONTINUEIF(72:72)=’X’
RESUME YES
INTO TABLE DSN8910.EMP REPLACE PART 1
The following example allows partitioning independence when more than one
partition is loaded concurrently.
LOAD DATA INDDN SYSREC LOG NO
INTO TABLE DSN8910.EMP PART 2 REPLACE
When index-based partitioning is used, LOAD INTO PART integer is not allowed if
an identity column is part of the partitioning index. When table-based partitioning
is used, LOAD INTO PART integer is not allowed if an identity column is used in a
| partitioning clause of the CREATE TABLE or ALTER TABLE statement. If
| IDENTITYOVERRIDE is used, these operations are allowed.
Coding your LOAD job with SHRLEVEL CHANGE and using partition parallelism
is equivalent to concurrent, independent insert jobs. For example, in a large
partitioned table space that is created with DEFINE NO, the LOAD utility starts
three tasks. The first task tries to insert the first row, which causes an update to the
DBD. The other two tasks time out while they wait to access the DBD. The first
task holds the lock on the DBD while the data sets are defined for the table space.
| v The XML column can be loaded from the input record when the total input
| record length is less than 32K. The XML column value can be placed in the input
| record with or without any other column values. The input
| record can be in delimited or non-delimited format. For a non-delimited format,
| the XML column is treated like a variable-length character field with a 2-byte length
| field preceding the XML value. For a delimited format, no length bytes are
| present.
| v The XML column can be loaded from a separate file whether the XML column
| length is less than 32K or not.
| Results:
| When you load XML documents into a table, and the XML value cannot be cast to
| the type that you specified when you created the index, the value is ignored
| without any warnings or errors, and the document is inserted into the table.
| When you insert XML documents into a table with XML indexes that are of type
| DECFLOAT, the values might be rounded when they are inserted. If the index is
| unique, the rounding might cause duplicates even if the original values are not
| exactly the same.
| DB2 does not compress an XML table space during the LOAD process. If the XML
| table space is defined with COMPRESS YES, the XML table space is compressed
| during REORG.
You are responsible for ensuring that the data in the file does not include the
chosen delimiters. If the delimiters are part of the file’s data, unexpected errors can
occur.
Table 35 lists the default hexadecimal values for the delimiter characters based on
encoding scheme.
Table 35. Default delimiter values for different encoding schemes
Character                   EBCDIC SBCS  EBCDIC DBCS/MBCS  ASCII/Unicode SBCS  ASCII/Unicode MBCS
Character string delimiter  X'7F'        X'7F'             X'22'               X'22'
Decimal point character     X'4B'        X'4B'             X'2E'               X'2E'
Column delimiter            X'6B'        X'6B'             X'2C'               X'2C'
Note: In most EBCDIC code pages, the hexadecimal values that are specified in
Table 35 are a double quotation mark (") for the character string delimiter, a
period (.) for the decimal point character, and a comma (,) for the column
delimiter.
Table 36 on page 263 lists the maximum allowable hexadecimal values for any
delimiter character based on the encoding scheme.
Table 37 identifies the acceptable data type forms for the delimited file format that
the LOAD and UNLOAD utilities use.
Table 37. Acceptable data type forms for delimited files.
For each data type, "Load" is the acceptable form for loading a delimited file, and
"Unload" is the form that is created by unloading a delimited file.
CHAR, VARCHAR
  Load: A delimited or non-delimited character string
  Unload: Character data that is enclosed by character delimiters. For VARCHAR,
  length bytes do not precede the data in the string.
GRAPHIC (any type)
  Load: A delimited or non-delimited character stream
  Unload: Data that is unloaded as a delimited character string. For VARGRAPHIC,
  length bytes do not precede the data in the string.
INTEGER (any type) (note 1)
  Load: A stream of characters that represents a number in EXTERNAL format
  Unload: Numeric data in external format.
DECIMAL (any type) (note 2)
  Load: A character string that represents a number in EXTERNAL format
  Unload: A string of characters that represents a number.
FLOAT (note 3)
  Load: A representation of a number in the range -7.2E+75 to 7.2E+75 in
  EXTERNAL format
  Unload: A string of characters that represents a number in floating-point notation.
BLOB, CLOB
  Load: A delimited or non-delimited character string
  Unload: Character data that is enclosed by character delimiters. Length bytes do
  not precede the data in the string.
DBCLOB
  Load: A delimited or non-delimited character string
  Unload: Character data that is enclosed by character delimiters. Length bytes do
  not precede the data in the string.
DATE
  Load: A delimited or non-delimited character string that contains a date value in
  EXTERNAL format
  Unload: Character string representation of a date.
TIME
  Load: A delimited or non-delimited character string that contains a time value in
  EXTERNAL format
  Unload: Character string representation of a time.
TIMESTAMP
  Load: A delimited or non-delimited character string that contains a timestamp
  value in EXTERNAL format
  Unload: Character string representation of a timestamp.
Note:
1. Field specifications of INTEGER or SMALLINT are treated as INTEGER EXTERNAL.
2. Field specifications of DECIMAL, DECIMAL PACKED, or DECIMAL ZONED are
   treated as DECIMAL EXTERNAL.
3. Field specifications of FLOAT, REAL, or DOUBLE are treated as FLOAT EXTERNAL.
LOAD requires access to the primary indexes on the parent tables of any loaded
tables. For simple, segmented, and partitioned table spaces, it drains all writers
from the parent table’s primary indexes. Other users cannot make changes to the
parent tables that result in an update to their own primary indexes. Concurrent
inserts and deletes on the parent tables are blocked, but updates are allowed for
columns that are not defined as part of the primary index.
Duplicate values of a primary key: A primary index must be a unique index and
must exist if the table definition is complete. Therefore, when you load a parent
table, you build at least its primary index. You need an error data set, and
probably also a map data set and a discard data set.
Invalid foreign key values: A dependent table has the constraint that the values of
its foreign keys must be values of the primary keys of corresponding parent tables.
By default, LOAD enforces that constraint in much the same way as it enforces the
uniqueness of key values in a unique index. First, it loads all records to the table.
Subsequently, LOAD checks the validity of the records with respect to the
constraints, identifies any invalid record by an error message, and deletes the
record from the table. You can choose to copy this record to a discard data set.
Again you need at least an error data set, and probably also a map data set and a
discard data set.
However, the project table has a primary key, the project number. In this case, the
record that is rejected by LOAD defines a project number, and any row in the
project activity table that refers to the rejected number is also rejected. The
summary report identifies those as causing secondary errors. If you use a discard
data set, records for both types of errors are copied to it.
Missing primary key values: The deletion of invalid records does not cascade to
other dependent tables that are already in place. Suppose now that the project and
project activity tables exist in separate table spaces, and that they are both
currently populated and possess referential integrity. In addition, suppose that the
data in the project table is now to be replaced (using LOAD REPLACE) and that
the replacement data for some department was inadvertently not supplied in the
input data. Rows that reference that department number might already exist in the
project activity table. LOAD, therefore, automatically places the table space that
contains the project activity table (and all table spaces that contain dependent
tables of any table that is being replaced) into CHECK-pending status.
The CHECK-pending status indicates that the referential integrity of the table
space is in doubt; it might contain rows that violate a referential constraint. DB2
places severe restrictions on the use of a table space in CHECK-pending status;
typically, you run the CHECK DATA utility to reset this status. For more
information, see “Resetting the CHECK-pending status” on page 288.
Consequences of ENFORCE NO: If you use the ENFORCE NO option, you tell
LOAD not to enforce referential constraints. Sometimes you have good reasons for
doing that, but the result is that the loaded table space might violate the
constraints. Hence, LOAD places the loaded table space in CHECK-pending status.
If you use REPLACE, all table spaces that contain any dependent tables of the
tables that were loaded are also placed in CHECK-pending status. You must reset
the status of each table before you can use any of the table spaces.
For example, the violations might occur because parent rows do not exist. In this
case, correcting the parent tables is better than deleting the dependent rows, so
ENFORCE NO is more appropriate than ENFORCE CONSTRAINTS. After
you correct the parent table, you can use CHECK DATA to reset the
CHECK-pending status.
Compressing data
You can use LOAD with the REPLACE, RESUME NO, or RESUME YES options to
build a compression dictionary. The RESUME NO option requires the table space
to be empty, and RESUME YES builds a dictionary only if the table space is
empty. If your table space, or a partition in a partitioned table space, is defined
with COMPRESS YES, the dictionary is created while records are loaded. After the
dictionary is completely built, the rest of the data is compressed as it is loaded. For
partition-by-growth table spaces, the utility builds only one dictionary, and the
| same dictionary pages are populated across all partitions. For XML table spaces
| that are defined with COMPRESS YES, compression does not occur until the first
| REORG.
The data is not compressed until the dictionary is built. You must use LOAD
REPLACE or RESUME NO to build the dictionary. To save processing costs, an
initial LOAD does not go back to compress the records that were used to build the
dictionary.
The number of records that are required to build a dictionary is dependent on the
frequency of patterns in the data. For large data sets, the number of rows that are
required to build the dictionary is a small percentage of the total number of rows
that are to be compressed. For the best compression results, build a new dictionary
whenever you load the data.
Consider using KEEPDICTIONARY if the last dictionary was built by REORG; the
REORG utility’s sampling method can yield more representative dictionaries than
LOAD and can thus mean better compression. REORG with KEEPDICTIONARY is
efficient because the data is not decompressed in the process.
Use KEEPDICTIONARY if you want to try to compress all the records during
LOAD, and if you know that the data has not changed much in content since the
last dictionary was built. An example of LOAD with the KEEPDICTIONARY
option is shown in Figure 37 on page 267.
LOAD DATA
REPLACE KEEPDICTIONARY
INTO TABLE DSN8910.DEPT
( DEPTNO POSITION (1) CHAR(3),
DEPTNAME POSITION (5) VARCHAR,
MGRNO POSITION (37) CHAR(6),
ADMRDEPT POSITION (44) CHAR(3),
LOCATION POSITION (48) CHAR(16) )
ENFORCE NO
IMS DPROP runs as a z/OS application and can extract data from VSAM and
physical sequential access method (SAM) files, as well as from DL/I databases. Using
IMS DPROP, you do not need to extract all the data in a database or data set. You
use a statement such as an SQL subselect to indicate which fields to extract and
which conditions, if any, the source records or segments must meet.
With JCL models that you edit, you can have IMS DPROP produce the statements
for a DB2 LOAD utility job. If you have more than one DB2 subsystem, you can
name the one that is to receive the output. IMS DPROP can generate LOAD control
statements in the job to relate fields in the extracted data to target columns in DB2
tables.
You have the following choices for how IMS DPROP writes the extracted data:
v 80-byte records, which are included in the generated job stream
v A separate physical sequential data set (which can be dynamically allocated by
IMS DPROP), with a logical record length that is long enough to accommodate
any row of the extracted data
In the first case, the LOAD control statements that are generated by IMS DPROP
include the CONTINUEIF option to describe the extracted data to DB2 LOAD.
In the second case, you can have IMS DPROP name the data set that contains the
extracted data in the SYSREC DD statement in the LOAD job. (In that case, IMS
DPROP makes no provision for transmitting the extracted data across a network.)
Normally, you do not need to edit the job statements that are produced by IMS
DPROP. However, in some cases you might need to edit; for example, if you want
to load character data into a DB2 column with INTEGER data type, you need to
edit the job statements. (DB2 LOAD does not consider CHAR and INTEGER data
to be compatible.)
IMS DPROP is a versatile tool that contains more control, formatting, and output
options than are described here. For more information about this tool, see IMS
DataPropagator: An Introduction.
To use the cross-loader function, you first need to declare a cursor by using the
EXEC SQL utility. Within the cursor definition, specify a SELECT statement that
identifies the result table that you want to use as the input data for the LOAD job.
| The result table cannot include XML columns. The column names in the SELECT
statement must be identical to the column names in the table that is being loaded.
You can use the AS clause in the SELECT list to change the column names that are
returned by the SELECT statement so that they match the column names in the
target table. The columns in the SELECT list do not need to be in the same order
as the columns in the target table. Also, the SELECT statement needs to refer to
any remote tables by their three-part name.
After you declare the cursor, specify the cursor name with the INCURSOR option
in the LOAD statement. You cannot load the input data into the same table on
which you defined the cursor. You can, however, use the same cursor to load
multiple tables.
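A minimal sketch of this sequence follows. The target table MYSCHEMA.DEPTCOPY is a
placeholder for a table whose column names match the SELECT list:
  EXEC SQL
    DECLARE C1 CURSOR FOR
    SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION
    FROM DSN8910.DEPT
  ENDEXEC
  LOAD DATA
    INCURSOR(C1)
    REPLACE
    INTO TABLE MYSCHEMA.DEPTCOPY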
When you submit the LOAD job, DB2 parses the SELECT statement in the cursor
definition and checks for errors. If the statement is invalid, the LOAD utility issues
an error message and identifies the condition that prevented the execution. If the
statement syntax is valid but an error occurs during execution, the LOAD utility
also issues an error message. The utility terminates when it encounters an error.
If no errors occur, the utility loads the result table that is identified by the cursor
into the specified target table according to the following rules:
v LOAD matches the columns in the input data to columns in the target table by
name, not by sequence.
v If the number of columns in the cursor is less than the number of columns in the
table that is being loaded, DB2 loads the missing columns with their default
values. If the missing columns are defined as NOT NULL without defaults, the
LOAD job fails.
v If you specify IGNOREFIELDS YES, LOAD skips any columns in the input data
that do not exist in the target table.
v If the data types in the target table do not match the data types in the cursor,
DB2 tries to convert the data as much as possible. If the conversion fails, the
LOAD job fails. You might be able to avoid these conversion errors by using
SQL conversion functions in the SELECT statement of the cursor declaration.
v If the encoding scheme of the input data is different than the encoding scheme
of the target table, DB2 converts the encoding schemes automatically.
v The sum of the lengths of all of the columns cannot exceed 32 KB.
v If the SELECT statement in the cursor definition specifies a table with at least
one LOB column and a ROWID that was created with the GENERATED
ALWAYS clause with a unique index on it, you cannot specify this ROWID
column in the SELECT list of the cursor.
| v If the SELECT statement in the cursor definition specifies a table with a row
| change timestamp column that was created with the GENERATED ALWAYS
| clause, you cannot specify this row change timestamp column in the SELECT list
| of the cursor.
Also, although you do not need to specify casting functions for any distinct types
in the input data or target table, you might need to add casting functions to any
additional WHERE clauses in the SQL.
For examples of loading data from a cursor, see “Sample LOAD control
statements” on page 292.
To create an inline copy, use the COPYDDN and RECOVERYDDN keywords. You
can specify up to two primary and two secondary copies. Inline copies are
produced during the RELOAD phase of LOAD processing.
The SYSCOPY record that is produced by an inline copy contains ICTYPE=F and
SHRLEVEL=R. The STYPE column contains an R if the image copy was produced
by LOAD REPLACE LOG(YES). It contains an S if the image copy was produced
by LOAD REPLACE LOG(NO). The data set that is produced by the inline copy is
logically equivalent to a full image copy with SHRLEVEL REFERENCE, but the
data within the data set differs in the following ways:
v Data pages might be out of sequence and some might be repeated. If pages are
repeated, the last one is always the correct copy.
v Space map pages are out of sequence and might be repeated.
v If the compression dictionary is rebuilt with LOAD, the set of dictionary pages
occurs twice in the data set, with the second set being the correct one.
The total number of duplicate pages is small, with a negligible effect on the
required space for the data set.
You must specify LOAD REPLACE. If you specify RESUME YES or RESUME NO
but not REPLACE, an error message is issued and LOAD terminates.
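A minimal sketch follows. The DD names that are passed to COPYDDN and
RECOVERYDDN are assumptions and must be defined in the job JCL:
  LOAD DATA INDDN SYSREC
    REPLACE
    COPYDDN(SYSCOPY1,SYSCOPY2)
    RECOVERYDDN(SYSRCPY1)
    INTO TABLE DSN8910.DEPT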
Improving performance
To improve LOAD utility performance, you can take the following actions:
v Use one LOAD DATA statement when loading multiple tables in the same table
space. Follow the LOAD statement with multiple INTO TABLE WHEN clauses.
v Run LOAD concurrently against separate partitions of a partitioned table space.
Alternatively, specify the INDDN and DISCARDDN keywords in your utility
JCL to invoke partition parallelism. This specification reduces the elapsed time
required for loading large amounts of data into partitioned table spaces.
Advantages of the SORTKEYS option: With SORTKEYS, index keys are passed in
memory rather than written to work files. Avoiding this I/O to the work files
improves LOAD performance.
You also reduce disk space requirements for the SYSUT1 and SORTOUT data sets,
especially if you provide an estimate of the number of keys to sort.
The SORTKEYS option reduces the elapsed time from the start of the RELOAD
phase to the end of the BUILD phase.
However, if the index keys are already in sorted order, or no indexes exist,
SORTKEYS does not provide any advantage.
You can reduce the elapsed time of a LOAD job for a table space or partition with
more than one defined index by specifying the parameters to invoke a parallel
index build. For more information, see “Building indexes in parallel for LOAD” on
page 275.
Estimating the number of keys: You can specify an estimate of the number of keys
for the job to sort. If the estimate is omitted or specified as 0, LOAD writes the
extracted keys to the work data set, which reduces the performance improvement
of using SORTKEYS.
If more than one table is being loaded, repeat the preceding steps for each table,
and sum the results.
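For example, the following sketch supplies an estimate on the SORTKEYS option; the
key estimate of 66000 is an assumed value, and the field specifications repeat those of
the earlier DEPT examples:
  LOAD DATA REPLACE INDDN SYSREC
    SORTKEYS 66000
    INTO TABLE DSN8910.DEPT
    ( DEPTNO POSITION (1) CHAR(3),
      DEPTNAME POSITION (5) VARCHAR,
      MGRNO POSITION (37) CHAR(6),
      ADMRDEPT POSITION (44) CHAR(3),
      LOCATION POSITION (48) CHAR(16) )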
Preformatting of a table space that contains a table that is used for query
processing can cause table space scans to read additional empty pages, extending
the elapsed time for these queries. LOAD or REORG PREFORMAT is not
recommended for tables that have a high ratio of reads to inserts if the reads result
in table space scans.
Preformatting boundaries: You can manage your own data sets or have DB2
manage the data sets. For user-managed data sets, DB2 does not delete and
reallocate them during utility processing. The size of the data set does not shrink
back to the original data set allocation size but either remains the same or increases
in size if additional space or data is added. This characteristic has implications
when LOAD or REORG PREFORMAT is used because of the preformatting that is
done for all free pages between the high-used RBA (or page) to the high-allocated
RBA. This preformatting includes secondary extents that have been allocated.
For DB2-managed data sets, DB2 deletes and reallocates them if you specify
REPLACE on the LOAD or REORG job. This results in the data sets being re-sized
to their original allocation size. They remain that size if the data that is being
reloaded does not fill the primary allocation and forces a secondary allocation. This
means the LOAD or REORG PREFORMAT option with DB2-managed data causes
at least the full primary allocation amount of a data set to be preformatted after
the reload of data into the table space.
For both user-managed and DB2-managed data sets, if the data set goes into
secondary extents during utility processing, the high-allocated RBA becomes the
end of the secondary extent, and that becomes the high value for preformatting.
Table space scans can also be elongated because empty preformatted pages are
read. Use the LOAD or REORG PREFORMAT option for table spaces that start out
empty and are filled through high insert activity before any query access is
performed against the table space. Mixing inserts and nonindexed queries against a
preformatted table space might have a negative impact on the query performance.
Tables 38, 39, and 40 identify the compatibility of data types for assignments and
comparisons. Y indicates that the data types are compatible. N indicates that the
data types are not compatible. D indicates the defaults that are used when you do
not specify the input data type in a field specification of the INTO TABLE
statement.
Notes:
1. Conversion applies when either the input data or the target table is Unicode.
Input fields with data types CHAR, CHAR MIXED, CLOB, DBCLOB, VARCHAR,
VARCHAR MIXED, GRAPHIC, GRAPHIC EXTERNAL, and VARGRAPHIC are
converted from the CCSIDs of the input file to the CCSIDs of the table space when
they do not match. For example:
v You specify the ASCII or UNICODE option for the input data, and the table
space is EBCDIC.
v You specify the EBCDIC or UNICODE option, and the table space is ASCII.
v You specify the ASCII or EBCDIC option, and the table space is Unicode.
v The CCSID option is specified, and the CCSIDs of the input data are not the
same as the CCSIDs of the table space.
CLOB, BLOB, and DBCLOB input field types cannot be converted to any other
field type.
You can also remove a specified character from the beginning, end, or both ends of
the data by specifying the STRIP option. This option is valid only with the CHAR,
| VARCHAR, GRAPHIC, VARGRAPHIC, BINARY, and VARBINARY data type
options. If you specify both the TRUNCATE and STRIP options, LOAD performs
the strip operation first. For example, if you specify both TRUNCATE and STRIP
for a field that is to be loaded into a VARCHAR(5) column, LOAD alters the
character strings as shown in Table 41. In this table, an underscore represents a
character that is to be stripped.
Table 41. Results of specifying both TRUNCATE and STRIP for data that is to be loaded into
a VARCHAR(5) column.
Specified STRIP option  Input string   String after strip operation  String that is loaded
STRIP BOTH              ‘_ABCDEFG_’    ‘ABCDEFG’                     ‘ABCDE’
STRIP LEADING           ‘_ABC_’        ‘ABC_’                        ‘ABC_’
STRIP TRAILING          ‘_ABC_DEF_’    ‘_ABC_DEF’                    ‘_ABC_’
For unique indexes, any two null values are assumed to be equal, unless the index
was created with the UNIQUE WHERE NOT NULL clause. In that case, if the key
is a single column, it can contain any number of null values, although its other
values must be unique.
Neither the loaded table nor its indexes contain any of the records that might have
produced an error. Using the error messages, you can identify faulty input records,
correct them, and load them again. If you use a discard data set, you can correct
the records there and add them to the table with LOAD RESUME.
LOAD uses parallel index build if all of the following conditions are true:
v More than one index needs to be built.
v The LOAD utility statement specifies a non-zero estimate of the number of keys
on the SORTKEYS option.
For a diagram of parallel index build processing, see Figure 77 on page 506.
You can either allow the utility to dynamically allocate the data sets that the SORT
phase needs, or provide the necessary data sets yourself. Select one of the
following methods to allocate sort work and message data sets:
Method 1: LOAD determines the optimal number of sort work and message data
sets.
Method 2: You control allocation of sort work data sets, while LOAD allocates
message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: You have the most control over rebuild processing; you must specify
both sort work and message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
| Note: Using this method does not eliminate the requirement for a UTPRINT DD
| card.
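A minimal JCL sketch for Method 3 with two subtask pairs follows. The unit name and
space quantities are assumptions:
  //SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(60,30))
  //SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(60,30))
  //SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(60,30))
  //SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(60,30))
  //UTPRIN01 DD SYSOUT=*
  //UTPRIN02 DD SYSOUT=*
  //UTPRINT  DD SYSOUT=*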
Data sets used: If you select Method 2 or 3 in the preceding information, use the
information provided here, along with “Determining the number of sort subtasks,”
“Allocation of sort subtasks” on page 277, and “Estimating the sort work file size”
on page 277 to define the necessary data sets.
Each sort subtask must have its own group of sort work data sets and its own
print message data set. Possible reasons to allocate data sets in the utility job JCL
rather than using dynamic allocation are:
v To control the size and placement of the data sets
v To minimize device contention
v To optimally utilize free disk space
v To limit the number of utility subtasks that are used to build indexes
The DD names SWnnWKmm define the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more
data sets that are to be used by that subtask pair. For example:
SW01WK01 The first sort work data set that is used by the subtask as it builds
the first index.
SW01WK02 The second sort work data set that is used by the subtask as it
builds the first index.
SW02WK01 The first sort work data set that is used by the subtask as it builds
the second index.
SW02WK02 The second sort work data set that is used by the subtask as it
builds the second index.
The DD names UTPRINnn define the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
v The number of subtask pairs equals the number of sort work data set groups
that are allocated.
v The number of subtask pairs equals the number of message data sets that are
allocated.
v If you allocate both sort work and message data set groups, the number of
subtask pairs equals the smallest number of data sets that are allocated.
Allocation of sort subtasks: LOAD attempts to assign one sort subtask pair for
each index that is to be built. If LOAD cannot start enough subtasks to build one
index per subtask pair, it allocates any excess indexes across the pairs (in the order
that the indexes were created), so that one or more subtask pairs might build more
than one index.
During parallel index build processing, LOAD assigns all foreign keys to the first
utility subtask pair. Remaining indexes are then distributed among the remaining
subtask pairs according to the creation date of the index. If a table space does not
participate in any relationships, LOAD distributes all indexes among the subtask
pairs according to the index creation date, assigning the first created index to the
first subtask pair.
Refer to Table 42 for conceptual information about subtask pairing when the
number of indexes (seven indexes) exceeds the available number of subtask pairs
(five subtask pairs).
Table 42. LOAD subtask pairing for a relational table space
Subtask pair Assigned index
SW01WKmm Foreign keys, fifth created index
SW02WKmm First created index, sixth created index
SW03WKmm Second created index, seventh created index
SW04WKmm Third created index
SW05WKmm Fourth created index
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys in all of the indexes that are being
processed by the subtask in order to calculate each sort work file size. After you
determine which indexes are assigned to which subtask pairs, use one of the
following formulas to calculate the required space:
v If the indexes being processed include a mixture of data-partitioned secondary
indexes and nonpartitioned indexes, use the following formula:
2 × (longest index key + 15) × (number of extracted keys)
v Otherwise, if only one type of index is being built, use the following formula:
2 × (longest index key + 13) × (number of extracted keys)
longest index key The length of the longest key that is to be
processed by the subtask. For the first subtask pair
for LOAD, compare the length of the longest key
and the length of the longest foreign key, and use
the larger value. For nonpadded indexes, longest
index key means the maximum possible length of a
key with all varying-length columns, padded to
their maximum lengths, plus 2 bytes for each
varying-length column.
number of extracted keys The number of keys from all indexes that are to be
sorted and that the subtask is to process.
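As a worked example (the key length and key count are assumptions), suppose that a
subtask pair builds only nonpartitioned indexes, the longest index key is 40 bytes, and
1,500,000 keys are extracted. Then the required sort work space for that pair is:
  2 × (40 + 13) × 1,500,000 = 159,000,000 bytes (approximately 152 MB)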
When loading into a segmented table space, LOAD leaves free pages, and free
space on each page, in accordance with the current values of the FREEPAGE and
PCTFREE parameters. (You can set those values with the CREATE TABLESPACE,
ALTER TABLESPACE, CREATE INDEX, or ALTER INDEX statements.) LOAD
leaves one free page after reaching the FREEPAGE limit for each table in the table
space.
| For XML table spaces, FREEPAGE and PCTFREE are not processed until the first
| REORG.
If you are replacing a partition, these preceding restrictions are relaxed; the
partition that is being replaced can be in the RECOVER-pending status, and its
corresponding index partition can be in the REBUILD-pending status. However, all
secondary indexes must not be in the page set REBUILD-pending status. See
Appendix C, “Advisory or restrictive states,” on page 895 for more information
about resetting a restrictive status.
available again when the affected partitions are rebuilt by using the
REBUILD INDEX utility, or recovered by using the RECOVER utility.
See Table 171 on page 901 for information about resetting the RECOVER-pending
status, Table 170 on page 900 for information about resetting the REBUILD-pending
status, and “REORG-pending status” on page 901 for information about resetting
the REORG-pending status.
Any field specification that describes the data is checked before a field procedure is
executed. That is, the field specification must describe the data as it appears in the
input record.
ROWID generated by default: The LOAD utility can set from input data columns
that are defined as ROWID GENERATED BY DEFAULT. The input field must be
specified as a ROWID. No conversions are allowed. The input data for a ROWID
column must be a unique, valid value for a row ID. If the value of the row is not
unique, a duplicate key violation occurs. If such an error occurs, the load fails. In
this case, you need to discard the duplicate value and re-run the LOAD job with a
new unique value, or allow DB2 to generate the value of the row ID.
You can use the DEFAULTIF attribute with the ROWID keyword. If the condition
is met, the column is loaded with a value that is generated by DB2. You cannot use
the NULLIF attribute with the ROWID keyword because row ID columns cannot
be null.
| Row change timestamp generated always: The row change timestamp column that
| is defined as GENERATED ALWAYS cannot be included in the field specification
| list unless you specify IGNOREFIELDS YES, because DB2 generates the timestamp
| value for this column.
v Load the LOB value directly from the input data set: Use this method only
when the sum of the lengths of all of the columns to be loaded, including the
LOB column, does not exceed 32 KB. To load a LOB value directly from the
input data set:
1. In the input data set, include the LOB value preceded by a 2-byte binary
field that contains the length of the LOB.
2. Specify CLOB, BLOB, or DBCLOB in the field specification portion of the
LOAD statement. These options indicate that the field in the input data set is
a LOB value. For example, to load a CLOB into the RESUME column, specify
something like RESUME POSITION(7) CLOB. This specification indicates that
position 7 of the input data set contains the length of the CLOB followed by
the CLOB value that is to be loaded into the RESUME column.
v Load the LOB value from a file that is listed in the input data set: When you
load a LOB value from a file, the LOB value can be greater than 32 KB (see the
sketch after this list). To load a LOB value from a file:
1. In the input data set, specify the names of the files that contain the LOB
values. Each file can be either a PDS, PDSE, or an HFS file.
2. Specify either BLOBF, CLOBF, or DBCLOBF in the field specification portion
of the LOAD statement. For example, to load a LOB into the RESUME
column of a table, specify something like RESUME POSITION(7) VARCHAR CLOBF.
This specification indicates that position 7 of the input data set contains the
name of a file from which a varying-length CLOB is to be loaded into the
RESUME column.
v Load data from another table: To transfer data from one location to another
location or from one table to another table at the same location, use a cursor.
This method of loading data is called the cross-loader function. For more
information about how to use this function, see “Loading data by using the
cross-loader function” on page 268.
When you use the cross-loader function, the LOB value can be greater than 32
KB. For this method, DB2 uses a separate buffer for LOB data and therefore
stores only 8 bytes per LOB column. The sum of the lengths of the non-LOB
columns plus the sum of 8 bytes per LOB column cannot exceed 32 KB.
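A minimal sketch of the file-based technique follows. The table name, column layout,
and positions are placeholders built around the RESUME POSITION(7) VARCHAR CLOBF
example above:
  LOAD DATA INDDN SYSREC
    INTO TABLE MYSCHEMA.EMPRESUME
    ( EMPNO POSITION (1:6) CHAR(6),
      RESUME POSITION (7) VARCHAR CLOBF )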
Use either the STATISTICS option or the RUNSTATS utility to collect statistics so
that the DB2 catalog statistics contain information about the newly loaded data.
Recording these new statistics enables DB2 to select SQL paths with accurate
information. Then rebind any application plans that depend on the loaded tables to
update the path selection of any embedded SQL statements.
| If you perform a LOAD operation on a base table that contains an XML column,
| DB2 does not collect inline statistics for the related XML table space or its indexes.
Collecting inline statistics for discarded rows: If you specify the DISCARDDN
and STATISTICS options and a row is found with check constraint errors or
conversion errors, the row is not loaded into the table and DB2 does not collect
inline statistics on it. However, the LOAD utility collects inline statistics prior to
discarding rows that have unique index violations or referential integrity
violations. In these cases, if the number of discarded rows is large enough to make
the statistics significantly inaccurate, run the RUNSTATS utility separately on the
table to gather the most accurate statistics.
Terminating LOAD
If you terminate LOAD by using the TERM UTILITY command during the reload
phase, the records are not erased. The table space remains in RECOVER-pending
status, and indexes remain in the REBUILD-pending status.
If you terminate LOAD by using the TERM UTILITY command during the sort or
build phases, the indexes that are not yet built remain in the REBUILD-pending
status.
If the LOAD job terminates during the RELOAD, SORT, BUILD, or SORTBLD
phases, both RESTART and RESTART(PHASE) phases restart from the beginning of
the RELOAD phase. However, restart of LOAD RESUME YES or LOAD PART
RESUME YES in the BUILD or SORTBLD phase results in message DSNU257I.
Table 45 lists the LOAD phases and their effects on any pending states when the
utility is terminated in a particular phase.
Table 45. LOAD phases and their effects on pending states when terminated.
Phase     Effect on pending status
Reload    v Places table space in RECOVER-pending status, then resets the status.
          v Places indexes in REBUILD-pending status.
          v Places table space in COPY-pending status.
          v Places table space in CHECK-pending status.
Build     v Resets REBUILD-pending status for nonunique indexes.
Indexval  v Resets REBUILD-pending status for unique indexes.
Enforce   v Resets CHECK-pending status for table space.
Restarting LOAD
You can restart LOAD at its last commit point (RESTART(CURRENT)) or at the
beginning of the phase during which operation ceased (RESTART(PHASE)). LOAD
output messages identify the completed phases; use the DISPLAY command to
identify the specific phase during which operation stopped.
Notes:
1. SYSMAP and SYSERR data sets might not be required for all load jobs. See Chapter 16,
“LOAD,” on page 205 for exact requirements.
2. If the SYSERR data set is not required and has not been provided, LOAD uses SYSUT1
as a work data set to contain error information.
3. You must not restart during the RELOAD phase if you specified SYSREC DD *. This
statement prevents internal commits from being taken, and RESTART performs like
RESTART(PHASE), except with no data back out. Also, you must not restart if your
SYSREC input consists of multiple, concatenated data sets.
4. The utility can be restarted with either RESTART or RESTART(PHASE). However,
because this phase does not take checkpoints, RESTART is always re-executed from the
beginning of the phase.
5. A LOAD RESUME YES job cannot be restarted in the BUILD or SORTBLD phase.
6. Use RESTART or RESTART(PHASE) to restart at the beginning of the RELOAD phase.
7. This utility can be restarted with either RESTART or RESTART(PHASE). However, the
utility can be re-executed from the last internal checkpoint. This is dependent on the
data sets that are used and whether any input data sets have been rewritten.
8. The SYSUT1 data set is required if the target table space is segmented or partitioned.
9. If report is required and this is a load without discard processing, SYSMAP is required
to complete the report phase.
10. Any job that terminated abnormally in the RELOAD, SORT, BUILD, or SORTBLD phase
restarts from the beginning of the RELOAD phase.
You can restart LOAD at its last commit point or at the beginning of the phase
during which operation ceased. LOAD output messages identify the completed
phases; use the DISPLAY command to identify the specific phase during which
operation stopped.
Restarting after an out-of-space condition: See “Restarting after the output data
set is full” on page 41 for guidance in restarting LOAD from the last commit point
after receiving an out-of-space condition.
Claims and drains: Table 47 shows which claim classes LOAD drains and the
restrictive states the utility sets.
Table 47. Claim classes of LOAD operations
Target                                        LOAD        LOAD PART   LOAD        LOAD PART
                                              SHRLEVEL    SHRLEVEL    SHRLEVEL    SHRLEVEL
                                              NONE        NONE        CHANGE      CHANGE
Table space, index, or physical partition     DA/UTUT     DA/UTUT     CW/UTRW     CW/UTRW
  of a table space or index space
Nonpartitioned secondary index (note 1)       DA/UTUT     DR          CW/UTRW     CW/UTRW
Data-partitioned secondary index (note 2)     DA/UTUT     DA/UTUT     CW/UTRW     CW/UTRW
Index logical partition (note 3)              None        DA/UTUT     None        CW/UTRW
Primary index (with ENFORCE option only)      DW/UTRO     DW/UTRO     CR/UTRW     CR/UTRW
RI dependents                                 CHKP (NO)   CHKP (NO)   CHKP (NO)   CHKP (NO)
Legend:
v CHKP (NO): Concurrently running applications do not see CHECK-pending status after
commit.
v CR: Claim the read claim class.
v CW: Claim the write claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
v None: Object is not affected by this utility.
v RI: Referential integrity
| Notes:
| 1. Includes the document ID indexes and node ID indexes over non-partitioned XML table
| spaces and XML indexes.
| 2. Includes document ID indexes and node ID indexes over partitioned XML table spaces.
| 3. Includes logical partitions of an XML index over partitioned table spaces.
Compatibility: Table 48 shows whether or not utilities are compatible with LOAD
and can run concurrently on the same target object. The target object can be a table
space, an index space, or a partition of a table space or index space.
Table 48. Compatibility of LOAD with other utilities
Action                                        LOAD SHRLEVEL NONE   LOAD SHRLEVEL CHANGE
BACKUP SYSTEM                                 Yes                  Yes
CHECK DATA DELETE NO                          No                   No
CHECK DATA DELETE YES                         No                   No
CHECK INDEX                                   No                   No
CHECK LOB                                     No                   No
COPY INDEXSPACE SHRLEVEL CHANGE               No                   Yes
COPY INDEXSPACE SHRLEVEL REFERENCE            No                   No
COPY TABLESPACE SHRLEVEL CHANGE               No                   Yes
COPY TABLESPACE SHRLEVEL REFERENCE            No                   No
COPYTOCOPY                                    No                   Yes
DIAGNOSE                                      Yes                  Yes
LOAD SHRLEVEL CHANGE                          No                   Yes
LOAD SHRLEVEL NONE                            No                   No
MERGECOPY                                     No                   Yes
MODIFY RECOVERY                               No                   Yes
MODIFY STATISTICS                             No                   Yes
QUIESCE                                       No                   No
REBUILD INDEX                                 No                   No
RECOVER (no options)                          No                   No
RECOVER ERROR RANGE                           No                   No
RECOVER TOCOPY or TORBA                       No                   No
REORG INDEX                                   No                   No
REORG TABLESPACE UNLOAD CONTINUE or PAUSE     No                   No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL      No                   No
REPAIR DUMP or VERIFY                         No                   No
REPAIR LOCATE KEY or RID DELETE or REPLACE    No                   No
REPAIR LOCATE TABLESPACE PAGE REPLACE         No                   No
REPORT                                        Yes                  No
RESTORE SYSTEM                                No                   No
RUNSTATS INDEX SHRLEVEL CHANGE                No                   Yes
SQL operations and other online utilities on the same target partition are
incompatible.
You can also remove the restriction by using one of these operations:
v LOAD REPLACE LOG YES
v LOAD REPLACE LOG NO with an inline copy
v REORG LOG YES
v REORG LOG NO with an inline copy
v REPAIR SET with NOCOPYPEND
If you use LOG YES and do not make an image copy of the table space,
subsequent recovery operations are possible but take longer than if you had made
an image copy.
Although CHECK DATA is usually preferred, you can also reset the
CHECK-pending status by using any of the following operations:
v Drop tables that contain invalid rows.
v Replace the data in the table space, by using LOAD REPLACE and enforcing
check and referential constraints.
v Recover all members of the table space that were set to a prior quiesce point.
v Use REPAIR SET with NOCHECKPEND.
You want to run CHECK DATA against the table space that contains the project
activity table to reset the status. First, review the description of DELETE
YES and exception tables. Then, when you run the utility, ensure the availability of
all table spaces that contain either parent tables or dependent tables of any table in
the table spaces that are being checked.
DELETE YES: This option deletes invalid records and resets the status, but it is
not the default. Use DELETE NO, the default, to find out quickly how large your
problem is; you can choose to correct it by reloading, rather than correcting the
current situation.
Exception tables: With DELETE YES, you do not use a discard data set to receive
copies of the invalid records; instead, you use another DB2 table called an
exception table. This section assumes that you already have an exception table
available for every table that is subject to referential or table check constraints. (For
instructions on creating them, see “Create exception tables” on page 71.)
If you use DELETE YES, you must name an exception table for every descendent
of every table in every table space that is being checked. Deletes that are caused by
CHECK DATA are not subject to any of the SQL delete rules; they cascade without
restraint to the lowest-level descendent.
If table Y is the exception table for table X, name it with the following clause in the
CHECK DATA statement:
FOR EXCEPTION IN X USE Y
Example: In the following example, CHECK DATA is to be run against the table
space that contains the project activity table. Assume that the exception tables
DSN8910.EPROJACT and DSN8910.EEPA exist.
CHECK DATA TABLESPACE DSN8D91A.PROJACT
DELETE YES
FOR EXCEPTION IN DSN8910.PROJACT USE DSN8910.EPROJACT
IN DSN8910.EMPPROJACT USE DSN8910.EEPA
SORTDEVT SYSDA
SORTNUM 4
If the statement does not name error or work data sets, the JCL for the job must
contain DD statements similar to the following DD statements:
//SYSERR DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSUT1 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTOUT DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//UTPRINT DD SYSOUT=A
When the two jobs are complete, what table spaces are in CHECK-pending status?
v If you enforced constraints when loading the project table, the table space is not
in CHECK-pending status.
v Because you did not enforce constraints on the project activity table, the table
space is in CHECK-pending status.
v Because you used LOAD RESUME (not LOAD REPLACE) when loading the
project activity table, its dependents (the employee-to-project-activity table) are
not in CHECK-pending status. That is, the operation might not delete any
parent rows from the project table, and therefore might not violate the referential
integrity of its dependent. However, if you delete records from PROJACT when
checking, you still need an exception table for EMPPROJACT.
Therefore you should check the data in the project activity table.
DB2 records the identifier of the first row of the table that might violate referential
or table check constraints. For partitioned table spaces, that identifier is in
SYSIBM.SYSTABLEPART; for nonpartitioned table spaces, that identifier is in
SYSIBM.SYSTABLES. The SCOPE PENDING option speeds the checking by
confining it to just the rows that might be in error.
Example: In the following example, CHECK DATA is to be run against the table
space that contains the project activity table after LOAD RESUME:
CHECK DATA TABLESPACE DSN8D91A.PROJACT
SCOPE PENDING
DELETE YES
FOR EXCEPTION IN DSN8910.PROJACT USE DSN8910.EPROJACT
IN DSN8910.EMPPROJACT USE DSN8910.EEPA
SORTDEVT SYSDA
SORTNUM 4
As before, the JCL for the job needs DD statements to define the error and sort
data sets.
To rebuild an index that is inconsistent with its data, use the REBUILD INDEX
utility.
updated as part of the insert operation. Because the LOAD utility inserts keys into
an auxiliary index, free space within the index might be consumed and index page
splits might occur. Consider reorganizing an index on the auxiliary table after
LOAD completes to introduce free space into the index for future inserts and
loads.
When you run LOAD with the REPLACE option, the utility updates this range of
used version numbers for indexes that are defined with the COPY NO attribute.
LOAD REPLACE sets the OLDEST_VERSION column to the current version
number, which indicates that only one version is active; DB2 can then reuse all of
the other version numbers.
Recycling of version numbers is required when all of the version numbers are
being used. All version numbers are being used when one of the following
situations is true:
v The value in the CURRENT_VERSION column is one less than the value in the
OLDEST_VERSION column.
v The value in the CURRENT_VERSION column is 15, and the value in the
OLDEST_VERSION column is 0 or 1.
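To see whether an index is approaching this situation, you can query the catalog. The following SQL is a sketch only; it assumes the OLDEST_VERSION and CURRENT_VERSION columns of SYSIBM.SYSINDEXES and the sample creator DSN8910:
SELECT NAME, CREATOR, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE CREATOR = 'DSN8910'
  ORDER BY NAME;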
You can also run REBUILD INDEX, REORG INDEX, or REORG TABLESPACE to
recycle version numbers for indexes that are defined with the COPY NO attribute.
To recycle version numbers for indexes that are defined with the COPY YES
attribute or for table spaces, run MODIFY RECOVERY.
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
| Notes:
| 1. The table space is set to ICOPY-pending status if the records are discarded and no pending status if the records
| are not discarded.
Sample LOAD control statements
Example 1: Specifying field positions. The LOAD control statement in Figure 38
specifies that the LOAD utility is to load the records from the data set that is
defined by the SYSREC DD statement into table DSN8910.DEPT. SYSREC is the
default input data set.
Each POSITION clause specifies the location of a field in the input record. In this
example, LOAD accepts the input that is shown in Figure 39 on page 293 and
interprets it as follows:
v The first 3 bytes of each record are loaded into the DEPTNO column of the
table.
v The next 36 bytes, including trailing blanks, are loaded into the DEPTNAME
column.
If this input column were defined as VARCHAR(36), the input data would need
to contain a 2-byte binary length field preceding the data. This binary field
would begin at position 4.
v The next three fields are loaded into columns that are defined as CHAR(6),
CHAR(3), and CHAR(16).
The RESUME YES clause specifies that the table space does not need to be empty;
new records are added to the end of the table.
LOAD DATA
RESUME YES
INTO TABLE DSN8910.DEPT
(DEPTNO POSITION (1:3) CHAR(3),
DEPTNAME POSITION (4:39) CHAR(36),
MGRNO POSITION (40:45) CHAR(6),
ADMRDEPT POSITION (46:48) CHAR(3),
LOCATION POSITION (49:64) CHAR(16))
Figure 39 on page 293 shows the input to the preceding LOAD job.
Example 3: Loading selected records into multiple tables. The control statement in
Figure 40 on page 294 specifies that the LOAD utility is to load certain data from
the EMPLDS input data set into tables DSN8910.EMP, SMITH.EMPEMPL, and
DSN8810.DEPT. The input data set is identified by the INDDN option. The WHEN
clauses indicate which records are to be loaded into each table. For the EMP and
DEPT tables, the utility is to load only records that begin with the string LKA. For
the EMPEMPL table, the utility is to load only records that begin with the string
ABC. The RESUME YES option indicates that the table space does not need to be
empty for the LOAD job to proceed. The new rows are added to the end of the
tables. This example assumes that the first two tables being loaded have exactly
the same format, and that the input data matches that format; therefore, no field
specifications are needed for those two INTO TABLE clauses. The third table has a
different format, so field specifications are required and are supplied in the
example.
The POSITION clauses specify the location of the fields in the input data for the
DEPT table. For each source record that is to be loaded into the DEPT table:
v The characters in positions 7 through 9 are loaded into the DEPTNO column.
v The characters in positions 10 through 35 are loaded into the DEPTNAME
column.
v The characters in positions 36 through 41 are loaded into the MGRNO column.
v The characters in positions 42 through 44 are loaded into the ADMRDEPT
column.
Figure 40. Example LOAD statement that loads selected records into multiple tables
Example 4: Loading data of different data types. The control statement in Figure 41
specifies that LOAD is to load data from the SYSRECPJ input data set into table
DSN8910.PROJ. The input data set is identified by the INDDN option. Assume that
the table space that contains table DSN8910.PROJ is currently empty.
For each input record, data is loaded into the specified columns (that is, PROJNO,
PROJNAME, DEPTNO, and so on) to form a table row. Any other PROJ columns
that are not specified in the LOAD control statement are set to the default value.
The POSITION clauses define the starting positions of the fields in the input data
set. The ending positions of the fields in the input data set are implicitly defined
either by the length specification of the data type (CHAR length) or the length
specification of the external numeric data type (LENGTH).
The numeric data that is represented in SQL constant format (EXTERNAL format)
is converted to the correct internal format by the LOAD process and placed in the
indicated column names. The two dates (PRSTDATE and PRENDATE) are
assumed to be represented by eight digits and two separator characters, as in the
USA format (for example, 11/15/2006). The length of the date fields is given as 10
explicitly, although in many cases, the default is the same value.
The COLDEL option indicates that the column delimiter is a comma (,). The
CHARDEL option indicates that the character string delimiter is a double
quotation mark ("). The DECPT option indicates that the decimal point character is
a period (.). You are not required to explicitly specify these particular characters,
because they are all defaults.
//*
//STEP3 EXEC DSNUPROC,UID=’JUQBU101.LOAD2’,TIME=1440,
// UTPROC=’’,
// SYSTEM=’SSTR’
//SYSERR DD DSN=JUQBU101.LOAD2.STEP3.SYSERR,
// DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
// SPACE=(4096,(20,20),,,ROUND)
//SYSDISC DD DSN=JUQBU101.LOAD2.STEP3.SYSDISC,
// DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
// SPACE=(4096,(20,20),,,ROUND)
//SYSMAP DD DSN=JUQBU101.LOAD2.STEP3.SYSMAP,
// DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
// SPACE=(4096,(20,20),,,ROUND)
//SYSUT1 DD DSN=JUQBU101.LOAD2.STEP3.SYSUT1,
// DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
// SPACE=(4096,(20,20),,,ROUND)
//UTPRINT DD SYSOUT=*
//SORTOUT DD DSN=JUQBU101.LOAD2.STEP3.SORTOUT,
// DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
// SPACE=(4096,(20,20),,,ROUND)
//SYSIN DD *
LOAD DATA
FORMAT DELIMITED COLDEL ’,’ CHARDEL ’"’ DECPT ’.’
INTO TABLE TBQB0103
(FILENO CHAR,
DATE1 DATE EXTERNAL,
TIME1 TIME EXTERNAL,
TIMESTMP TIMESTAMP EXTERNAL)
/*
//SYSREC DD *
"001", 2000-02-16, 00.00.00, 2000-02-16-00.00.00.0000
"002", 2001-04-17, 06.30.00, 2001-04-17-06.30.00.2000
"003", 2002-06-18, 12.30.59, 2002-06-18-12.30.59.4000
"004", 1991-08-19, 18.59.30, 1991-08-19-18.59.30.8000
"005", 2000-12-20, 24.00.00, 2000-12-20-24.00.00.0000
/*
Some of the data that is to be loaded into a single row spans more than one input
record. In this situation, an X in column 72 indicates that the input record contains
fields that are to be loaded into the same row as the fields in the next input record.
In the LOAD control statement, CONTINUEIF(72:72)='X' indicates that LOAD is to
concatenate any input records that have an X in column 72 with the next record
before loading the data.
For each assembled input record (that is, after the concatenation), fields are loaded
into the DSN8910.TOPTVAL table columns (that is, MAJSYS, ACTION, OBJECT ...,
DSPINDEX) to form a table row. Any columns that are not specified in the LOAD
control statement are set to the default value.
The POSITION clauses define the starting positions of the fields in the assembled
input records. Starting positions are numbered from the first column of the
internally assembled input record, not from the start of the input records in the
sequential data set. The ending positions of the fields are implicitly defined by the
length specification of the data type (CHAR length).
No conversions are required to load the input character strings into their
designated columns, which are also defined to be fixed-length character strings.
However, because columns INFOTXT, HELPTXT, and PFKTXT are defined as 79
characters in length and the strings that are being loaded are 71 characters in
length, those strings are padded with blanks as they are loaded.
Figure 43. Example of concatenating multiple input records before loading the data
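The control statement itself appears in Figure 43 and is not reproduced in this extract. As a rough illustration of the technique only, a statement of the following shape concatenates the flagged records and loads the fixed-length columns; the positions and lengths shown are assumptions, not the values from the figure:
LOAD DATA
  CONTINUEIF(72:72)='X'              -- concatenate records flagged in column 72
  INTO TABLE DSN8910.TOPTVAL
  (MAJSYS   POSITION(1)  CHAR(3),    -- illustrative position and length
   ACTION   POSITION(4)  CHAR(3),    -- illustrative position and length
   INFOTXT  POSITION(7)  CHAR(71),   -- 71-byte input padded into a CHAR(79) column
   DSPINDEX POSITION(79) CHAR(2))    -- illustrative position and length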
Example 7: Loading null values. The control statement in Figure 44 specifies that
data from the SYSRECST data set is to be loaded into the specified columns in
table SYSIBM.SYSSTRINGS. The input data set is identified by the INDDN option.
The NULLIF option for the ERRORBYTE and SUBBYTE columns specifies that if
the input field contains a blank, LOAD is to place a null value in the indicated
column for that particular row. The DEFAULTIF option for the TRANSTAB column
indicates that the utility is to load the default value for this column if the input
field value is GG. The CONTINUEIF option indicates that LOAD is to concatenate
any input records that have an X in column 80 with the next record before loading
the data.
The CONTINUEIF option indicates that before loading the data LOAD is to
concatenate any input records that have an X in column 72 with the next record.
The POSITION clauses define the starting positions of the fields in the input data
set. The ending positions of the fields in the input data set are implicitly defined
by the length specification of the data type (CHAR length). In this case, the
characters in positions 1 through 3 are loaded into the ACTNO column, the
characters in positions 5 through 10 are loaded into the ACTKWD column, and the
characters in position 13 onward are loaded into the ACTDESC column. Because
the ACTDESC column is of type VARCHAR, the input data needs to contain a
2-byte binary field that contains the length of the character field. This binary field
begins at position 13.
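The statement itself is not reproduced in this extract. Reconstructed from the description above, and assuming the sample activity table DSN8910.ACT and its data types, it would look roughly like this:
LOAD DATA
  CONTINUEIF(72:72)='X'
  INTO TABLE DSN8910.ACT              -- assumed table name
  (ACTNO   POSITION(1:3)  INTEGER EXTERNAL(3),
   ACTKWD  POSITION(5:10) CHAR(6),
   ACTDESC POSITION(13)   VARCHAR)    -- 2-byte length field starts at position 13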
Example 10: Loading data by using a parallel index build. The control statement in
Figure 47 on page 299 specifies that data from the SYSREC input data set is to be
loaded into table DSN8910.DEPT. Assume that 22 000 rows need to be loaded into
table DSN8910.DEPT, which has three indexes. In this example, the SORTKEYS
option is used to improve performance by forcing a parallel index build. The
SORTKEYS option specifies 66 000 as an estimate of the number of keys to sort in
parallel during the SORTBLD phase. (This estimate was computed by using the
calculation that is described in “Improved performance with SORTKEYS” on page
271.) Because more than one index needs to be built, LOAD builds the indexes in
parallel.
The CONTINUEIF option indicates that, before loading the data, LOAD is to
concatenate any input records that have a plus sign (+) in column 79 and a plus
sign (+) in column 80 with the next record.
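Pulling these details together, a sketch of the statement might look as follows. Only SORTKEYS 66000 and the CONTINUEIF condition come from the description above; the field specifications are borrowed from Example 1 and are assumptions here:
LOAD DATA
  SORTKEYS 66000
  CONTINUEIF(79:80)='++'
  INTO TABLE DSN8910.DEPT
  (DEPTNO   POSITION(1:3)   CHAR(3),
   DEPTNAME POSITION(4:39)  CHAR(36),
   MGRNO    POSITION(40:45) CHAR(6),
   ADMRDEPT POSITION(46:48) CHAR(3),
   LOCATION POSITION(49:64) CHAR(16))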
Example 11: Creating inline copies. The LOAD control statement in Figure 48 on
page 300 specifies that the LOAD utility is to load data from the SYSREC data set
into the specified columns of table ADMF001.TB0S3902. See “Example 1: Specifying
field positions” on page 292 for an explanation of the POSITION clauses.
COPYDDN(COPYT1) indicates that LOAD is to create inline copies and write the
primary image copy to the data set that is defined by the COPYT1 template. This
template is defined in one of the preceding TEMPLATE control statements. For
more information about TEMPLATE control statements, see “Syntax and options of
the TEMPLATE control statement ” on page 641 of the TEMPLATE chapter. To
create an inline copy, you must also specify the REPLACE option, which indicates
that any data in the table space is to be replaced.
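The statements themselves are in Figure 48 and the preceding TEMPLATE control statements, which are not reproduced in this extract. A minimal sketch of the pattern, with an assumed data set name pattern for the COPYT1 template and the field list omitted, is:
TEMPLATE COPYT1                       -- DSN pattern is an assumption, for illustration only
  DSN &DB..&TS..COPY1
  UNIT SYSDA DISP(NEW,CATLG,CATLG)
LOAD DATA INDDN SYSREC
  REPLACE                             -- required when an inline copy is requested
  COPYDDN(COPYT1)                     -- primary inline image copy written through COPYT1
  INTO TABLE ADMF001.TB0S3902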
Example 12: Collecting statistics. The example in Figure 49 on page 302 is similar
to example 11, except that the STATISTICS option and other related options have
been added so that during the LOAD job, DB2 also gathers statistics for the table
space. Gathering these statistics eliminates the need to run the RUNSTATS utility
after completing the LOAD operation.
REPORT YES indicates that the statistics are to be sent to SYSPRINT as output.
UPDATE ALL and HISTORY ALL indicate that all collected statistics are to be
updated in the catalog and catalog history tables.
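In effect, the statistics-related keywords are added to the statement from example 11. A sketch of that addition follows; the TABLE(ALL) and INDEX(ALL) scope is an assumption, not taken from Figure 49:
LOAD DATA INDDN SYSREC
  REPLACE COPYDDN(COPYT1)
  STATISTICS TABLE(ALL) INDEX(ALL)    -- assumed scope
  REPORT YES UPDATE ALL HISTORY ALL
  INTO TABLE ADMF001.TB0S3902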
Example 13: Loading Unicode data. The following control statement specifies that
Unicode data from the REC1 input data set is to be loaded into table
ADMF001.TBMG0301. The UNICODE option specifies the type of input data. Only
data that satisfies the condition that is specified in the WHEN clause is to be
loaded. The CCSID option specifies the three coded character set identifiers for the
input file: one for SBCS data, one for mixed data, and one for DBCS data. LOG
YES indicates that logging is to occur during the LOAD job.
LOAD DATA INDDN REC1 LOG YES REPLACE
UNICODE CCSID(00367,01208,01200)
INTO TABLE "ADMF001 "."TBMG0301"
WHEN(00004:00005 = X’0003’)
Example 14: Loading data from multiple input data sets by using partition
parallelism. The LOAD control statement in Figure 50 on page 304 contains a
series of INTO TABLE statements that specify which data is to be loaded into
which partitions of table DBA01.TBLX3303. For each INTO TABLE statement:
v Data is to be loaded into the partition that is identified by the PART option. For
example, the first INTO TABLE statement specifies that data is to be loaded into
the first partition of table DBA01.TBLX3303.
v Data is to be loaded from the data set that is identified by the INDDN option.
For example, the data from the PART1 data set is to be loaded into the first
partition.
v Any discarded rows are to be written to the data set that is specified by the
DISCARDDN option. For example, rows that are discarded during the loading
of data from the PART1 data set are written to the DISC1 data set.
v The data is loaded into the specified columns (EMPNO, LASTNAME, and
SALARY).
LOAD uses partition parallelism to load the data into these partitions.
The TEMPLATE utility control statement defines the data set naming convention
for the data set that is to be dynamically allocated during the following LOAD job.
The name of the template is ERR3. The ERRDDN option in the LOAD statement
specifies that any errors are to be written to the data set that is defined by this
ERR3 template. For more information about TEMPLATE control statements, see
“Syntax and options of the TEMPLATE control statement ” on page 641 in the
TEMPLATE chapter.
TEMPLATE ERR3
DSN &UT..&JO..&ST..ERR3&MO.&DAY.
UNIT SYSDA DISP(NEW,CATLG,CATLG)
LOAD DATA
REPLACE
ERRDDN ERR3
INTO TABLE DBA01.TBLX3303
PART 1
INDDN PART1
DISCARDDN DISC1
(EMPNO POSITION(1) CHAR(6),
LASTNAME POSITION(8) VARCHAR(15),
SALARY POSITION(25) DECIMAL(9,2))
.
.
.
INTO TABLE DBA01.TBLX3303
PART 5
INDDN PART5
DISCARDDN DISC5
(EMPNO POSITION(1) CHAR(6),
LASTNAME POSITION(8) VARCHAR(15),
SALARY POSITION(25) DECIMAL(9,2))
/*
Example 15: Loading data from another table in the same system by using a
declared cursor. The following LOAD control statement specifies that all rows that
are identified by cursor C1 are to be loaded into table MYEMP. The INCURSOR
option is used to specify cursor C1, which is defined in the EXEC SQL utility
control statement. Cursor C1 points to the rows that are returned by executing the
statement SELECT * FROM DSN8810.EMP. In this example, the column names in
table DSN8810.EMP are the same as the column names in table MYEMP. Note that
the cursor cannot be defined on the same table into which DB2 is to load the data.
EXEC SQL
DECLARE C1 CURSOR FOR SELECT * FROM DSN8810.EMP
ENDEXEC
LOAD DATA
INCURSOR(C1)
REPLACE
INTO TABLE MYEMP
STATISTICS
Example 16: Loading data partitions in parallel from a remote site by using a
declared cursor. The LOAD control statement in Figure 51 on page 305 specifies
that for each specified partition of table MYEMPP, the rows that are identified by
the specified cursor are to be loaded. In each INTO TABLE statement, the PART
option specifies the partition number, and the INCURSOR option specifies the
cursor. For example, the rows that are identified by cursor C1 are to be loaded into
the first partition. The data for each partition is loaded in parallel.
Each cursor is defined in a separate EXEC SQL utility control statement and points
to the rows that are returned by executing the specified SELECT statement. These
SELECT statements are being executed on a table at a remote server, so the
three-part name is used to identify the table. In this example, the column names in
table CHICAGO.DSN8810.EMP are the same as the column names in table
MYEMPP.
EXEC SQL
DECLARE C1 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
WHERE EMPNO <= ’099999’
ENDEXEC
EXEC SQL
DECLARE C2 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
WHERE EMPNO > ’099999’ AND EMPNO <= ’199999’
ENDEXEC
EXEC SQL
DECLARE C3 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
WHERE EMPNO > ’199999’ AND EMPNO <= ’299999’
ENDEXEC
EXEC SQL
DECLARE C4 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
WHERE EMPNO > ’299999’ AND EMPNO <= ’999999’
ENDEXEC
LOAD DATA
INTO TABLE MYEMPP PART 1 REPLACE INCURSOR(C1)
INTO TABLE MYEMPP PART 2 REPLACE INCURSOR(C2)
INTO TABLE MYEMPP PART 3 REPLACE INCURSOR(C3)
INTO TABLE MYEMPP PART 4 REPLACE INCURSOR(C4)
Figure 51. Example of loading data partitions in parallel using a declared cursor
Example 17: Loading LOB data from a file. The LOAD control statement in
Figure 52 specifies that data from 000130DSN!10.SDSNIVPD(DSN8R130) is to be
loaded into the MY_EMP_PHOTO_RESUME table. The characters in positions 1
through 6 are loaded into the EMPNO column, and the characters starting from
position 7 are to be loaded into the RESUME column. CLOBF indicates that the
characters in position 7 are the name of a file from which a CLOB is to be loaded.
REPLACE indicates that the new data will replace any existing data. Although no
logging is to be done, as indicated by the LOG NO option, the table space is not to
be set in COPY-pending state, because NOCOPYPEND is specified.
//*****************************************************************
//* LOAD LOB from file
//*****************************************************************
//LOADIT EXEC DSNUPROC,UID='LOADIT',TIME=1440,
// UTPROC='',
// SYSTEM='DSN'
//SYSREC DD *
000130DSN!10.SDSNIVPD(DSN8R130)
//SYSUT1 DD DSN=SYSADM.LOAD.SYSUT1,DISP=(MOD,DELETE,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTOUT DD DSN=SYSADM.LOAD.SORTOUT,DISP=(MOD,DELETE,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSIN DD *
LOAD DATA
REPLACE LOG NO NOCOPYPEND
SORTKEYS 1
INTO TABLE MY_EMP_PHOTO_RESUME
(EMPNO POSITION(1:6) CHAR(6),
RESUME POSITION(7) VARCHAR CLOBF)
MERGECOPY operates on the image copy data sets of a table space, and not on
the table space itself.
Output: Output from the MERGECOPY utility consists of one of the following
types of copies:
v A new single incremental image copy
v A new full image copy
You can create the new image copy for the local or recovery site.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute MERGECOPY, but only
on a table space in the DSNDB01 or DSNDB06 database.
Syntax diagram
(The syntax diagram is not reproduced here; the surviving fragment shows the WORKDDN ddname option, with SYSUT1 as the default ddname.)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name that contains
only table spaces. You can specify one LIST keyword per MERGECOPY
control statement. Do not specify LIST with the TABLESPACE keyword.
MERGECOPY is invoked once for each table space in the list. This utility
will only process clone data if the CLONE keyword is specified. The use of
CLONED YES on the LISTDEF statement is not sufficient. For more
information about LISTDEF specifications, see Chapter 15, “LISTDEF,” on
page 185.
TABLESPACE database-name.table-space-name
Specifies the table space that is to be copied, and, optionally, the database
to which it belongs.
database-name
The name of the database that the table space belongs to. The default
is DSNDB04.
table-space-name
The name of the table space whose incremental image copies are to be
merged.
You cannot specify DSNUM and LIST in the same MERGECOPY control
statement. Use PARTLEVEL on the LISTDEF instead. If image copies were
taken by data set (rather than by table space), MERGECOPY must use the
copies by data set.
| CLONE
| Indicates that MERGECOPY is to process only image copy data sets that
| were taken against clone objects. This utility will only process clone data if
| the CLONE keyword is specified. The use of CLONED YES on the
| LISTDEF statement is not sufficient.
WORKDDN ddname
Specifies a DD statement for a temporary data set or template, which is to
be used for intermediate merged output. WORKDDN is optional.
ddname is the DD name. The default is SYSUT1.
Use the WORKDDN option if you are not able to allocate enough data sets
to execute MERGECOPY; in that case, a temporary data set is used to hold
intermediate output. If you omit the WORKDDN option, you might find
that only some of the image copy data sets are merged. When
MERGECOPY has ended, a message is issued that tells the number of data
sets that exist and the number of data sets that have been merged. To
continue the merge, repeat MERGECOPY with a new output data set.
NEWCOPY
Specifies whether incremental image copies are to be merged with the full
image copy. NEWCOPY is optional.
NO
Merges incremental image copies into a single incremental image copy
but does not merge them with the full image copy. The default is NO.
YES
Merges all incremental image copies with the full image copy to form a
new full image copy.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the output image copy data sets at the
local site. ddname1 is the primary output image copy data set. ddname2 is
the backup output image copy data set. COPYDDN is optional.
The default is COPYDDN(SYSCOPY), where SYSCOPY identifies the
primary data set.
The COPYDDN keyword specifies either a DD name or a TEMPLATE
name specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
RECOVERYDDN (ddname3,ddname4)
Specifies the DD statements for the output image copy data sets at the
recovery site. You can have a maximum of two output data sets; the
outputs are identical. ddname3 is the primary output image copy data set.
ddname4 is the backup output image copy data set. RECOVERYDDN is
optional. No default value exists for RECOVERYDDN.
The RECOVERYDDN keyword specifies either a DD name or a
TEMPLATE name specification from a previous TEMPLATE control
statement. If utility processing detects that the specified name is both a DD
name in the current job step and a TEMPLATE name, the utility uses the
DD name. For more information about TEMPLATE specifications, see
Chapter 31, “TEMPLATE,” on page 641.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space
Object whose copies are to be merged.
Data sets: The input data sets for the merge operation are dynamically allocated.
To merge incremental copies, allocate in the JCL a work data set (WORKDDN) and
up to two new copy data sets (COPYDDN) for the utility job. You can allocate the
data sets to tape or disk. If you allocate them to tape, you need an additional tape
drive for each data set.
With the COPYDDN option of MERGECOPY, you can specify the DD names for
the output data sets. The option has the format COPYDDN (ddname1,ddname2), where
ddname1 is the DD name for the primary output data set in the system that
currently runs DB2, and ddname2 is the DD name for the backup output data set in
the system that currently runs DB2. The default for ddname1 is SYSCOPY.
The RECOVERYDDN option of MERGECOPY lets you specify the output image
copy data sets at the recovery site. The option has the format RECOVERYDDN
(ddname3, ddname4), where ddname3 is the DD name for the primary output image
copy data set at the recovery site, and ddname4 is the DD name for the backup
output data set at the recovery site.
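As a sketch, a MERGECOPY statement that uses both options might look like the following; the DD names are illustrative and must correspond to DD statements or TEMPLATE names in the job:
MERGECOPY TABLESPACE DSN8D91A.DSN8S91C
  NEWCOPY YES
  COPYDDN(LOCALP,LOCALB)              -- primary and backup copies at the local site
  RECOVERYDDN(REMOTEP,REMOTEB)        -- primary and backup copies at the recovery site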
Defining the work data set: The work data set should be at least equal in size to
the largest input image copy data set that is being merged. Use the same DCB
attributes that are used for the image copy data sets.
If NEWCOPY is YES, the utility inserts an entry for the new full image copy into
the SYSIBM.SYSCOPY catalog table.
In either case, if any of the input data sets might not be allocated, or you did not
specify a temporary work data set (WORKDDN), the utility performs a partial
merge.
For large table spaces, consider using MERGECOPY to create full image copies.
With the NEWCOPY YES option, however, you can merge a full image copy of a
table space with incremental copies of the table space and of individual data sets
to make a new full image copy of the table space.
If the image copy data sets that you want to merge reside on tape, refer to “Retaining
tape mounts” on page 413 for general information about specifying the appropriate
parameters on the DD statements.
To delete all log information that is included in a copy that MERGECOPY makes,
perform the following steps:
1. Find the record of that copy in the catalog table SYSIBM.SYSCOPY. You can
find it by selecting database name, table space name, and date (columns
DBNAME, TSNAME, and ICDATE).
2. Column START_RBA contains the RBA of the last image copy that
MERGECOPY used. Find the record of the image copy that has the same value
of START_RBA.
3. In that record, find the date in column ICDATE. You can use MODIFY
RECOVERY to delete all copies and log records for the table space that were
made before that date.
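Worked through for the sample table space DSN8S91C in database DSN8D91A (names assumed here for illustration), steps 1 and 3 might look like this:
-- Step 1 (SQL, run for example through SPUFI): locate the SYSCOPY row for the merged copy.
SELECT ICDATE, ICTYPE, HEX(START_RBA) AS START_RBA
  FROM SYSIBM.SYSCOPY
  WHERE DBNAME = 'DSN8D91A' AND TSNAME = 'DSN8S91C';

-- Step 3 (utility control statement): delete copies and log records made before
-- the ICDATE found above; the date shown is a placeholder.
MODIFY RECOVERY TABLESPACE DSN8D91A.DSN8S91C DELETE DATE(20061115)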
RECOVER uses the LOG RBA of image copies to determine the starting point in
the log that is needed for recovery. Normally, a timestamp directly corresponds to
a LOG RBA. Because of this, and because MODIFY uses dates to clean up recovery
history, you might decide to use dates to delete old archive log tapes. This decision
might cause a problem if you use MERGECOPY. MERGECOPY inserts the LOG
RBA of the last incremental image copy into the SYSCOPY row that is created for
the new image copy. The date that is recorded in the ICDATE column of SYSCOPY
row is the date that MERGECOPY was executed.
See “Restarting after the output data set is full” on page 41 for guidance in
restarting MERGECOPY from the last commit point after receiving an out-of-space
condition.
Table 52 shows the restrictive state that the utility sets on the target object.
Table 52. Claim classes of MERGECOPY operations
Target                       MERGECOPY
Table space or partition     UTRW
Legend:
UTRW - Utility restrictive state, read and write access allowed
MERGECOPY can run concurrently on the same target object with any utility
except the following utilities:
v COPY TABLESPACE
v LOAD
v MERGECOPY
v MODIFY
v RECOVER
v REORG TABLESPACE
v UNLOAD (only when unloading from the same image copy data set)
Example 2: Creating merged incremental copies and using template switching. Each
MERGECOPY control statement in Figure 54 on page 316 specifies that
MERGECOPY is to merge incremental image copies from the specified table space
into a single incremental image copy for that table space. For each control
statement, the COPYDDN option specifies that the output image copies are to be
written to data sets that are defined by the T1 template. The T1 template has
specified the LIMIT option. This means that the output image copies are to be
written to DASD, if the output image copy size is less than 5 MB. If the limit is
exceeded, template switching from template T1 to template T5 takes place and the
output image copies are to be written to TAPE. This template is defined in the
TEMPLATE utility control statement. For more information about TEMPLATE
utility control statements, see “Syntax and options of the TEMPLATE control
statement ” on page 641 in the TEMPLATE chapter.
Example 3: Creating a merged full image copy. The control statement in Figure 55
specifies that MERGECOPY is to merge all incremental image copies with the full
image copy from table space DSN8S91C to create a new full image copy.
For each full and incremental SYSCOPY record that is deleted from
SYSIBM.SYSCOPY, the utility returns a message identifying the name of the copy
data set.
For information about deleting SYSLGRNX rows, see “Deleting SYSLGRNX and
SYSCOPY rows for a single partition or the entire table space” on page 322.
If MODIFY RECOVERY deletes at least one SYSCOPY record and the target table
space or partition is not recoverable, the target object is placed in COPY-pending
status.
For table spaces and indexes that are defined with COPY YES, the MODIFY
RECOVERY utility updates the OLDEST_VERSION column of the following
catalog tables:
v SYSIBM.SYSTABLESPACE
v SYSIBM.SYSTABLEPART
v SYSIBM.SYSINDEXES
v SYSIBM.SYSINDEXPART
For more information about how and when MODIFY RECOVERY updates these
tables, see “The effect of MODIFY RECOVERY on version numbers” on page 325.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
Syntax diagram
(Linearized form of the syntax diagram:)

MODIFY RECOVERY {LIST listdef-name |
                 TABLESPACE [database-name.]table-space-name [DSNUM integer]}
  [CLONE]
  {DELETE {AGE integer | AGE (*) | DATE integer | DATE (*)} |
   RETAIN {LAST (integer) | LOGLIMIT | GDGLIMIT [LAST (integer) | LOGLIMIT]}}

DSNUM ALL is the default when DSNUM is not specified.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name that contains
only table spaces. You can specify one LIST keyword per MODIFY RECOVERY
control statement. Do not specify LIST with the TABLESPACE keyword.
MODIFY is invoked once for each table space in the list. This utility will only
process clone data if the CLONE keyword is specified. The use of CLONED
YES on the LISTDEF statement is not sufficient. For more information about
LISTDEF specifications, see Chapter 15, “LISTDEF,” on page 185.
TABLESPACE database-name.table-space-name
Specifies the database and the table space for which records are to be deleted.
database-name Specifies the name of the database to which the table space
belongs. database-name is optional. The default is DSNDB04.
table-space-name
Specifies the name of the table space.
DSNUM integer
Identifies a single partition or data set of the table space for which records are
to be deleted; ALL deletes records for the entire data set and table space.
integer is the number of a partition or data set.
The default is ALL.
For a partitioned table space, integer is its partition number. The maximum is
4096.
| For a nonpartitioned table space, use the data set integer at the end of the data
| set name as cataloged in the VSAM catalog. If image copies are taken by
| partition or data set and you specify DSNUM ALL, the table space is placed in
| COPY-pending status if a full image copy of the entire table space does not
| exist. The data set name has the following format, where y is either I or J, z is
| either a 1 or 2, and nnn is the data set integer.
| catname.DSNDBx.dbname.tsname.y000z.Annn
| If you specify DSNUM n, MODIFY RECOVERY does not delete any SYSCOPY
| records for the partitions that have an RBA greater than that of the earliest
| point to which the entire table space could be recovered. That point might
| indicate a full image copy, a LOAD operation with LOG YES or a REORG
| operation with LOG YES.
See “Deleting SYSLGRNX and SYSCOPY rows for a single partition or the
entire table space” on page 322 for more information about specifying
DSNUM.
| CLONE
| Indicates that MODIFY RECOVERY is to delete SYSCOPY records and
| SYSLGRNX records for only clone objects. If CLONE is not specified, only
| records for the base objects are deleted. This utility will only process clone data
| if the CLONE keyword is specified. The use of CLONED YES on the LISTDEF
| statement is not sufficient.
DELETE
Indicates that records are to be deleted. See the DSNUM description for
restrictions on deleting partition statistics.
AGE integer
| Deletes all SYSCOPY and SYSLGRNX records that are older than a
| specified number of days. SYSLGRNX records that meet the age
| deletion criteria specified will be deleted even if no SYSCOPY records
| are deleted.
| integer is the number of days, and can range from 0 to 32767. Records
| that are created today are of age 0 and cannot be deleted by this
| option.
| (*) deletes all records, regardless of their age.
DATE integer
| Deletes all SYSCOPY and SYSLGRNX records that are written before a
| specified date. SYSLGRNX records that meet the date deletion criteria
| specified will be deleted even if no SYSCOPY records are deleted.
| integer can be in eight- or six-character format. You must specify a year
| (yyyy or yy), month (mm), and day (dd) in the form yyyymmdd or
| yymmdd. DB2 checks the system clock and converts six-character dates
| to the most recent, previous eight-character equivalent.
| (*) deletes all records, regardless of the date on which they were
| written.
| RETAIN
| Indicates that records are to be retained. Older records are deleted.
| The following object is named in the utility control statement and does not require
| a DD statement in the JCL:
| Table space
| Object for which records are to be deleted.
| v If image copies exist at only the data set level for a nonpartitioned table space,
| use DSNUM ALL. If DSNUM integer is used, SYSLGRNX records are not
| deleted.
| v If image copies exist at only the table space or index space level, use DSNUM
| ALL.
| v If image copies exist at both the partition level and the table space or index
| space level, use DSNUM ALL. Restriction: In this case, if you use DSNUM
| integer, MODIFY RECOVERY does not delete any SYSCOPY or SYSLGRNX
| records that are newer than the oldest recoverable point at the table space or
| index space level.
| v If image copies exist at both the data set level and the table space level for a
| nonpartitioned table space, use DSNUM ALL. Restriction: In this case, if you
| use DSNUM integer, MODIFY RECOVERY does not delete any SYSCOPY or
| SYSLGRNX records that are newer than the oldest recoverable point at the table
| space level.
| The preceding guidelines pertain to all image copies, regardless of how they were
| created, including those copies that were created by COPY, COPYTOCOPY, LOAD,
| REORG or MERGECOPY.
You can restart a MODIFY RECOVERY utility job, but it starts from the beginning
again. For guidance in restarting online utilities, see “Restarting an online utility”
on page 39.
Table 54 shows the restrictive state that the utility sets on the target object.
Table 54. Claim classes of MODIFY RECOVERY operations
Target                       MODIFY RECOVERY
Table space or partition     UTRW
Legend:
UTRW - Utility restrictive state, read and write access allowed
MODIFY RECOVERY can run concurrently on the same target object with any
utility except the following utilities:
v COPY TABLESPACE
v LOAD
v MERGECOPY
v MODIFY RECOVERY
v RECOVER TABLESPACE
v REORG TABLESPACE
When you run MODIFY RECOVERY, the utility updates this range of used version
numbers for table spaces and for indexes that are defined with the COPY YES
attribute. MODIFY RECOVERY updates the OLDEST_VERSION column of the
appropriate catalog table or tables with the version number of the oldest version
that has not yet been applied to the entire object. DB2 can reuse any version
numbers that are not in the range that is set by the values in the
OLDEST_VERSION and CURRENT_VERSION columns.
Recycling of version numbers is required when all of the version numbers are
being used. All version numbers are being used when one of the following
situations is true:
v The value in the CURRENT_VERSION column is one less than the value in the
OLDEST_VERSION column.
v The value in the CURRENT_VERSION column is 255 for table spaces or 15 for
indexes, and the value in the OLDEST_VERSION column is 0 or 1.
To recycle version numbers for indexes that are defined with the COPY NO
attribute, run LOAD REPLACE, REBUILD INDEX, REORG INDEX, or REORG
TABLESPACE.
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
Example 2: Deleting SYSCOPY and SYSLGRNX records that are older than a
certain date. The following control statement specifies that MODIFY RECOVERY is
to delete all SYSCOPY and SYSLGRNX records that were written before 10
September 2002.
MODIFY RECOVERY TABLESPACE DSN8D91A.DSN8S91D DELETE DATE(20020910)
Figure 56. Example MODIFY RECOVERY statements that delete SYSCOPY records for
partitions
Example 4: Deleting all SYSCOPY records for objects in a list and viewing the
results. In the following example job, the LISTDEF utility control statements define
three lists (L1, L2, L3). The first group of REPORT utility control statements then
specify that the utility is to report recovery information for the objects in these
lists. Next, the MODIFY RECOVERY control statement specifies that the utility is
to delete all SYSCOPY records for the objects in the L1 list. Finally, the second
group of REPORT control statements specify that the utility is to report the
recovery information for the same three lists. In this second report, no information
will be reported for the objects in the L1 list because all of the SYSCOPY records
have been deleted.
Figure 57. Example MODIFY RECOVERY statement that deletes all SYSCOPY records
For more information about the LISTDEF utility control statements, see Chapter 15,
“LISTDEF,” on page 185. For more information about the REPORT utility control
statements, see Chapter 27, “REPORT,” on page 561.
| Example 5: Deleting SYSCOPY and SYSLGRNX records for clone objects. The
| following control statement specifies that MODIFY RECOVERY is to delete
| SYSCOPY records and SYSLGRNX records for only clone objects.
| MODIFY RECOVERY TABLESPACE DBKQBL01.TPKQBL01
| CLONE
| DELETE AGE(*)
| Restriction: MODIFY STATISTICS does not delete statistics history records for
| clone tables because statistics are not collected for these tables.
Output: MODIFY STATISTICS deletes rows from the following catalog tables:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
v SYSIBM.SYSINDEXES_HIST
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSINDEXSTATS_HIST
v SYSIBM.SYSLOBSTATS_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSTABSTATS_HIST
v SYSIBM.SYSTABLES_HIST
| v SYSIBM.SYSKEYTARGETS_HIST
| v SYSIBM.SYSKEYTGTDIST_HIST
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database to run MODIFY STATISTICS.
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority.
Syntax diagram
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. You cannot
repeat the LIST keyword or specify it with TABLESPACE, INDEXSPACE, or
INDEX.
The list can contain index spaces, table spaces, or both. MODIFY STATISTICS
is invoked once for each object in the list.
TABLESPACE database-name.table-space-name
Specifies the database and the table space for which catalog history records are
to be deleted.
database-name Specifies the name of the database to which the table space
belongs. database-name is optional. The default is DSNDB04.
table-space-name
Specifies the name of the table space for which statistics are to
be deleted.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space for which catalog history
information is to be deleted. The utility lists the name in the
SYSIBM.SYSINDEXES table.
database-name Optionally specifies the name of the database to which the
index space belongs. The default is DSNDB04.
index-space-name
Specifies the name of the index space for which the statistics
are to be deleted.
INDEX creator-id.index-name
Specifies the index for which catalog history information is to be deleted.
creator-id
Optionally specifies the creator of the index. The default is the user identifier for the utility job.
index-name
Specifies the name of the index for which the statistics are to be
deleted. Enclose the index name in quotation marks if the name
contains a blank.
DELETE
Indicates that records are to be deleted.
ALL Deletes all statistics history rows that are related to the specified object
from all catalog history tables.
Rows from the following history tables are deleted only when you
specify DELETE ALL:
v SYSTABLES_HIST
v SYSTABSTATS_HIST
v SYSINDEXES_HIST
v SYSINDEXSTATS_HIST
| v SYSKEYTARGETS_HIST
ACCESSPATH
Deletes all access-path statistics history rows that are related to the
specified object from the following history tables:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
| v SYSIBM.SYSKEYTGTDIST_HIST
SPACE
Deletes all space-tuning statistics history rows that are related to the
specified object from the following history tables:
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSLOBSTATS_HIST
AGE (integer)
Deletes all statistics history rows that are related to the specified object and
that are older than a specified number of days.
(integer)
Specifies the number of days in a range from 0 to 32 767. This option
cannot delete records that are created today (age 0).
(*) Deletes all records, regardless of their age.
DATE (integer)
Deletes all statistics history rows that were written before a specified date.
(integer)
Specifies the date in an eight-character format. Specify a year (yyyy), month
(mm), and day (dd) in the form yyyymmdd.
(*)
Deletes all records, regardless of the date on which they were written.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space or index space
Object for which records are to be deleted.
Be aware that when you manually insert, update, or delete catalog information,
DB2 does not store the historical information for those operations in the historical
catalog tables.
You can choose to delete only the statistics rows that relate to access path selection
by specifying the ACCESSPATH option. Alternatively, you can delete the rows that
relate to space statistics by using the SPACE option. To delete rows in all statistics
history catalog tables, including the SYSIBM.SYSTABLES_HIST catalog table, you
must specify the DELETE ALL option in the utility control statement.
To delete statistics from the RUNSTATS history tables, you can either use the
MODIFY STATISTICS utility or issue SQL DELETE statements. The MODIFY
STATISTICS utility simplifies the purging of old statistics without requiring you to
write the SQL DELETE statements. You can also delete rows that meet the age and
date criteria by specifying the corresponding keywords (AGE and DATE) for a
particular object.
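For example, a hedged sketch that deletes only access-path history rows older than 90 days for one of the sample table spaces; the object name and the 90-day threshold are illustrative, not taken from the examples that follow:
MODIFY STATISTICS TABLESPACE DSN8D91A.DSN8S91E
  DELETE ACCESSPATH AGE (90)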
To avoid timeouts when you delete historical statistics with MODIFY STATISTICS,
you should increase the LOCKMAX parameter for DSNDB06.SYSHIST with ALTER
TABLESPACE.
You can restart a MODIFY STATISTICS utility job, but it starts from the beginning
again. For guidance in restarting online utilities, see “Restarting an online utility”
on page 39.
Table 56 shows the restrictive state that the utility sets on the target object.
Table 56. Claim classes of MODIFY STATISTICS operations
Target                                MODIFY STATISTICS
Table space, index, or index space    UTRW
Legend:
UTRW - Utility restrictive state, read and write access allowed
Example 2: Deleting access path records for all objects in a list. The MODIFY
STATISTICS control statement in Figure 58 specifies that the utility is to delete
access-path statistics history rows that were created before 17 April 2000 for objects
in the specified list. The list, M1, is defined in the preceding LISTDEF control
statement and includes table spaces DB0E1501.TL0E1501 and
DSN8D81A.DSN8S81E. For more information about LISTDEF control statements,
see Chapter 15, “LISTDEF,” on page 185.
Figure 58. MODIFY STATISTICS control statement that specifies that access path history
records are to be deleted
Figure 59. MODIFY STATISTICS control statement that specifies that space-tuning statistics
records are to be deleted
Example 4: Deleting all statistics history records for an index space. The control
statement in Figure 60 specifies that MODIFY STATISTICS is to delete all statistics
history records for index space DBOE1501.IUOE1501. Note that the deleted records
are not limited by date because (*) is specified.
Figure 60. MODIFY STATISTICS control statement that specifies that all statistics history
records are to be deleted
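The statement in Figure 60 is not reproduced in this extract. Based on the description, it has roughly the following form; whether the original uses DATE(*) or AGE(*) is not visible here:
MODIFY STATISTICS INDEXSPACE DBOE1501.IUOE1501
  DELETE ALL DATE (*)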
See “Syntax and options of the OPTIONS control statement” for details.
Output: The OPTIONS control statement sets the specified processing options for
the duration of the job step, or until replaced by another OPTIONS control
statement within the same job step.
Syntax diagram
(Linearized form of the syntax diagram:)

OPTIONS { OFF |
          [PREVIEW] [LISTDEFDD ddname] [TEMPLATEDD ddname] [KEY key-value] [event-spec] }

event-spec:
  EVENT ( {ITEMERROR,HALT | ITEMERROR,SKIP} , {WARNING,RC0 | WARNING,RC4 | WARNING,RC8} )

ITEMERROR,HALT and WARNING,RC4 are the defaults.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
PREVIEW Specifies that the utility control statements that follow are to run in
PREVIEW mode. The utility checks for syntax errors in all utility
control statements, but normal utility execution does not take
place. If the syntax is valid, the utility expands all LISTDEF lists
and TEMPLATE DSNs that appear in SYSIN and prints results to
the SYSPRINT data set.
PREVIEW evaluates and expands all LISTDEF statements into an
actual list of table spaces or index spaces. It evaluates TEMPLATE
DSNs and uses variable substitution for actual data set names
when possible. It also expands lists from the SYSLISTD DD and
TEMPLATE DSNs from the SYSTEMPL DD that a utility invocation
references.
A definitive preview of TEMPLATE DSN values is not always
possible. Substitution values for some variables, such as &DATE.,
&TIME., &SEQ. and &PART., can change at execution time. In
some cases, PREVIEW generates approximate data set names. The
OPTIONS utility substitutes unknown character variables with the
character string “UNKNOWN” and unknown integer variables
with zeroes.
Instead of OPTIONS PREVIEW, you can use a JCL PARM to
activate preview processing. Although the two functions are
identical, use JCL PARM to preview an existing set of utility
control statements. Use the OPTIONS PREVIEW control statement
when you invoke DB2 utilities through a stored procedure.
The JCL PARM is specified as the third JCL PARM of DSNUTILB
and on the UTPROC variable of DSNUPROC, as shown in the
following JCL:
//STEP1 EXEC DSNUPROC,UID=’JULTU106.RECOVE1’,
// UTPROC=’PREVIEW’,SYSTEM=’SSTR’
Use WARNING to alter the return code for warning messages. You
can alter the return code from message DSNU010I with this option.
If you alter the message return code, message DSNU1024I is issued
to document the new return code.
Action choices are as follows:
RC0
Lowers the final return code of a single utility invocation that
ends in a return code 4 to a return code of 0. Use RC0 to force
a return code of 0 for warning messages.
Use this option only when return code 4 is expected, is
acceptable, and other mechanisms are in place to validate the
results of a utility execution.
RC4
Specifies that return codes for warning messages are to remain
unchanged. Use RC4 to override a previous OPTIONS
WARNING specification in the same job step.
RC8
Raises the final return code of a single utility invocation that
ends in a return code 4 to a return code of 8. Use RC8 to force
a return code of 8 for warning messages. The return code of 8
causes the job step to terminate and subsequent utility control
statements are not executed.
OFF Specifies that all default options are to be restored. OPTIONS OFF
does not override the PREVIEW JCL parameter, which, if specified,
remains in effect for the entire job step. You cannot specify any
other OPTIONS keywords with OPTIONS OFF.
OPTIONS OFF is equivalent to OPTIONS LISTDEFDD SYSLISTD
TEMPLATEDD SYSTEMPL EVENT (ITEMERROR, HALT,
WARNING, RC4).
KEY Specifies an option that you should use only when you are
instructed by IBM Software Support. OPTIONS KEY is followed by
a single operand that IBM Software Support provides when
needed.
You can restart an OPTIONS utility job, but it starts from the beginning again. If
you are restarting this utility as part of a larger job in which OPTIONS completed
successfully, but a later utility failed, do not change the OPTIONS utility control
statement, if possible. If you must change the OPTIONS utility control statement,
use caution; any changes can cause the restart processing to fail. For example, if
you specify a valid OPTIONS statement in the initial invocation, and then on
restart, specify OPTIONS PREVIEW, the job fails. For guidance in restarting online
utilities, see “Restarting an online utility” on page 39.
OPTIONS PREVIEW
TEMPLATE COPYLOC UNIT(SYSDA)
DSN(&DB..&TS..D&JDATE..&STEPNAME..COPY&IC.&LOCREM.&PB.)
DISP(NEW,CATLG,CATLG) SPACE(200,20) TRK
VOLUMES(SCR03)
TEMPLATE COPYREM UNIT(SYSDA)
DSN(&DB..&TS..&UT..T&TIME..COPY&IC.&LOCREM.&PB.)
DISP(NEW,CATLG,CATLG) SPACE(100,10) TRK
LISTDEF CPYLIST INCLUDE TABLESPACES DATABASE DBLT0701
COPY LIST CPYLIST FULL YES
COPYDDN(COPYLOC,COPYLOC)
RECOVERYDDN(COPYREM,COPYREM)
SHRLEVEL REFERENCE
Figure 61. Example OPTIONS statement for checking syntax and previewing lists and
templates.
The first OPTIONS statement specifies that the LISTDEF definition library is
identified by the V1LIST DD statement and the TEMPLATE definition library is
identified by the V1TEMPL DD statement. These definition libraries apply to the
subsequent COPY utility control statement. Therefore, if DB2 does not find the
PAYTBSP list in SYSIN, it searches the V1LIST library, and if DB2 does not find the
PAYTEMP1 template in SYSIN, it searches the V1TEMPL library.
The second OPTIONS statement is similar to the first, but it identifies different
libraries and applies to the second COPY control statement. This second COPY
control statement looks similar to the first COPY job. However, this statement
processes a different list and uses a different template. Whereas the first COPY job
uses the PAYTBSP list from the V1LIST library, the second COPY job uses the
PAYTBSP list from the V2LIST library. Also, the first COPY job uses the PAYTEMP1
template from the V1TEMPL library, and the second COPY job uses the PAYTEMP1
template from the V2TEMPL library.
OPTIONS LISTDEFDD V1LIST TEMPLATEDD V1TEMPL
COPY LIST PAYTBSP COPYDDN(PAYTEMP1,PAYTEMP1)
| Example 3: Forcing a return code 0. In the following example, the first OPTIONS
control statement forces a return code of 0 for the subsequent MODIFY
RECOVERY utility control statement. Ordinarily, this statement ends with a return
code of 4 because it specifies that DB2 is to delete all SYSCOPY and SYSLGRNX
records for table space A.B. The second OPTIONS control statement restores the
default options, so that no return codes will be overridden for the second MODIFY
RECOVERY control statement.
OPTIONS EVENT(WARNING,RC0)
MODIFY RECOVERY TABLESPACE A.B DELETE AGE(*)
OPTIONS OFF
MODIFY RECOVERY TABLESPACE C.D DELETE AGE(30)
Example 4: Checking syntax and skipping errors while processing list objects. In
Figure 62 on page 343, the first OPTIONS utility control statement specifies that the
subsequent utility control statements are to run in PREVIEW mode. In PREVIEW
mode, DB2 checks for syntax errors in all utility control statements, but normal
utility execution does not take place. If the syntax is valid, DB2 expands the three
lists (LIST1_LISTDEF, LIST2_LISTDEF, and LIST3_LISTDEF) and prints these
results to the SYSPRINT data set.
The second OPTIONS control statement specifies how DB2 is to handle return
codes of 8 in any subsequent utility statements that process a valid list. If
processing of a list item produces return code 8, DB2 skips that item, and
continues to process the rest of the items in the list, but DB2 does not process the
next utility control statement. Instead, the job ends with return code 8.
Figure 62. Example OPTIONS statements for checking syntax and skipping errors
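The full statements appear in Figure 62 and are not reproduced in this extract; stripped of the LISTDEF statements and utility statements that they govern, the two OPTIONS statements that the example describes reduce to the following sketch:
OPTIONS PREVIEW                  -- first statement: syntax check and list expansion only
OPTIONS EVENT(ITEMERROR,SKIP)    -- second statement: skip list items that end with return code 8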
Output: With the WRITE(YES) option, QUIESCE writes changed pages for the table
spaces and their indexes from the DB2 buffer pool to disk. The catalog table
SYSCOPY records the current RBA and the timestamp of the quiesce point. A row
with ICTYPE=’Q’ is inserted into SYSIBM.SYSCOPY for each table space that is
quiesced. DB2 also inserts a SYSCOPY row with ICTYPE=’Q’ for any indexes
(defined with the COPY YES attribute) over a table space that is being quiesced.
(Table spaces DSNDB06.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSUTILX are
an exception; their information is written to the log.)
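For example, after a statement such as the following (the sample table space name is assumed), you can confirm the recorded quiesce point by selecting the ICTYPE='Q' rows from SYSIBM.SYSCOPY:
QUIESCE TABLESPACE DSN8D91A.DSN8S91E WRITE YES

-- SQL check (run separately, for example through SPUFI):
SELECT DBNAME, TSNAME, ICTYPE, HEX(START_RBA) AS QUIESCE_RBA
  FROM SYSIBM.SYSCOPY
  WHERE ICTYPE = 'Q' AND DBNAME = 'DSN8D91A' AND TSNAME = 'DSN8S91E';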
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database. If the object on
which the utility operates is in an implicitly created database, DBADM authority
on the implicitly created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute QUIESCE, but only on
a table space in the DSNDB01 or DSNDB06 database.
You can specify DSNDB01.SYSUTILX, but you cannot include it in a list with other
table spaces to be quiesced. Recovering the catalog and directory table spaces to the
current point in time is preferred and recommended. However, if a point-in-time recovery of the
catalog/directory table spaces is desired, a separate quiesce of DSNDB06.SYSCOPY
is required after a quiesce of the other catalog/directory table spaces.
Syntax diagram
QUIESCE {TABLESPACE [database-name.]table-space-name [PART integer] ... |
         TABLESPACESET TABLESPACE [database-name.]table-space-name |
         LIST listdef-name}
        [CLONE] [WRITE {YES | NO}]          (WRITE YES is the default)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list that contains
only table spaces. The utility allows one LIST keyword for each QUIESCE
control statement. Do not specify LIST with the TABLESPACE or
TABLESPACESET keyword. QUIESCE is invoked once for the entire list.
For the QUIESCE utility, the related index spaces are considered to be list
items for the purposes of OPTIONS ITEMERROR processing. You can alter
the utility behavior during processing of related indexes with the
OPTIONS ITEMERROR statement. This utility will only process clone data
if the CLONE keyword is specified. The use of CLONED YES on the
LISTDEF statement is not sufficient. For more information about LISTDEF
specifications, see Chapter 15, “LISTDEF,” on page 185.
TABLESPACE database-name.table-space-name
For QUIESCE TABLESPACE, specifies the table space that is to be
quiesced.
3. Create JCL statements, by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 17. (For examples of JCL for
QUIESCE, see “Sample QUIESCE control statements” on page 352.)
4. Prepare a utility control statement that specifies the options for the tasks that
you want to perform, as described in “Instructions for specific tasks.”
5. Check the compatibility table in “Concurrency and compatibility for QUIESCE”
on page 351 if you want to run other jobs concurrently on the same target
objects.
6. Plan for restart if the QUIESCE job doesn’t complete, as described in
“Terminating or restarting QUIESCE” on page 351.
7. Run QUIESCE by using one of the methods that are described in Chapter 3,
“Invoking DB2 online utilities,” on page 17.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space
Object that is to be quiesced. (If you want to quiesce only one partition of
a table space, you must use the PART option in the control statement.)
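For example, a control statement along the following lines (the object name is
borrowed from the sample database that is used elsewhere in this book) quiesces
only the first partition:
QUIESCE TABLESPACE DSN8D91A.DSN8S91E PART 1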
If you use QUIESCE TABLESPACE instead and do not include every member, you
might encounter problems when you run RECOVER on the table spaces in the
structure. RECOVER checks if a complete table space set is recovered to a single
point in time. If the complete table space set is not recovered to a single point in
time, RECOVER places all dependent table spaces into CHECK-pending status.
You should QUIESCE and RECOVER the LOB table spaces to the same point in
time as the associated base table space. A group of table spaces that have a
referential relationship should all be quiesced to the same point in time.
When you use QUIESCE WRITE YES on a table space, the utility inserts a
SYSCOPY row that specifies ICTYPE=’Q’ for each related index that is defined
with COPY=YES in order to record the quiesce point.
v Each table space set is expanded into a list of table spaces that have a referential
relationship, into a list that contains a base table space with all of its LOB table
| spaces, or into a list that contains a base table space with all of its XML table
| spaces.
v If you specify a list of table spaces or table space sets to quiesce and duplicate a
table space, utility processing continues, and the table space is quiesced only
once. QUIESCE issues return code 4 and warning message DSNU533I to alert
you of the duplication.
v If you specify the same table space twice in a list, using PART n in one
specification, and PART m for the other specification, each partition is quiesced
once.
Figure 63. Termination messages when you run QUIESCE on a table space with pending
restrictions
When you run QUIESCE on a table space or index space that is in COPY-pending,
CHECK-pending, or RECOVER-pending status, you might also receive one or
more of the messages that are shown in Figure 64.
If any of the preceding conditions is true, QUIESCE terminates with a return code
of 4 and issues a DSNU473I warning message.
You can restart a QUIESCE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
| The WRITE option of QUIESCE specifies whether the changed pages from the table
| spaces and index spaces are to be written to disk. The default option, YES,
| establishes a quiesce point and writes the changed pages from the table spaces
| and index spaces to disk. The NO option establishes a quiesce point, but does
| not write the changed pages from the table spaces and index spaces to disk.
| QUIESCE is not performed on table spaces with the NOT LOGGED attribute.
Table 58 shows which claim classes QUIESCE drains and any restrictive state that
the utility sets on the target object.
Table 58. Claim classes of QUIESCE operations.
Target                                               WRITE YES      WRITE NO
Table space or partition                             DW/UTRO        DW/UTRO
Partitioning index, data-partitioned
  secondary index, or partition                      DW/UTRO
Nonpartitioned secondary index                       DW/UTRO
Legend:
v DW - Drain the write claim class - concurrent access for SQL readers
v UTRO - Utility restrictive state - read-only access allowed
Table 59 shows which utilities can run concurrently with QUIESCE on the same
target object. The target object can be a table space, an index space, or a partition
of a table space or index space. If compatibility depends on particular options of a
utility, that information is also documented in the table. QUIESCE does not set a
utility restrictive state if the target object is DSNDB01.SYSUTILX.
Table 59. Compatibility of QUIESCE with other utilities
Action                                                    Compatible with QUIESCE?
CHECK DATA DELETE NO Yes
CHECK DATA DELETE YES No
CHECK INDEX Yes
CHECK LOB Yes
COPY INDEXSPACE SHRLEVEL CHANGE No
COPY INDEXSPACE SHRLEVEL REFERENCE Yes
COPY TABLESPACE SHRLEVEL CHANGE No
COPY TABLESPACE SHRLEVEL REFERENCE Yes
QUIESCE on SYSUTILX is an exclusive job; such a job can interrupt another job
between job steps, possibly causing the interrupted job to time out.
Figure 65 on page 353 shows the output that the preceding command produces.
Figure 65. Example output from a QUIESCE job that establishes a quiesce point for three
table spaces
Figure 66 shows the output that the preceding command produces.
Figure 66. Example output from a QUIESCE job that establishes a quiesce point for a list of
objects
Example 3: Establishing a quiesce point for a table space set. The following control
statement specifies that QUIESCE is to establish a quiesce point for the indicated
table space set. In this example, the table space set includes table space
DSN8D91A.DSN8S91D and all table spaces that are referentially related to it. Run
REPORT TABLESPACESET to obtain a list of table spaces that are referentially
related. For more information about this option, see Chapter 27, “REPORT,” on
page 561.
QUIESCE TABLESPACESET TABLESPACE DSN8D91A.DSN8S91D
Figure 67 on page 354 shows the output that the preceding command produces.
Figure 67. Example output from a QUIESCE job that establishes a quiesce point for a table
space set
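Example 4 establishes a quiesce point without writing the changed pages to disk,
so its control statement is a QUIESCE with the WRITE NO option, along the lines
of the following sketch (the object name that is shown here is illustrative only):
QUIESCE TABLESPACE DSN8D91A.DSN8S91E WRITE NO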
The preceding command produces the output that is shown in Figure 68. Notice
that the COPY YES index EMPNOI is placed in informational COPY-pending
(ICOPY) status:
Figure 68. Example output from a QUIESCE job that establishes a quiesce point, without
writing the changed pages to disk.
| Example 5: Establishing a quiesce point for a list of objects. The following control
| statement specifies that the QUIESCE utility is to establish a quiesce point for the
| specified clone table space and its indexes, and write the changes to disk.
| QUIESCE TABLESPACE DBJM0901.TPJM0901 WRITE YES CLONE
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v STATS privilege for the database is required if the STATISTICS keyword is
specified.
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
To run REBUILD INDEX STATISTICS REPORT YES, you must use a privilege set
that includes the SELECT privilege on the catalog tables.
Syntax diagram
REBUILD {INDEX ( [creator-id.]index-name [PART integer] , ... ) |
         INDEX (ALL) table-space-spec |
         INDEX LIST listdef-name |
         INDEXSPACE ( [database-name.]index-space-name [PART integer] , ... ) |
         INDEXSPACE (ALL) table-space-spec}
        [SORTDEVT device-type] [SORTNUM integer] [stats-spec]
table-space-spec:
  TABLESPACE [database-name.]table-space-name [PART integer]
| change-spec:   (fragment not reproduced)
| drain-spec:    (fragment not reproduced)
stats-spec (partial):
  ... [HISTORY {ALL | ACCESSPATH | SPACE | NONE}] [FORCEROLLUP {YES | NO}]
  (HISTORY ALL and FORCEROLLUP YES are the defaults)
correlation-stats-spec:   (fragment not reproduced)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
INDEX creator-id.index-name
Indicates the qualified name of the index to be rebuilt. Use the form
creator-id.index-name to specify the name.
creator-id
Specifies the creator of the index. This qualifier is optional. If you omit the
qualifier creator-id, DB2 uses the user identifier for the utility job.
index-name
Specifies the qualified name of the index that is to be rebuilt. For an index,
you can specify either an index name or an index space name. Enclose the
index name in quotation marks if the name contains a blank.
To rebuild multiple indexes, separate each index name with a comma. All
listed indexes must reside in the same table space. If more than one index is
listed and the TABLESPACE keyword is not specified, DB2 locates the first
valid index name that is cited and determines the table space in which that
index resides. That table space is used as the target table space for all other
valid index names that are listed.
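For example, a control statement of the following form (the index names are
borrowed from the sample database) rebuilds two indexes that reside in the same
table space:
REBUILD INDEX (DSN8910.XEMP1, DSN8910.XEMP2)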
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space that is obtained from the
SYSIBM.SYSINDEXES table.
database-name
Specifies the name of the database that is associated with the index. This
qualifier is optional.
index-space-name
Specifies the name of the index space that is to be rebuilt. For an index, you
can specify either an index name or an index space name.
If you specify more than one index space, they must all be defined on the
same table space.
For an index, you can specify either an index name or an index space name.
(ALL)
Specifies that all indexes in the table space that is referred to by the
| TABLESPACE keyword are to be rebuilt. If you specify ALL, only indexes on
| the base table are included.
TABLESPACE database-name.table-space-name
Specifies the table space from which all indexes are to be rebuilt.
database-name
Identifies the database to which the table space belongs. The default is
DSNDB04.
table-space-name
Identifies the table space from which all indexes are to be rebuilt.
PART integer
Specifies the physical partition of a partitioning index or a data-partitioned
secondary index in a partitioned table that is to be rebuilt. When the target of
the REBUILD operation is a nonpartitioned secondary index, the utility
reconstructs logical partitions. If any of the following situations are true for a
nonpartitioned index, you cannot rebuild individual logical partitions:
v The index was created with DEFER YES.
v The index must be completely rebuilt (this situation is likely in a disaster
recovery scenario).
v The index is in page set REBUILD-pending (PSRBD) status.
For these cases, you must rebuild the entire index.
integer is the number of the partition and must be in the range from 1 to the
number of partitions that are defined for the table space. The maximum is
4096.
You cannot specify PART with the LIST keyword. Use LISTDEF PARTLEVEL
instead.
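For example, to rebuild index partitions through a list, you might define a
partition-level list and reference it, along the lines of the following sketch
(the list and object names are illustrative):
LISTDEF RBLIST INCLUDE INDEXSPACES TABLESPACE DSN8D91A.DSN8S91E PARTLEVEL
REBUILD INDEX LIST RBLIST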
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The utility
allows one LIST keyword for each REBUILD INDEX control statement. The list
must contain either all index spaces or all table spaces. For a table space list,
REBUILD is invoked once per table space. For an index space list, DB2 groups
indexes by their related table space and executes the rebuild once per table
space. This utility will only process clone data if the CLONE keyword is
specified. The use of CLONED YES on the LISTDEF statement is not sufficient.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 185.
| SHRLEVEL
| Indicates the type of access that is to be allowed for the index, table space, or
| partition that is to be checked during REBUILD INDEX processing.
| REFERENCE
| Specifies that applications can read from but cannot write to the table
| space or partition that REBUILD accesses. Applications cannot read or
| write from the index REBUILD is building. The default is REFERENCE.
| CHANGE
| Specifies that applications can read from and write to the table space or
| partition. The index is placed in RBDP and can be avoided by dynamic
| SQL. CHANGE is invalid for indexes over XML tables.
| Do not specify SHRLEVEL CHANGE for an index on a NOT LOGGED
| table space.
| Restriction:
| v SHRLEVEL CHANGE is not well suited for unique indexes and
| concurrent DML because the index is placed in RBDP while being built.
| Inserts and updates of the index will fail with a resource unavailable
| (-904) because uniqueness checking cannot be done while the index is in
| RBDP.
| v SHRLEVEL CHANGE is not allowed on not logged tables, XML indexes,
| or spatial indexes.
| MAXRO
| Specifies the maximum amount of time for the last iteration of log processing.
| During that iteration, applications have read-only access.
| The actual execution time of the last iteration might exceed the specified value
| for MAXRO.
| integer
| integer is the number of seconds. Specifying a small positive value reduces
| the length of the period of read-only access, but it might increase the
| elapsed time for REBUILD INDEX to complete. If you specify a huge
| positive value, the second iteration of log processing is probably the last
| iteration.
| The default is the value of the lock timeout system parameter IRLMRWT.
| LONGLOG
| Specifies the action that DB2 is to perform, after sending a message to the
| console, if the number of records that the next iteration of logging is to process
| is not sufficiently lower than the number that the previous iterations processed.
| This situation means that the reading of the log by the REBUILD INDEX utility
| is not being done at the same time as the writing of the application log.
| CONTINUE
| Specifies that until the time on the JOB statement expires, DB2 is to
| continue performing reorganization, including iterations of log processing,
| if the estimated time to perform an iteration exceeds the time that is
| specified for MAXRO.
NO
Indicates that the set of messages is not to be sent as output to SYSPRINT.
The default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT. The
generated messages are dependent on the combination of keywords (such
as TABLESPACE, INDEX, TABLE, and COLUMN) that you specify with
the RUNSTATS utility. However, these messages are not dependent on the
specification of the UPDATE option. REPORT YES always generates a
report of SPACE and ACCESSPATH statistics.
KEYCARD
Specifies that all of the distinct values in all of the 1 to n key column
combinations for the specified indexes are to be collected. n is the number of
columns in the index.
FREQVAL
Controls the collection of frequent-value statistics. If you specify FREQVAL, it
must be followed by two additional keywords:
NUMCOLS
Indicates the number of key columns that are to be concatenated when
collecting frequent values from the specified index. If you specify 3, the
utility collects frequent values on the concatenation of the first three key
columns. The default is 1, which means that DB2 is to collect frequent
values only on the first key column of the index.
COUNT
Indicates the number of frequent values that are to be collected. If you
specify 15, the utility collects 15 frequent values from the specified key
columns. The default is 10.
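For example, a control statement along these lines (the index name is
illustrative) collects inline statistics, including frequent values on the
concatenation of the first two key columns:
REBUILD INDEX (DSN8910.XEMP1)
  STATISTICS KEYCARD FREQVAL NUMCOLS 2 COUNT 10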
UPDATE
Indicates whether the collected statistics are to be inserted into the catalog
tables. UPDATE also allows you to select statistics that are used for access path
selection or statistics that are used by database administrators.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that the only catalog table columns that are to be updated are
those that provide statistics that are used for access path selection.
SPACE
Indicates that the only catalog table columns that are to be updated are
those that provide statistics to help the database administrator assess
the status of a particular table space or index.
NONE
Indicates that catalog tables are not to be updated with the collected
statistics. This option is valid only when REPORT YES is specified.
HISTORY
Records all catalog table inserts or updates to the catalog history tables.
The default is supplied by the value that is specified in STATISTICS HISTORY
on panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that the only catalog history table columns that are to be
updated are those that provide statistics that are used for access path
selection.
SPACE
Indicates that only space-related catalog statistics are to be updated in
catalog history tables.
NONE
Indicates that catalog history tables are not to be updated with the
collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when you
execute RUNSTATS even if some indexes or index partitions are empty. This
keyword enables the optimizer to select the best access path.
The following options are available for the FORCEROLLUP keyword:
YES Indicates that forced aggregation or rollup processing is to be done,
even though some indexes or index partitions might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all indexes or index partitions.
If data is not available, the utility issues message DSNU623I if you have set the
installation value for STATISTICS ROLLUP on panel DSNTIPO to NO.
If you recover a table space to a prior point in time and do not recover all the
indexes to the same point in time, you must rebuild all of the indexes.
Some logging might occur if both of the following conditions are true:
v The index is a nonpartitioning index.
v The index is being concurrently accessed either by SQL on a different partition
of the same table space or by a utility that is run on a different partition of the
same table space.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
data set. Otherwise, DFSORT dynamically allocates the temporary data set.
| 3. It is recommended that you use dynamic allocation by specifying SORTDEVT in the
| utility statement because dynamic allocation reduces the maintenance required of the
| utility job JCL.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space
Object whose indexes are to be rebuilt.
Calculating the size of the work data sets: To calculate the approximate size (in
bytes) of the SORTWKnn data set, use the following formula:
Using two or three large SORTWKnn data sets is preferable to using several small ones.
Calculating the size of the sort work data sets: To calculate the approximate size
(in bytes) of the ST01WKnn data set, use the following formula:
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
| For rebuilding an index or a partition of an index, the SHRLEVEL option lets you
| choose the data access level that you have during the rebuild:
| Operator actions: LONGLOG specifies the action that DB2 is to perform if log
| processing is not occurring quickly enough. See “Option descriptions” on page 423
| for a description of the LONGLOG options. If the operator does not respond to the
| console message DSNU377I, the LONGLOG option automatically goes into effect.
| You can take one of the following actions:
| v Execute the TERM UTILITY command to terminate the rebuild process.
| DB2 does not take the action specified in the LONGLOG phrase if any one of these
| events occurs before the delay expires:
When you run the REBUILD INDEX utility concurrently on separate partitions of a
partitioned index (either partitioning or secondary), the sum of the processor time
is approximately the time for a single REBUILD INDEX job to run against the
entire index. For partitioning indexes, the elapsed time for running concurrent
REBUILD INDEX jobs is a fraction of the elapsed time for running a single
REBUILD INDEX job against an entire index.
| By specifying a short delay time (less than the system timeout value, IRLMRWT),
| you can reduce the impact on applications by reducing time-outs. You can use the
| RETRY option to give the online REBUILD INDEX utility chances to complete
| successfully. If you do not want to use RETRY processing, you can still use
| DRAIN_WAIT to set a specific and more consistent limit on the length of drains.
| RETRY allows an online REBUILD that is unable to drain the objects that it
| requires to try again after a set period (RETRY_DELAY). Objects will remain in
| their original state if the drain fails in the LOG phase.
| Because application SQL statements can queue behind any unsuccessful drain that
| the online REBUILD has tried, define a reasonable delay before you retry to allow
| this work to complete; the default is lock timeout subsystem parameter IRLMRWT.
| When the default DRAIN WRITERS is used with SHRLEVEL CHANGE and
| RETRY, multiple read-only log iterations can occur. Because online REBUILD can
| have to do more work when RETRY is specified, multiple or extended periods of
| restricted access might occur. Applications that run with REBUILD must perform
| frequent commits. During the interval between retries, the utility is still active;
| consequently, other utility activity against the table space and indexes is restricted.
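A sketch of a control statement that uses these drain and retry options follows;
the values that are shown are illustrative, not recommendations:
REBUILD INDEX (DSN8910.XEMP1)
  SHRLEVEL CHANGE DRAIN_WAIT 20 RETRY 6 RETRY_DELAY 30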
| Building indexes in parallel: Parallel index build reduces the elapsed time for a
REBUILD INDEX job by sorting the index keys and rebuilding multiple indexes or
index partitions in parallel, rather than sequentially. Optimally, a pair of subtasks
processes each index; one subtask sorts extracted keys, while the other subtask
builds the index. REBUILD INDEX begins building each index as soon as the
corresponding sort generates its first sorted record. If you specify STATISTICS, a
third subtask collects the sorted keys and updates the catalog table in parallel.
The subtasks that are used for the parallel REBUILD INDEX processing use DB2
connections. If you receive message DSNU397I that indicates that the REBUILD
INDEX utility is constrained, increase the number of concurrent connections by
using the MAX BATCH CONNECT parameter on panel DSNTIPE.
Figure 69 on page 369 shows the flow of a REBUILD INDEX job with a parallel
index build. The same flow applies whether you rebuild a data-partitioned
secondary index or a partitioning index. DB2 starts multiple subtasks to unload the
entire partitioned table space. Subtasks then sort index keys and build the
partitioning index in parallel. If you specify STATISTICS, additional subtasks
collect the sorted keys and update the catalog table in parallel, eliminating the
need for a second scan of the index by a separate RUNSTATS job.
Figure 69. How a partitioning index is rebuilt during a parallel index build
Figure 70 shows the flow of a REBUILD INDEX job with a parallel index build.
DB2 starts multiple subtasks to unload all partitions of a partitioned table space
and to sort index keys in parallel. The keys are then merged and passed to the
build subtask, which builds the nonpartitioned secondary index. If you specify
STATISTICS, a separate subtask collects the sorted keys and updates the catalog
table.
Figure 70. How a nonpartitioned secondary index is rebuilt during a parallel index build
Sort work data sets for parallel index build: You can either allow the utility to
dynamically allocate the data sets that SORT needs, or provide the necessary data
sets yourself. Select one of the following methods to allocate sort work data sets
and message data sets:
Method 1: REBUILD INDEX determines the optimal number of sort work data sets
and message data sets.
1. Specify the SORTDEVT keyword in the utility statement.
2. Allow dynamic allocation of sort work data sets by not supplying SORTWKnn
DD statements in the REBUILD INDEX utility JCL.
3. Allocate UTPRINT to SYSOUT.
Method 2: You control allocation of sort work data sets, and REBUILD INDEX
allocates message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: You have the most control over rebuild processing; you must specify
both sort work data sets and message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
Data sets that are used: If you select Method 2 or 3, define the necessary data
sets by using the information provided here and in the following topics:
v “Determining the number of sort subtasks” on page 371
v “Allocation of sort subtasks” on page 371
v “Estimating the sort work file size” on page 371
Each sort subtask must have its own group of sort work data sets and its own
print message data set. In addition, you need to allocate the merge message data
set when you build a single nonpartitioned secondary index on a partitioned table
space.
Possible reasons to allocate data sets in the utility job JCL rather than using
dynamic allocation are to:
v Control the size and placement of the data sets
v Minimize device contention
v Optimally utilize free disk space
v Limit the number of utility subtasks that are used to build indexes
The DD names SWnnWKmm define the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more
data sets that are to be used by that subtask pair. For example:
SW01WK01 Is the first sort work data set that is used by the subtask that
builds the first index.
SW01WK02 Is the second sort work data set that is used by the subtask that
builds the first index.
SW02WK01 Is the first sort work data set that is used by the subtask that
builds the second index.
SW02WK02 Is the second sort work data set that is used by the subtask that
builds the second index.
The DD names UTPRINnn define the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
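For example, a job that provides its own sort work data sets and message data
sets for two subtask pairs might include DD statements like the following sketch;
the unit and space values are illustrative only:
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//UTPRIN01 DD SYSOUT=*
//UTPRIN02 DD SYSOUT=*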
If you allocate the UTPRINT DD statement to SYSOUT in the job statement, the
sort message data sets and the merge message data set, if required, are
dynamically allocated. If you want the sort message data sets, merge message data
sets, or both, allocated to a disk or tape data set rather than to SYSOUT, you must
supply the UTPRINnn or the UTMERG01 DD statements (or both) in the utility
JCL. If you do not allocate the UTPRINT DD statement to SYSOUT, and you do
not supply a UTMERG01 DD statement in the job statement, partitions are not
unloaded in parallel.
Allocation of sort subtasks: REBUILD INDEX attempts to assign one sort subtask
for each index that is to be built. If REBUILD INDEX cannot start enough subtasks
to build one index per subtask, it allocates any excess indexes across the pairs (in
the order that the indexes were created), so that one or more subtasks might build
more than one index.
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys that are present in all of the indexes or
index partitions that are being processed by the subtask in order to calculate each
sort work file size. When you determine which indexes or index partitions are
assigned to which subtask pairs, use the formula listed in “Data sets that REBUILD
INDEX uses” on page 364 to calculate the required space.
Overriding dynamic DFSORT allocation: DB2 estimates how many rows are to
be sorted and passes this information to DFSORT on the parameter FILSZ.
DFSORT then dynamically allocates the necessary sort work space.
If the table space contains rows with VARCHAR columns, DB2 might not be able
to accurately estimate the number of rows. If the estimated number of rows is too
high and the sort work space is not available or if the estimated number of rows is
too low, DFSORT might fail and cause an abend. Important: Run RUNSTATS
UPDATE SPACE before the REBUILD INDEX utility so that DB2 calculates a more
accurate estimate.
You can override this dynamic allocation of sort work space in two ways:
v Allocate the sort work data sets with SORTWKnn DD statements in your JCL.
v Override the DB2 row estimate in FILSZ using control statements that are
passed to DFSORT. However, using control statements overrides size estimates
that are passed to DFSORT in all invocations of DFSORT in the job step,
including any sorts that are done in any other utility that is executed in the
same step. The result might be reduced sort efficiency or an abend due to an
out-of-space condition.
You can reset the REBUILD-pending status for an index with any of these
operations:
v REBUILD INDEX
v REORG TABLESPACE SORTDATA
v REPAIR SET INDEX with NORBDPEND
v START DATABASE command with ACCESS FORCE
Important: Use the START DATABASE command with ACCESS FORCE only as a
means of last resort.
You must either make these table spaces available, or run the RECOVER
TABLESPACE utility on the catalog or directory, using an authorization ID with the
installation SYSADM or installation SYSOPR authority.
Recommendation: Make a full image copy of the index to create a recovery point;
this action also resets the ICOPY status.
If you restart a job that uses the STATISTICS keyword, inline statistics collection
does not occur. To update catalog statistics, run the RUNSTATS utility after the
restarted REBUILD INDEX job completes.
For more guidance about restarting online utilities, see “Restarting an online
utility” on page 39.
Table 61 shows which claim classes REBUILD INDEX drains and any restrictive
state that the utility sets on the target object.
| Table 61. Claim classes of REBUILD INDEX operations
|                                            REBUILD INDEX   REBUILD INDEX PART   REBUILD INDEX
|                                            SHRLEVEL        SHRLEVEL             SHRLEVEL
| Target                                     REFERENCE       REFERENCE            CHANGE
| Table space or partition                   DW/UTRO         DW/UTRO              CR/UTRW
| Partitioning index, data-partitioned
|   secondary index, or physical
|   partition (note 1)                       DA/UTUT         DA/UTUT              CR/UTRW
| Nonpartitioned secondary index (note 2)    DA/UTUT         DR                   CR/UTRW
| Logical partition of an index (note 3)     N/A             DA/UTUT              CR/UTRW
Table 62 shows which utilities can run concurrently with REBUILD INDEX on the
same target object. The target object can be an index space or a partition of an
index space. If compatibility depends on particular options of a utility, that
information is also shown. REBUILD INDEX does not set a utility restrictive state
if the target object is DSNDB01.SYSUTILX.
Table 62. Compatibility of REBUILD INDEX with other utilities
Action                                                    Compatible with REBUILD INDEX?
CHECK DATA No
CHECK INDEX No
CHECK LOB Yes
COPY INDEX No
COPY TABLESPACE SHRLEVEL CHANGE No
COPY TABLESPACE SHRLEVEL REFERENCE Yes
DIAGNOSE Yes
LOAD No
MERGECOPY Yes
MODIFY Yes
QUIESCE No
REBUILD INDEX No
RECOVER INDEX No
RECOVER TABLESPACE No
REORG INDEX No
REORG TABLESPACE UNLOAD CONTINUE or PAUSE No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL with cluster index        No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL without cluster index     Yes
When you run REBUILD INDEX, the utility updates this range of used version
numbers for indexes that are defined with the COPY NO attribute. REBUILD
INDEX sets the OLDEST_VERSION column to the current version number, which
indicates that only one version is active; DB2 can then reuse all of the other
version numbers.
Recycling of version numbers is required when all of the version numbers are
being used. All version numbers are being used when one of the following
situations is true:
v The value in the CURRENT_VERSION column is one less than the value in the
OLDEST_VERSION column
v The value in the CURRENT_VERSION column is 15, and the value in the
OLDEST_VERSION column is 0 or 1.
You can also run LOAD REPLACE, REORG INDEX, or REORG TABLESPACE to
recycle version numbers for indexes that are defined with the COPY NO attribute.
To recycle version numbers for indexes that are defined with the COPY YES
attribute or for table spaces, run MODIFY RECOVERY.
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
If sufficient virtual storage resources are available, DB2 starts one pair of utility
sort subtasks for each partition. This example does not require UTPRINnn DD
statements because it uses DSNUPROC to invoke utility processing. DSNUPROC
includes a DD statement that allocates UTPRINT to SYSOUT.
//SAMPJOB JOB ...
//STEP1 EXEC DSNUPROC,UID=’SAMPJOB.RBINDEX’,UTPROC=’’,SYSTEM=’DSN’
//SYSIN DD *
REBUILD INDEX (DSN8910.XEMP1 PART 2, DSN8910.XEMP1 PART 3)
SORTDEVT SYSWK
SORTNUM 4
/*
Example 5: Rebuilding all indexes of a table space. The following control statement
specifies that REBUILD INDEX is to rebuild all indexes for table space
DSN8D91A.DSN8S91E. The SORTDEVT and SORTNUM keywords indicate that
the utility is to use dynamic data set and message data set allocation. Parallelism is
used by default.
If sufficient virtual storage resources are available, DB2 starts one utility sort
subtask to build the partitioning index and another utility sort subtask to build the
nonpartitioning index. This example does not require UTPRINnn DD statements
because it uses DSNUPROC to invoke utility processing. DSNUPROC includes a
DD statement that allocates UTPRINT to SYSOUT.
//SAMPJOB JOB ...
//STEP1 EXEC DSNUPROC,UID=’SAMPJOB.RCVINDEX’,UTPROC=’’,SYSTEM=’DSN’
//SYSIN DD *
REBUILD INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E
SORTDEVT SYSWK
SORTNUM 4
/*
Example 6: Rebuilding indexes only if they are in a restrictive state and gathering
inline statistics. The control statement in Figure 72 on page 378 specifies that
REBUILD INDEX is to rebuild partition 9 of index ID0S482D if it is in
REBUILD-pending (RBDP), RECOVER-pending (RECP), or advisory
REORG-pending (AREO*) state. This condition that the index be in a certain
restrictive state is indicated by the SCOPE PENDING option. The STATISTICS
FORCEROLLUP YES option indicates that the utility is to collect inline statistics on
the index partition that it is rebuilding and to force aggregation of those statistics.
| Example 7: Rebuilding indexes that are on clone tables. The following control
| statement specifies that REBUILD INDEX is to reconstruct only the specified
| indexes that are on clone tables.
| REBUILD INDEX (ADMF001.IUKQAI01)
| CLONE
The largest unit of data recovery is the table space or index space; the smallest is
the page. You can recover a single object or a list of objects. The RECOVER utility
recovers an entire table space, index space, a partition or data set, pages within an
error range, or a single page. You can recover data from image copies of an object
or from a system-level backup, and from log records that contain changes to the
object. Point-in-time recovery with consistency automatically detects the
transactions that are uncommitted at the recovery point in time and rolls back
their changes on the recovered objects, so that after recovery the objects are left
in a transactionally consistent state.
Output: Output from RECOVER consists of recovered data (a table space, index,
partition or data set, error range, or page within a table space).
| If you use the RECOVER utility to recover to a point in time an object that is part
| of a referentially related table space set, a base table space and LOB table space
| set, or a base table space and XML table space set, you must ensure that you
| recover the entire set of table spaces. If you do not include every member of the
set, or if you do not recover the entire set to the same point in time, RECOVER sets
the CHECK-pending status for all dependent table spaces, base table spaces, or
LOB table spaces in the set.
| Recommendations:
| v If you use the RECOVER utility to recover data to an image copy by specifying
| TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY, specify a copy that was made
| with the SHRLEVEL REFERENCE option.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute RECOVER, but only on
a table space in the DSNDB01 or DSNDB06 database.
Syntax diagram
RECOVER {object [DSNUM {ALL | integer}] (1) | LIST listdef-name}
        [list-options-spec | recover-options-spec]
        [LOCALSITE | RECOVERYSITE]
        [LOGRANGES {YES | NO}]          (LOGRANGES YES is the default; see note 2)
Notes:
1 Not valid for nonpartitioning indexes.
2 Use the LOGRANGES NO option only at the direction of IBM Software Support. This option can
  cause the LOGAPPLY phase to run much longer and, in some cases, apply log records that
  should not be applied.
object:
  TABLESPACE [database-name.]table-space-name
  INDEXSPACE [database-name.]index-space-name
  INDEX [creator-id.]index-name
list-options-spec:
  [non-LOGONLY-options-spec]
  [TORBA X'byte-string' | TOLOGPOINT X'byte-string'] [LOGONLY]
non-LOGONLY-options-spec:
  [REUSE] [CURRENTCOPYONLY]
  [PARALLEL [(num-objects)] [TAPEUNITS (num-tape-units)]]
  [RESTOREBEFORE X'byte-string'] [FROMDUMP [DUMPCLASS (dcl)]]
recover-options-spec:
  TOCOPY data-set [TOVOLUME {CATALOG | vol-ser} | TOSEQNO integer] [REUSE] [CURRENTCOPYONLY]
  TOLASTCOPY [REUSE] [CURRENTCOPYONLY]
  TOLASTFULLCOPY [REUSE] [CURRENTCOPYONLY]
  ERROR RANGE
Option descriptions
You can specify a list of objects by repeating the TABLESPACE, INDEX, or
INDEXSPACE keywords. If you use a list of objects, the valid keywords are:
DSNUM, TORBA, TOLOGPOINT, LOGONLY, PARALLEL, and either LOCALSITE
or RECOVERYSITE.
DSNUM
Identifies a partition within a partitioned table space or a partitioned index, or
identifies a data set within a nonpartitioned table space that is to be recovered.
You cannot specify a single data set of a nonpartitioned index or a logical
partition of a nonpartitioned index. Alternatively, the option can recover the
entire table space or index space.
ALL
Specifies that the entire table space or index space is to be recovered. The
default is ALL.
integer
Specifies the number of the partition or data set that is to be recovered.
The maximum value is 4096.
Specifying DSNUM is not valid for nonpartitioning indexes.
For a partitioned table space or index space: The integer is its partition
number.
For a nonpartitioned table space: Find the integer at the end of the data set
name. The data set name has the following format:
| catname.DSNDBx.dbname.tsname.y000z.Annn
where:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
tsname Is the table space name.
y Is I or J.
| z Is 1 or 2.
nnn Is the data set integer.
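For example, for a nonpartitioned table space whose fourth data set has the
following name (the catalog alias DSNCAT is illustrative), you would specify
DSNUM 4:
DSNCAT.DSNDBD.DSN8D91A.DSN8S91D.I0001.A004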
PAGE page-number
Specifies a particular page that is to be recovered. You cannot specify this
option if you are recovering from a concurrent copy.
page-number is the number of the page, in either decimal or hexadecimal
notation. For example, both 999 and X'3E7' represent the same page. PAGE is
invalid with the LIST specification.
CONTINUE
Specifies that the recovery process is to continue. Use this option only if an
error causes RECOVER to terminate during reconstruction of a page. In
this case, the page is marked as “broken”. After you repair the page, you
can use the CONTINUE option to recover the page, starting from the point
of failure in the recovery log.
TORBA X'byte-string'
Specifies, in a non-data-sharing environment, a point on the log to which
RECOVER is to recover. Specify an RBA value.
In a data sharing environment, use TORBA only when you want to recover to
a point before the originating member joined the data sharing group. If you
specify an RBA after this point, the recovery fails.
| For a NOT LOGGED table space, the value must be a recoverable point.
Using TORBA terminates the recovery process with the last log record whose
relative byte address (RBA) is not greater than byte-string, which is a string of
up to 12 hexadecimal characters. If byte-string is the RBA of the first byte of a
log record, that record is included in the recovery.
| Uncommitted work by units of recovery that are active at the specified RBA
| will be backed out by RECOVER, leaving each object in a consistent state.
TOLOGPOINT X'byte-string'
Specifies a point on the log to which RECOVER is to recover. Specify either an
RBA or an LRSN value.
The LRSN is a string of 12 hexadecimal characters and is reported by the
DSN1LOGP utility.
| For a NOT LOGGED table space, the value must be a recoverable point.
| Uncommitted work by units of recovery that are active at the specified LRSN
| or RBA will be backed out by RECOVER, leaving each object in a consistent
| state.
REUSE
Specifies that RECOVER is to logically reset and reuse DB2-managed data sets
without deleting and redefining them. If you do not specify REUSE, DB2
deletes and redefines DB2-managed data sets to reset them.
If you are recovering an object because of a media failure, do not specify
REUSE.
If a data set has multiple extents, the extents are not released if you use the
REUSE parameter.
CURRENTCOPYONLY
Specifies that RECOVER is to improve the performance of restoring concurrent
copies (copies that were made by the COPY utility with the CONCURRENT
option) by using only the most recent primary copy for each object in the list.
| CURRENTCOPYONLY now also applies to image copies taken without the
| CONCURRENT option. This means that only the most recent image copy (to
| the recovery point) will be used by recover. If the most recent image copy
| cannot be allocated, opened, or read, then the recovery for the object will not
| proceed.
When you specify CURRENTCOPYONLY for a concurrent copy, RECOVER
builds a DFSMSdss RESTORE command for each group of objects that is
associated with a concurrent copy data set name. If the RESTORE fails,
RECOVER does not automatically use the next most recent copy or the backup
copy, and the object fails. If you specify DSNUM ALL with
CURRENTCOPYONLY and one partition fails during the restore process, the
entire utility job on that object fails.
If you specify CURRENTCOPYONLY and the most recent primary copy of the
object to be recovered is not a concurrent copy, DB2 ignores this keyword.
| For objects in the recovery list whose recovery base is a system-level backup,
| the default is CURRENTCOPYONLY.
PARALLEL
Specifies the maximum number of objects in the list that are to be restored in
parallel from image copies on disk or tape. RECOVER attempts to retain tape
mounts for tapes that contain stacked image copies when the PARALLEL
keyword is specified. In addition, to maximize performance, RECOVER
| The FROMDUMP and DUMPCLASS options that you specify for the
| RECOVER utility override the RESTORE/RECOVER FROM DUMP and
| DUMPCLASS NAME install options that you specify on installation panel
| DSNTIP6.
| RESTOREBEFORE X'byte-string'
| Specifies that RECOVER is to search for an image copy, concurrent copy, or
| system-level backup (if yes has been specified for SYSTEM-LEVEL BACKUPS
| on install panel DSNTIP6) with an RBA or LRSN value earlier than the
| specified X'byte-string' value to use in the RESTORE phase. To avoid specific
| image copies, concurrent copies, or system-level backups with matching or
| more recent RBA or LRSN values in START_RBA, the RECOVER utility applies
| the log records and restores the object to its current state or the specified
| TORBA or TOLOGPOINT value. The RESTOREBEFORE value is compared
| with the RBA or LRSN value in the START_RBA column in the
| SYSIBM.SYSCOPY record for those copies. For system-level backups, the
| RESTOREBEFORE value is compared with the data complete LRSN.
| If you specify a TORBA or TOLOGPOINT value with the RESTOREBEFORE
| option, the RBA or LRSN value for RESTOREBEFORE must be lower than the
| specified TORBA OR TOLOGPOINT value. If you specify RESTOREBEFORE,
| you cannot specify TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY.
LOGONLY
Specifies that the target objects are to be recovered from their existing data sets
by applying only log records to the data sets. DB2 applies all log records that
were written after a point that is recorded in the data set itself.
To recover an index space by using RECOVER LOGONLY, you must define the
index space with the COPY YES attribute.
Use the LOGONLY option when the data sets of the target objects have already
been restored to a point of consistency by another process offline, such as
DFSMSdss concurrent copy.
| LOGONLY is not allowed on a table space or index space with the NOT
| LOGGED attribute.
TOCOPY data-set
Specifies the particular image copy data set that DB2 is to use as a source for
recovery.
data-set is the name of the data set.
If the data set is a full image copy, it is the only data set that is used in the
recovery. If it is an incremental image copy, RECOVER also uses the previous
full image copy and any intervening incremental image copies.
If you specify the data set as the local backup copy, DB2 first tries to allocate
the local primary copy. If the local primary copy is unavailable, DB2 uses the
local backup copy.
If you use TOCOPY or TORBA to recover a single data set of a nonpartitioned
table space, DB2 issues message DSNU520I to warn that the table space can
become inconsistent following the RECOVER job. This point-in-time recovery
can cause compressed data to exist without a dictionary or can even overwrite
the data set that contains the current dictionary.
If you use TOCOPY with a particular partition or data set (identified with
DSNUM), the image copy must be for the same partition or data set, or for the
whole table space or index space. If you use TOCOPY with DSNUM ALL, the
image copy must be for DSNUM ALL. You cannot specify TOCOPY with a
LIST specification.
If the image copy data set is a z/OS generation data set, supply a fully
qualified data set name, including the absolute generation and version number.
If the image copy data set is not a generation data set and more than one
image copy data set with the same data set name exists, use one of the
following options to identify the data set exactly:
TOVOLUME
Identifies the image copy data set.
CATALOG
Indicates that the data set is cataloged. Use this option only for an image
copy that was created as a cataloged data set. (Its volume serial is not
recorded in SYSIBM.SYSCOPY.)
RECOVER refers to the SYSIBM.SYSCOPY catalog table during execution.
If you use TOVOLUME CATALOG, the data set must be cataloged. If you
remove the data set from the catalog after creating it, you must catalog the
data set again to make it consistent with the record for this copy that
appears in SYSIBM.SYSCOPY.
vol-ser
Identifies the data set by an alphanumeric volume serial identifier of its
first volume. Use this option only for an image copy that was created as a
noncataloged data set. Specify the first vol-ser in the SYSCOPY record to
locate a data set that is stored on multiple tape volumes.
TOSEQNO integer
Identifies the image copy data set by its file sequence number. integer is
the file sequence number.
TOLASTCOPY
Specifies that RECOVER is to restore the object to the last image copy that was
taken. If the last image copy is a full image copy, it is restored to the object. If
the last image copy is an incremental image copy, the most recent full copy
along with any incremental copies are restored to the object.
TOLASTFULLCOPY
Specifies that the RECOVER utility is to restore the object to the last full image
copy that was taken. Any incremental image copies that were taken after the
full image copy are not restored to the object.
ERROR RANGE
Specifies that all pages within the range of reported I/O errors are to be
recovered. Recovering an error range is useful when the range is small, relative
to the object that contains it; otherwise, recovering the entire object is
preferred. You cannot specify this option if you are recovering from a
concurrent copy.
In some situations, recovery using the ERROR RANGE option is not possible,
such as when a sufficient quantity of alternate tracks cannot be obtained for all
bad records within the error range. You can use the IBM Device Support
Facility, ICKDSF service utility to determine whether this situation exists. In
such a situation, redefine the error data set at a different location on the
volume or on a different volume, and then run the RECOVER utility without
the ERROR RANGE option.
You cannot specify ERROR RANGE with a LIST specification.
For additional information about the use of this keyword, see Part 4 of DB2
Administration Guide.
| CLONE
| Indicates that RECOVER is to recover only clone table data in the specified
| table spaces, index spaces or indexes that contain indexes on clone tables. This
| utility will only process clone data if the CLONE keyword is specified. The use
| of CLONED YES on the LISTDEF statement is not sufficient.
LOCALSITE
Specifies that RECOVER is to use image copies from the local site. If you
specify neither LOCALSITE or RECOVERYSITE, RECOVER uses image copies
from the current site of invocation. (The current site is identified on the
installation panel DSNTIPO under SITE TYPE and in the macro DSN6SPRM
under SITETYP.)
RECOVERYSITE
Specifies that RECOVER is to use image copies from the recovery site. If you
specify neither LOCALSITE or RECOVERYSITE, RECOVER uses image copies
from the current site of invocation. (The current site is identified on the
installation panel DSNTIPO under SITE TYPE and in the macro DSN6SPRM
under SITETYP.)
LOGRANGES YES
Specifies that RECOVER should use SYSLGRNX information for the
LOGAPPLY phase. This option is the default.
LOGRANGES NO
Specifies that RECOVER should not use SYSLGRNX information for the
LOGAPPLY phase. Use this option only under the direction of IBM Software
Support.
This option can cause RECOVER to run much longer. In a data sharing
environment this option can result in the merging of all logs from all members
that were created since the last image copy.
This option can also cause RECOVER to apply logs that should not be applied.
For example, assume that you take an image copy of a table space and then
run REORG LOG YES on the same table space. Assume also that the REORG
utility abends and you then issue the TERM UTILITY command for the
REORG job. The SYSLGRNX records that are associated with the REORG job
are deleted, so a RECOVER job with the LOGRANGES YES option (the
default) skips the log records from the REORG job. However, if you run
RECOVER LOGRANGES NO, the utility applies these log records.
Recovering data and indexes: You do not always need to recover both the data and
indexes. If you recover the table space or index space to a current RBA or LRSN,
| any referentially related objects do not need to be recovered. If you plan to recover
| a damaged object to a point in time, use a consistent point in time for all of its
| referentially related objects, including related LOB and XML table spaces, for
| optimal performance. You must rebuild the indexes from the data if one of the
following conditions is true:
v The table space is recovered to a point in time.
v An index is damaged.
v An index is in REBUILD-pending status.
v No image copy of the index is available.
If you need to recover both the data and the indexes, and no image copies of the
indexes are available, use the following procedure:
1. Use RECOVER TABLESPACE to recover the data.
2. Run REBUILD INDEX on any related indexes to rebuild them from the data.
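A minimal sketch of this two-step sequence, using sample object names from the
examples elsewhere in this book:
RECOVER TABLESPACE DSN8D91A.DSN8S91E
REBUILD INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E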
If you have image copies of both the table spaces and the indexes, you can recover
both sets of objects in the same RECOVER utility statement. The objects are
recovered from the image copies and logs.
| The RECOVER utility chooses the most recent backup (an image copy, a concurrent
| copy, or a system-level backup) to restore based on the recovery point for the table
| spaces or indexes (with the COPY YES attribute) being recovered.
| The RECOVER utility invokes DFSMShsm to restore the data sets for the object
| from the system-level backup of the database copy pool.
| To determine whether the system-level backups of the database copy pool reside
| on the disk or tape:
| 1. Run the DFSMShsm LIST COPYPOOL command with the ALLVOLS option.
| 2. Run the DSNJU004 utility. For data sharing, run the DSNJU004 utility on
| each member.
| 3. Review the output from the DFSMShsm LIST COPYPOOL command with the
| ALLVOLS option.
| 4. Review the DB2 system-level backup information in the DSNJU004 utility
| output.
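For step 1, the DFSMShsm command might look like the following sketch; the copy
pool name follows the DSN$locn-name$DB convention, and the location name
DSNDB0G is illustrative only:
LIST COPYPOOL(DSN$DSNDB0G$DB) ALLVOLS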
| If the system-level backup chosen as the recovery base for the database copy pool
| no longer resides on DASD and the FROMDUMP option has not been specified,
| then the recovery of the object will fail. You can then specify the RECOVER
| FROMDUMP option, or specify it on install panel DSNTIP6, to direct the utility to
| use the system-level backup that was dumped to tape. The RECOVER
| RESTOREBEFORE option can also be used to direct the utility to use a recovery
| base prior to the system-level backup.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
| Table space, index space, or index
Object that is to be recovered. If you want to
recover less than an entire table space:
v Use the DSNUM option to recover a partition or
data set.
v Use the PAGE option to recover a single page.
v Use the ERROR RANGE option to recover a
range of pages with I/O errors.
| Image copy data set Copy that RECOVER is to restore. DB2 accesses
| this information through the DB2 catalog. If you
| want to retain the tape volume mounts for your
| image copy data sets, refer to “Retaining tape
| mounts” on page 413 for more information.
| System-level backups The RECOVER utility chooses the most recent
| backup (an image copy, a concurrent copy, or a
| system-level backup) to restore based on the
| recovery point for the table spaces or indexes (with
| the COPY YES attribute) being recovered. If you
| want to learn more about how RECOVER uses
| system-level backups, refer to “How to use a
| system-level backup” on page 390 for more
| information. The RESTORE SYSTEM utility uses
| the most recent system-level backup of the
| database copy pool that DB2 took prior to the
| SYSPITR log truncation point. If you want to
| determine which system-level backups DB2
| restores, refer to “How to determine which
| system-level backups DB2 restores” on page 588 for
| more information.
To recover multiple table spaces, create a list of table spaces that are to be
recovered; repeat the TABLESPACE keyword before each specified table space. The
following RECOVER statement specifies that the utility is to recover partition 2 of
the partitioned table space DSN8D91A.DSN8S91E, and recover the table space
DSN8D91A.DSN8S91D to the quiesce point (RBA X'000007425468').
RECOVER TABLESPACE DSN8D91A.DSN8S91E DSNUM 2
TABLESPACE DSN8D91A.DSN8S91D
TORBA X’000007425468’
Each table space that is involved is unavailable for most other applications until
recovery is complete. If you make image copies by table space, you can recover the
entire table space, or you can recover a data set or partition from the table space. If
you make image copies separately by partition or data set, you must recover the
partitions or data sets by running separate RECOVER operations. The following
example shows the RECOVER statement for recovering four data sets in database
DSN8D91A, table space DSN8S91E:
RECOVER PARALLEL (4)
TABLESPACE DSN8D91A.DSN8S91E DSNUM 1
TABLESPACE DSN8D91A.DSN8S91E DSNUM 2
TABLESPACE DSN8D91A.DSN8S91E DSNUM 3
TABLESPACE DSN8D91A.DSN8S91E DSNUM 4
Each of the 4 partitions will be restored in parallel. You can also schedule the
recovery of these data sets to run in four separate jobs.
If a table space or data set is in the COPY-pending status, recovering it might not
be possible. You can reset this status in several ways; for more information, see
“Resetting COPY-pending status” on page 287.
Objects that are to be restored from a system-level backup are restored by the main
task of the RECOVER utility, which invokes DFSMShsm.
| Each object can have a different base from which to recover: system-level backup,
| image copy, or concurrent copy.
RECOVER does not place dependent table spaces that are related by informational
referential constraints into CHECK-pending status.
If referential integrity violations are not an issue, you can run a separate job to
recover each table space.
When you specify the PARALLEL keyword, DB2 supports parallelism during the
RESTORE phase and performs recovery as follows:
v During initialization and setup (the UTILINIT recover phase), the utility locates
the full and incremental copy information for each object in the list from
SYSIBM.SYSCOPY.
v The utility sorts the list of objects for recovery into lists to be processed in
parallel according to the number of tape volumes, file sequence numbers, and
sizes of each image copy.
v The number of objects that can be restored in parallel depends on the maximum
number of available tape devices and on how many tape devices the utility
requires for the incremental and full image copy data sets. You can control the
number of objects that are to be processed in parallel on the PARALLEL
keyword. You can control the number of dynamically allocated tape drives on
the TAPEUNITS keyword, which is specified with the PARALLEL keyword.
v If an object in the list requires a DB2 concurrent copy, the utility sorts the object
in its own list and processes the list in the main task, while the objects in the
other sorted lists are restored in parallel. If the concurrent copies that are to be
restored are on tape volumes, the utility uses one tape device and counts it
toward the maximum value that is specified for TAPEUNITS.
| v If objects in the list require a system-level backup that has been dumped to tape
| as their recovery base (that is, the FROMDUMP option has been specified), the
| DB2 RECOVER utility invokes DFSMShsm to restore the data sets for those
| objects in parallel. The degree of parallelism is capped by the maximum number
| of tasks that RECOVER can start, and DFSMShsm restores the data sets in
| parallel based on its installation options.
If image copies are taken at the data set level, RECOVER must be performed at the
data set level. To recover the whole table space, you must recover all the data sets
individually in one or more RECOVER steps. If recovery is attempted at the table
space level, DB2 returns an error message.
Alternatively, if image copies are taken at the table space, index, or index space
level, you can recover individual data sets by using the DSNUM parameter.
RECOVER merges the full image copy with any incremental image copies, and all of
the image copy data sets must be available to the utility at the same time. If this
requirement is likely to strain your system resources, for example, by demanding
more tape units than are available, consider running MERGECOPY regularly to
merge image copies into one copy.
Even if you do not periodically merge multiple image copies into one copy when
you do not have enough tape units, the utility can still complete. RECOVER
dynamically allocates the full image copy and attempts to dynamically allocate all
the incremental image copy data sets. If RECOVER successfully allocates every
incremental copy, recovery proceeds to merge pages to table spaces and apply the
log. If a point is reached where an incremental copy cannot be allocated,
RECOVER notes the log RBA or LRSN of the last successfully allocated data set.
Attempts to allocate incremental copies cease, and the merge proceeds using only
the allocated data sets. The log is applied from the noted RBA or LRSN, and the
incremental image copies that were not allocated are ignored.
Recovering a page
Using RECOVER PAGE enables you to recover data on a page that is damaged. In
some situations, you can determine (usually from an error message) which page of
an object has been damaged. You can use the PAGE option to recover a single
page. You can use the CONTINUE option to continue recovering a page that was
damaged during the LOGAPPLY phase of a RECOVER operation.
Recovering a page by using PAGE and CONTINUE: Suppose that you start
RECOVER for table space TSPACE1. During processing, message DSNI012I
informs you of a problem that damages page number 5. RECOVER completes, but
the damaged page, number 5, is in a stopped state and is not recovered. When
RECOVER ends, message DSNU501I informs you that page 5 is damaged.
If more than one page is damaged during RECOVER, perform the preceding steps
for each damaged page.
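After the problem with the page is resolved, a control statement of the following
general form (the unqualified table space name and the page number are taken
from this scenario and are illustrative) recovers the damaged page:
RECOVER TABLESPACE TSPACE1 PAGE 5 CONTINUE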
The following RECOVER statement specifies that the utility is to recover any
current error range problems for table space TS1:
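A statement of the following general form accomplishes this; the database qualifier
DSN8D91A is illustrative, and you would substitute the database that contains TS1:
RECOVER TABLESPACE DSN8D91A.TS1 ERROR RANGE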
Recovering an error range is useful when the range is small, relative to the object
containing it; otherwise, recovering the entire object is preferable.
Message DSNU086I indicates that I/O errors were detected on a table space and
that you need to recover it. Before you attempt to use the ERROR RANGE option
of RECOVER, you should run the ICKDSF service utility to correct the disk error.
If an I/O error is detected during RECOVER processing, DB2 issues message
DSNU538I to identify which target tracks are involved. The message provides
enough information to run ICKDSF correctly.
During the recovery of the entire table space or index space, DB2 might still
encounter I/O errors that indicate DB2 is still using a bad volume. For
user-defined data sets, you should use Access Method Services to delete the data
sets and redefine them with the same name on a new volume. If you use DB2
storage groups, you can remove the bad volume from the storage group by using
| ALTER STOGROUP. If you use DFSMS storage groups, you should also remove
| the bad volume from the DFSMS storage group.
| To recover a set of objects with LOB relationships, you should run RECOVER with
| the TOLOGPOINT option to identify a common recoverable point for all objects.
| For a non-LOB table space, or a LOB table space with a base table space that has
| the NOT LOGGED attribute, the logging attribute of the table space must meet
| the following conditions:
| v For recovery to the current point in time, the current value of the logging
| attribute of the object must match the logging attribute at the most current
| recoverable point.
| v For recovery to a prior point in time, the current value of the logging attribute
| of the object must match the logging attribute at the time that is specified by
| TOLOGPOINT, TORBA, TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY
Because the data sets are restored offline without DB2 involvement, RECOVER
LOGONLY checks that the data set identifiers match those that are in the DB2
catalog. If the identifiers do not match, message DSNU548I is issued, and the job
terminates with return code 8.
To ensure that no other transactions can access DB2 objects between the time that
you restore a data set and the time that you run RECOVER LOGONLY, follow
these steps:
1. Stop the DB2 objects that are being recovered by issuing the following
command:
-STOP DATABASE(database-name) SPACENAM(space-name)
2. Restore all DB2 data sets that are being recovered.
3. Start the DB2 objects that are being recovered by issuing the following
command:
-START DATABASE(database-name) SPACENAM(space-name) ACCESS(UT)
4. Run the RECOVER utility without the TORBA or TOLOGPOINT parameters
and with the LOGONLY parameter to recover the DB2 data sets to the current
point in time and to perform forward recovery using DB2 logs. If you want to
recover the DB2 data sets to a prior point in time, run the RECOVER utility
with either TORBA or TOLOGPOINT, and with the LOGONLY parameters.
5. If you did not recover related indexes in the same RECOVER control statement,
rebuild all indexes on the recovered object.
6. Issue the following command to allow access to the recovered object if the
recovery completes successfully:
-START DATABASE(database-name) SPACENAM(space-name) ACCESS(RW)
With the LOGONLY option, when recovering a single piece of a multi-piece linear
page set, RECOVER opens the first piece of the page set. If the data set is migrated
by DFSMShsm, the data set is recalled by DFSMShsm. Without LOGONLY, no data
set recall is requested.
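For example, a control statement of the following general form (the object name is
illustrative) applies log records to a data set that was restored outside of DB2 and
recovers the object to the current point in time:
RECOVER TABLESPACE DSN8D91A.DSN8S91E LOGONLY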
For all catalog and directory table spaces, you can list the IBM-defined indexes that
have the COPY YES attribute in the same RECOVER utility statement.
The catalog and directory objects that are listed in step 15 in the preceding list can
be grouped together for recovery. You can specify them as a list of objects in a
single RECOVER utility statement. When you specify all of these objects in one
statement, the utility needs to make only one pass of the log for all objects during
the LOGAPPLY phase and can use parallelism when restoring the image copies in
the RESTORE phase. Thus, these objects are recovered faster.
Recovery of the items on the list can be done concurrently or included in the same
job step. However, some restrictions apply:
1. When you recover the following table spaces or indexes, the job step in which
the RECOVER statement appears must not contain any other utility statements.
No other utilities can run while the RECOVER utility is running.
v DSNDB01.SYSUTILX
v All indexes on SYSUTILX
v DSNDB01.DBD01
2. When you recover the following table spaces, no other utilities can run while
the RECOVER utility is running. Other utility statements can exist in the same
job step.
v DSNDB06.SYSCOPY
v DSNDB01.SYSLGRNX
v DSNDB06.SYSDBAUT
v DSNDB06.SYSUSER
v DSNDB06.SYSDBASE
Why the order is important: To recover one object, RECOVER must obtain
information about it from some other object. Table 64 lists the objects from which
RECOVER must obtain information.
Table 64. Objects that the RECOVER utility accesses
Object name: DSNDB01.SYSUTILX
Reason for access by RECOVER: Utility restart information. The object is not
accessed when it is recovered; RECOVER for this object is not restartable, and no
other commands can be in the same job step. SYSCOPY information for SYSUTILX
is obtained from the log.
Planning for point-in-time recovery for the catalog, directory, and all user objects:
When you recover the DB2 catalog, directory, and all user objects, consider the
entire catalog and directory, including all table spaces and index spaces, as one
logical unit. Recover all objects in the catalog, directory, and all user objects to the
same point of consistency. If a point-in-time recovery of the catalog, directory, and
all user objects is planned, a separate quiesce of the DSNDB06.SYSCOPY table
space is required after a quiesce of the other catalog and directory table spaces.
You should be aware of some special considerations when you are recovering
catalog, directory, and all user objects to a point in time in which the DB2
subsystem was in a different mode; for example, your DB2 subsystem might
currently be in new-function mode, but you need to recover to a point in time at
which the subsystem was in compatibility mode. For details, see Part 4 of DB2
Administration Guide.
Recommendation: Before you recover the DB2 catalog, directory, and all user
objects to a prior point in time, shut down the DB2 system cleanly and then restart
the system in ACCESS(MAINT) mode. Recover the catalog and directory objects to the
current state. You can use sample queries and documentation, which are provided
in DSNTESQ in the SDSNSAMP sample library, to check the consistency of the
catalog.
Indexes are rebuilt by REBUILD INDEX. If the only items you have recovered are
table spaces in the catalog or directory, you might need to rebuild their indexes.
Use the CHECK INDEX utility to determine whether an index is inconsistent with
the data it indexes. You can use the RECOVER utility to recover catalog and
directory indexes if the index was defined with the COPY YES attribute and if you
have a full index image copy.
You must recover the catalog and directory before recovering user table spaces.
Be aware that the following table spaces, along with their associated indexes, do
not have entries in SYSIBM.SYSLGRNX, even if they were defined with COPY
YES:
v DSNDB01.SYSUTILX
v DSNDB01.DBD01
v DSNDB01.SYSLGRNX
v DSNDB06.SYSCOPY
v DSNDB06.SYSGROUP
v DSNDB01.SCT02
v DSNDB01.SPT01
These objects are assumed to be open from the point of their last image copy, so
the RECOVER utility processes the log from that point forward.
Point-in-time recovery: Full recovery of the catalog and directory table spaces and
indexes is strongly recommended. However, if you need to plan for point-in-time
recovery of the catalog and directory, here is a way to create a point of consistency
(a sample quiesce sequence follows these steps):
1. Quiesce all catalog and directory table spaces in a list, except for
DSNDB06.SYSCOPY and DSNDB01.SYSUTILX.
2. Quiesce DSNDB06.SYSCOPY.
Recommendation: Quiesce DSNDB06.SYSCOPY in a separate utility statement;
when you recover DSNDB06.SYSCOPY to its own quiesce point, it contains the
ICTYPE = 'Q' (quiesce) SYSCOPY records for the other catalog and directory
table spaces.
3. Quiesce DSNDB01.SYSUTILX in a separate job step.
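The following statements sketch this sequence; the catalog table spaces in the first
statement are abbreviated for illustration, and the actual list must name all catalog
and directory table spaces except DSNDB06.SYSCOPY and DSNDB01.SYSUTILX:
QUIESCE TABLESPACE DSNDB06.SYSDBASE
        TABLESPACE DSNDB06.SYSDBAUT
        TABLESPACE DSNDB06.SYSUSER
QUIESCE TABLESPACE DSNDB06.SYSCOPY
QUIESCE TABLESPACE DSNDB01.SYSUTILX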
If you need to recover to a point in time, recover DSNDB06.SYSCOPY and
DSNDB01.SYSUTILX to their own quiesce points, and recover other catalog and
directory table spaces to their common quiesce point. The catalog and directory
objects must be recovered in a particular order, as described in “Why the order is
important” on page 399.
Reinitializing DSNDB01.SYSUTILX
You need to reinitialize the DSNDB01.SYSUTILX directory table space if both of
the following conditions are true:
v You cannot successfully execute the -DIS UTIL and -TERM UTIL commands,
because DSNDB01.SYSUTILX is damaged.
v You cannot recover DSNDB01.SYSUTILX, because errors occur in the
LOGAPPLY phase.
1. Issue the -DIS DB(*) SPACENAM(*) RESTRICT command and analyze the
output. Write down the following items:
v All of the objects with a utility in progress (The objects in UTUT, UTRO, or
UTRW status have utilities in progress.)
v Any pending states for these objects (RECP, CHKP, and COPY are examples
of pending states. For a complete list, see Appendix C, “Advisory or
restrictive states,” on page 895.)
2. Edit the following installation jobs so that they contain only the commands that
pertain to DSNDB01.SYSUTILX:
DSNTIJDE
Delete VSAM LDS for DSNDB01.SYSUTILX.
DSNTIJIN
Define VSAM LDS for DSNDB01.SYSUTILX and tailor the AMS DEFINE
command to fit the needs of your DB2 system.
DSNTIJID
Initialize DSNDB01.SYSUTILX.
3. Run the three edited installation jobs in the order listed.
4. Issue the -START DB(dbname) ACCESS(UT) command for each database that
has objects with a utility in progress.
5. Issue the -START DB(dbname) SPACENAM(spname) ACCESS(FORCE) command
on each object with a utility in progress. This action clears all utilities that are
in progress or in pending states. (Any pending states are cleared, but you still
need to resolve the pending states as directed in the next step.)
6. Resolve the pending states for each object by running the appropriate utility.
For example, if an object was in the RECP status, run the RECOVER utility. For
more information about how to resolve pending states, see Appendix C,
“Advisory or restrictive states,” on page 895.
7. Issue -START DB(dbname) ACCESS(RW) for each database.
| The status of an object that is related to a LOB or XML table space can change due
| to a recovery operation, depending on the type of recovery that is performed. If all
| of the following objects for all LOB or XML columns are recovered in a single
| RECOVER utility statement to the present point in time, no pending status exists:
| v Base table space
| v Index on the auxiliary table
| v LOB table space
| v XML table space
| Refer to Table 65 on page 403 for information about the status of a base table
| space, index on the auxiliary table, LOB table space, or XML table space that was
| recovered without its related objects.
| Table 65. Object status after being recovered without its related objects
| Each entry lists the object that is recovered, the type of recovery, and the resulting
| status of the base table space, of the index on the auxiliary table (ROWID, node ID,
| or XML), and of the LOB or XML table space, in that order.
| v Base table space, current RBA or LRSN: none; none; none.
| v Base table space, point-in-time: CHECK-pending (note 1); none; none.
| v Index on the auxiliary table (ROWID, node ID, or XML), current RBA or LRSN:
| none; none; none.
| v Index on the auxiliary table (ROWID, node ID, or XML), point-in-time: none;
| CHECK-pending (note 1); none.
| v LOB or XML table space, current RBA or LRSN, LOB or XML table space that is
| defined with LOG(YES): none; none; none.
| v LOB or XML table space, current RBA or LRSN, LOB or XML table space that is
| defined with LOG(NO): none; none; auxiliary warning (note 2).
| v LOB or XML table space, TOCOPY where the COPY was SHRLEVEL REFERENCE:
| CHECK-pending (note 1); REBUILD-pending; none.
| v LOB or XML table space, TOCOPY where the COPY was SHRLEVEL CHANGE:
| CHECK-pending (note 1); REBUILD-pending; CHECK-pending or auxiliary
| warning (note 1).
| v LOB or XML table space, TOLOGPOINT or TORBA (not a quiesce point):
| CHECK-pending (note 1); REBUILD-pending; CHECK-pending or auxiliary
| warning (note 1).
| v LOB or XML table space, TOLOGPOINT or TORBA (at a quiesce point):
| CHECK-pending (note 1); REBUILD-pending; none.
| Notes:
| 1. RECOVER does not place dependent table spaces that are related by informational referential constraints into
| CHECK-pending status.
| 2. If, at any time, a log record is applied to the LOB or XML table space and a LOB or XML is consequently marked
| invalid, the LOB or XML table space is set to auxiliary warning status.
| For information about resetting any of these statuses, see Appendix C, “Advisory
| or restrictive states,” on page 895.
Because a point-in-time recovery of only the table space leaves data in a consistent
state and indexes in an inconsistent state, you must rebuild all indexes by using
REBUILD INDEX. For more information, see “Resetting the REBUILD-pending
status” on page 372.
| After recovering a set of table spaces to a point in time, you can use CHECK
| DATA to check for inconsistencies. The auxiliary CHECK-pending status (ACHKP)
| is set when the CHECK DATA utility detects an inconsistency between a base table
| space with defined LOB or XML columns and a LOB or XML table space. For
| information about how to reset the ACHKP status, see Appendix C, “Advisory or
| restrictive states,” on page 895.
You can also use point-in-time recovery, with any of the point-in-time recovery
options, to recover all user-defined table spaces and indexes that are in
refresh-pending (REFP) status.
For more information about recovering data to a prior point of consistency, see
Part 4 of DB2 Administration Guide.
If you run the REORG utility to turn off a REORG-pending status, and then
recover to a point in time before that REORG job, DB2 sets restrictive statuses on
all partitions that you specified in the REORG job, as follows:
v Sets REORG-pending (and possibly CHECK-pending) on for the data partitions
v Sets REBUILD-pending on for the associated index partitions
v Sets REBUILD-pending on for the associated logical partitions of nonpartitioned
secondary indexes
For information about resetting these restrictive statuses, see “REORG-pending
status” on page 901 and “REBUILD-pending status” on page 899.
Using offline copies to recover after rebalancing partitions: To recover data after a
REORG job redistributes the data among partitions, use RECOVER LOGONLY. If
you perform a point-in-time recovery, you must keep the offline copies
synchronized with the SYSCOPY records. Therefore, do not use the MODIFY
RECOVERY utility to delete any SYSCOPY records with an ICTYPE column value
of 'A' because these records might be needed during the recovery. Delete these
SYSCOPY records only when you are sure that you no longer need to use the
offline copies that were taken before the REORG that performed the rebalancing.
Actions that can affect recovery status
When you perform the following actions before you recover a table space, the
recovery status is affected as described:
v If you alter a table to rotate partitions:
– You can recover the partition to the current time.
– You can recover the partition to a point in time after the alter. The utility can
use a recovery base, (for example, a full image copy, a REORG LOG YES
operation, or a LOAD REPLACE LOG YES operation) that occurred prior to
the alter.
– You cannot recover the partition to a point in time prior to the alter; the
recovery fails with message DSNU556I and return code 8.
v If you change partition boundaries with ALTER or REORG REBALANCE:
– You can recover the partition to the current time if a recovery base (for
example, a full image copy, a REORG LOG YES operation, or a LOAD
REPLACE LOG YES operation) exists.
– You can recover the partition to a point in time after the alter.
– You can recover the partitions that are affected by the boundary change to a
point in time prior to the alter; RECOVER sets REORG-pending status on the
affected partitions and you must reorganize the table space or range of
partitions. All affected partitions must be in the recovery list of a single
RECOVER statement.
v If you alter a table to add a partition:
– You can recover the partition to the current time.
– You can recover the partition to a point in time after the alter.
– You can recover the partition to a point in time prior to the alter; RECOVER
resets the partition to be empty.
When you perform the following actions before you recover an index to a prior
point in time or to the current time, the recovery status is affected as described:
v If you alter the data type of a column to a numeric data type, you cannot
recover the index until you take a full image copy of the index. However, the
index can be rebuilt.
v If you alter an index to NOT PADDED or PADDED, you cannot recover the
index until you take a full image copy of the index. However, the index can be
rebuilt.
For information about recovery status, see Appendix C, “Advisory or restrictive
states,” on page 895.
To improve the performance of the recovery, take a full image copy of the table
space or set of table spaces, and then quiesce them by using the QUIESCE utility.
This action enables RECOVER TORBA or TOLOGPOINT to recover the table
spaces to the quiesce point with minimal use of the log.
If possible, specify a table space and all of its indexes (or a set of table spaces and
all related indexes) in the same RECOVER utility statement, and specify
TOLOGPOINT or TORBA to identify a QUIESCE point. This action avoids placing
indexes in the CHECK-pending or REBUILD-pending status. If the TOLOGPOINT
is not a common QUIESCE point for all objects, use the following procedure:
1. RECOVER table spaces to the value for TOLOGPOINT (either an RBA or
LRSN).
2. Use concurrent REBUILD INDEX jobs to recover the indexes over each table
space.
This procedure ensures that the table spaces and indexes are synchronized, and it
eliminates the need to run the CHECK INDEX utility.
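For example, the following statements sketch this procedure; the object names and
the log point are illustrative:
RECOVER TABLESPACE DSN8D91A.DSN8S91E
        TABLESPACE DSN8D91A.DSN8S91D
        TOLOGPOINT X'00000551BE7D'
REBUILD INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E
REBUILD INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91D
The two REBUILD INDEX jobs can run concurrently after the RECOVER job
completes.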
| When using RECOVER with the TORBA or TOLOGPOINT option, ensure that all
| of the objects that are changed by the active units of recovery at the recovery point
| are recovered to the same point in time so that they are synchronized:
| v DB2 rolls back changes that were made by units of recovery that are inflight,
| inabort, postponed abort, or indoubt at the recovery point in time.
| v DB2 does not roll back changes that were made by units of recovery that are
| incommit at the recovery point in time.
| v DB2 rolls back only changes to objects in the RECOVER statement.
RECOVER does not place dependent table spaces that are related by informational
referential constraints into CHECK-pending status.
The TORBA and TOLOGPOINT options set the CHECK-pending status for table
spaces when you perform any of the following actions:
| v Recover all members of a set of table spaces that are to be recovered to the same
| point in time, but referential constraints were defined for a dependent table after
| that point in time. Table spaces that contain those dependent tables are placed in
| CHECK-pending status.
v Recover table spaces with defined LOB or XML columns without recovering
their LOB or XML table spaces.
To avoid setting CHECK-pending status, you must perform both of the following
steps:
v Recover all dependent objects to the same point in time.
If you do not recover each table space to the same quiesce point, and if any of
the table spaces are part of a referential integrity structure, the following actions
occur:
– All dependent table spaces that are recovered are placed in CHECK-pending
status with the scope of the whole table space.
– All dependent table spaces of the recovered table spaces are placed in
CHECK-pending status with the scope of the specific dependent tables.
v Do not add table check constraints or referential constraints after the point in
time to which you want to recover.
| If you recover each table space of a table space set to the same point in time, but
| referential constraints were defined after the same point in time, the
| CHECK-pending status is set for the table space that contains the table with the
| referential constraint.
The TORBA and TOLOGPOINT options set the CHECK-pending status for indexes
when you recover one or more of the indexes to a previous point in time, but you
do not recover the related table space in the same RECOVER statement.
| You can turn off CHECK-pending status for an index by using the TORBA and
| TOLOGPOINT options. Recover indexes along with the related table space to the
| same point in time (preferably a quiesce point) or SHRLEVEL REFERENCE point.
| RECOVER processing resets the CHECK-pending status for all indexes in the same
| RECOVER statement.
For information about resetting the CHECK-pending status of table spaces, see
Chapter 8, “CHECK DATA,” on page 61. For information about resetting the
CHECK-pending status for indexes, see “CHECK-pending status” on page 897.
| Use the RESTOREBEFORE option and specify the RBA or LRSN of the image copy,
| concurrent copy, or system-level backup that you want to avoid, and RECOVER
| will search for an older recovery base. The RECOVER utility then applies log
| records to restore the object to its current state or the specified TORBA or
| TOLOGPOINT value.
Image copy on tape: If the image copy is on tape, messages IEF233D and IEF455D
request the tape for RECOVER, as shown in the following example:
IEF233D M BAB,COPY ,,R92341QJ,DSNUPROC,
OR RESPOND TO IEF455D MESSAGE
*42 IEF455D MOUNT COPY ON BAB FOR R92341QJ,DSNUPROC OR REPLY ’NO’
R 42,NO
IEF234E K BAB,COPY ,PVT,R92341QJ,DSNUPROC
By replying NO, you can initiate the fallback to the previous image copy.
RECOVER responds with messages DSNU030I and DSNU508I, as shown in the
following example:
DSNU030I csect-name - UNABLE TO ALLOCATE R92341Q.UTQPS001.FCOPY010
RC=4, CODE=X’04840000’
DSNU508I csect-name - IN FALLBACK PROCESSING TO PRIOR FULL IMAGE COPY
Reason code X'0484' means that the request was denied by the operator.
Image copy on disk: If the image copy is on disk, you can delete or rename the
image copy data set before RECOVER starts executing. RECOVER issues messages
DSNU030I and DSNU508I, as shown in the following example:
DSNU030I csect-name - UNABLE TO ALLOCATE R92341Q.UTQPS001.FCOPY010,
RC=4, CODE=X’17080000’
DSNU508I csect-name - IN FALLBACK PROCESSING TO PRIOR FULL IMAGE COPY
Reason code X'1708' means that the ICF catalog entry cannot be found.
Improving performance
To improve recovery time, consider enabling the Fast Log Apply function on the
DB2 subsystem. For more information about enabling this function, see the LOG
APPLY STORAGE field on panel DSNTIPL, in Part 2 of DB2 Installation Guide.
Use MERGECOPY to merge your table space image copies before recovering the
table space. If you do not merge your image copies, RECOVER automatically
merges them. If RECOVER cannot allocate all the incremental image copy data sets
when it merges the image copies, RECOVER uses the log instead.
Include a list of table spaces and indexes in your RECOVER utility statement to
apply logs in a single scan of the logs.
If you use RECOVER TOCOPY for full image copies, you can improve
performance by using data compression. The improvement is proportional to the
degree of compression.
Consider specifying the PARALLEL keyword to restore image copies from disk or
tape to a list of objects in parallel.
If possible, DB2 reads the required log records from the active log to provide the
best performance.
Any log records that are not found in the active logs are read from the archive log
data sets, which are dynamically allocated to satisfy the requests. The type of
storage that is used for archive log data sets is a significant factor in the
performance. Consider the following actions to improve performance:
v RECOVER a list of objects in one utility statement to take only a single pass of
the log.
v Keep archive logs on disk to provide the best possible performance.
v Control archive log data sets by using DFSMShsm to provide the next best
performance. DB2 optimizes recall of the data sets. After the data set is recalled,
DB2 reads it from disk.
v If the archive log must be read from tape, DB2 optimizes access by means of
ready-to-process and look-ahead mount requests. DB2 also permits delaying the
deallocation of a tape drive if subsequent RECOVER jobs require the same
archive log tape. Those methods are described in more detail in the subsequent
paragraphs.
The BSDS contains information about which log data sets to use and where they
reside. You must keep the BSDS information current. If the archive log data sets
are cataloged, the ICF catalog indicates where to allocate the required data set.
DFSMShsm data sets: The recall of the first DFSMShsm archive log data set starts
automatically when the LOGAPPLY phase starts. When the recall is complete and
the first log record is read, the recall for the next archive log data set starts. This
process is known as look-ahead recalling. Its purpose is to recall the next data set
while it reads the preceding one.
When a recall is complete, the data set is available to all RECOVER jobs that
require it. Reading proceeds in parallel.
Non-DFSMShsm tape data sets: DB2 reports on the console all tape volumes that
are required for the entire job. The report distinguishes two types of volumes:
v Any volume that is not marked with an asterisk (*) is required for the job to
complete. Obtain these volumes from the tape library as soon as possible.
v Any volume that is marked with an asterisk (*) contains data that is also
contained in one of the active log data sets. The volume might or might not be
required.
As tapes are mounted and read, DB2 makes two types of mount requests:
v Ready-to-process: The current job needs this tape immediately. As soon as the tape
is loaded, DB2 allocates and opens it.
v Look-ahead: This is the next tape volume that is required by the current job.
Responding to this request enables DB2 to allocate and open the data set before
it is needed, thus reducing overall elapsed time for the job.
You can dynamically change the maximum number of input tape units that are
used to read the archive log by specifying the COUNT option of the SET
ARCHIVE command. For example, use the following command to assign 10 tape
units to your DB2 subsystem:
-SET ARCHIVE COUNT (10)
The DISPLAY ARCHIVE READ command shows the currently mounted tape
volumes and their statuses.
Delayed deallocation: DB2 can delay deallocating the tape units used to read the
archive logs. This is useful when several RECOVER utility statements run in
parallel. By delaying deallocation, DB2 can re-read the same volume on the same
tape unit for different RECOVER jobs, without taking time to allocate it again.
You can dynamically change the amount of time that DB2 delays deallocation by
using the TIME option of the SET ARCHIVE command. For example, to specify a
60 minute delay, issue the following command:
-SET ARCHIVE TIME(60)
In a data sharing environment, you might want to specify zero (0) to avoid having
one member hold onto a data set that another member needs for recovery.
Performance summary:
1. Achieve the best performance by allocating archive logs on disk.
2. Consider staging cataloged tape data sets to disk before allocation by the log
read process.
3. If the data sets are read from tape, set both the COUNT and the TIME values
to the maximum allowable values within the system constraints.
If the number of tape units that RECOVER requires exceeds the number of
available online and offline units, and the RECOVER job
successfully allocates all available units, the job waits for more units to become
available.
For example, if the incremental image copies are on tape and an adequate number
of tape drives are not available, RECOVER does not use the remaining incremental
image copy data sets.
If one of the following actions occurs, the index remains untouched, and utility
processing terminates with return code 8:
v RECOVER processes an index for which no full copy exists.
v The copy cannot be used because of utility activity that occurred on the index or
on its underlying table space.
For more information, see page 138.
If you always make multiple image copies, RECOVER should seldom fall back to
an earlier point. Instead, RECOVER relies on the backup copy data set if the
primary copy data set is unusable.
RECOVER does not perform parallel processing for objects that are in backup or
fallback recovery. Instead, the utility performs non-parallel image copy allocation
processing of the objects. RECOVER defers the processing of objects that require
backup or fallback processing until all other objects are recovered, at which time
the utility processes the objects one at a time.
If the RECOVER utility cannot complete because of severe errors that are caused
by the damaged media, you might need to use Access Method Services (IDCAMS)
with the NOSCRATCH option to delete the cluster for the table space or index. If
the table space or index is defined by using STOGROUP, the RECOVER utility
automatically redefines the cluster. For user-defined table spaces or indexes, you
must redefine the cluster before invoking the RECOVER utility.
Terminating RECOVER
Terminating a RECOVER job with the TERM UTILITY command leaves the table
space that is being recovered in RECOVER-pending status, and the index space
that is being recovered in the REBUILD-pending status. If you recover a table
space to a previous point in time, its indexes are left in the REBUILD-pending
status. The data or index is unavailable until the object is successfully recovered or
| rebuilt. If the utility fails in the LOGAPPLY, LOGCSR, or LOGUNDO phase, fix
| the problem that caused the job to stop, and restart the job rather than terminating
| it. For the remaining objects in the recovery job, the RECOVER utility restores the
original image copy and repeats the LOGAPPLY, LOGCSR, and LOGUNDO
processing for that subset of objects. All of the objects that are being recovered in
one recovery job are available to applications at the end of the RECOVER utility,
even those objects that have no active units of recovery operating on them and
therefore require no rollback.
Restarting RECOVER
You can restart RECOVER from the last commit point (RESTART(CURRENT)) or
the beginning of the phase (RESTART(PHASE)). By default, DB2 uses
RESTART(CURRENT).
In both cases, you must identify and fix the causes of the failure before performing
a current restart.
| If RECOVER fails in the LOGCSR phase and you restart the utility, the utility
| restart behavior is RESTART(PHASE).
| If RECOVER fails in the LOGUNDO phase and you restart the utility, the utility
| repeats the RESTORE, LOGAPPLY, LOGCSR, and LOGUNDO phases for only
| those objects that had active units of recovery that needed to be handled and that
| did not complete undo processing prior to the failure.
Table 66 on page 415 shows which claim classes RECOVER claims and drains and
any restrictive state that the utility sets on the target object.
Legend:
v CHKP (YES): Concurrently running applications enter CHECK-pending
after commit
v CW: Claim the write claim class
v DA: Drain all claim classes, no concurrent SQL access
v DR: Drain the repeatable read class, no concurrent access for SQL
repeatable readers
v RI: Referential integrity
v UTRW: Utility restrictive state, read-write access allowed
v UTUT: Utility restrictive state, exclusive control
v none: Object is not affected by this utility
Notes:
1. During the UTILINIT phase, the claim and restrictive states change from
DA/UTUT to CW/UTRW.
| 2. Includes document ID indexes and node ID indexes over nonpartitioned XML
| table spaces and XML indexes.
| 3. Includes document ID indexes and node ID indexes over partitioned XML table
| spaces.
RECOVER does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Table 67 shows which utilities can run concurrently with RECOVER on the same
target object. The target object can be a table space, an index space, or a partition
of a table space or index space. If compatibility depends on particular options of a
utility, that information is also documented in the table.
Table 67. Compatibility of RECOVER with other utilities
The columns are: Action; compatible with RECOVER (no option)?; compatible with
RECOVER TOCOPY or TORBA?; compatible with RECOVER ERROR RANGE?
CHECK DATA: No; No; No
CHECK INDEX: No; No; No
CHECK LOB: No; No; No
COPY INDEXSPACE: No; No; No
COPY TABLESPACE: No; No; No
To run on DSNDB01.SYSUTILX, RECOVER must be the only utility in the job step
and the only utility running in the DB2 subsystem.
RECOVER on any catalog or directory table space is an exclusive job; such a job
can interrupt another job between job steps, possibly causing the interrupted job to
time out.
Example 3: Recovering a table space partition to the last image copy that was
taken. The following control statement specifies that the RECOVER utility is to
recover the first partition of table space DSN8D81A.DSN8S81D to the last image
copy that was taken. If the last image copy that was taken is a full image copy, this
full image copy is restored. If the last image copy that was taken is an incremental
image copy, the most recent full image copy, along with any incremental image
copies, are restored.
RECOVER TABLESPACE DSN8D81A.DSN8S81D DSNUM 1 TOLASTCOPY
Example 5: Recovering an index to the last full image copy that was taken
without deleting and redefining the data sets. The following control statement
specifies that the RECOVER utility is to recover index ADMF001.IADH082P to the
last full image copy. The REUSE option specifies that DB2 is to logically reset and
reuse DB2-managed data sets without deleting and redefining them.
RECOVER INDEX ADMF001.IADH082P REUSE TOLASTFULLCOPY
LISTDEF RCVR4_LIST
INCLUDE TABLESPACES TABLESPACE DBOL1002.TSOL1002
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1003 PARTLEVEL 3
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1003 PARTLEVEL 6
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1004 PARTLEVEL 5
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1004 PARTLEVEL 9
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IPOL1051 PARTLEVEL 22
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IPOL1061 PARTLEVEL 10
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IXOL1062
Figure 73. Example RECOVER control statement with the CURRENTCOPYONLY option
Figure 74. Example RECOVER control statement for a list of objects on tape
Example 9: Recovering clone table data. The following control statement specifies
that the RECOVER utility is to recover only clone table data in
DBA90601.TLX9061A and recover the data to the last image copy that was taken.
The REUSE option specifies that RECOVER is to logically reset and reuse
DB2-managed data sets without deleting and redefining them.
RECOVER TABLESPACE DBA90601.TLX9061A REUSE TOLASTCOPY
CLONE
| Example 10: Recovering by using an earlier image copy. The following control statement specifies
that the RECOVER utility is to search for an image copy with an RBA or LRSN
value earlier than the specified X'00000551BE7D' value to use in the RESTORE
phase. Only specified dumps of the database copy pool are used for the restore of
the data sets.
RECOVER LIST RCVRLIST RESTOREBEFORE X’00000551BE7D’ PARALLEL(4)
FROMDUMP DUMPCLASS(dcname)
REORG INDEX
You can determine when to run REORG INDEX by using the LEAFDISTLIMIT
catalog query option. If you specify the REPORTONLY option, REORG INDEX
produces a report that indicates whether a REORG is recommended; in this case, a
REORG is not performed. These options are not available for indexes on the
directory.
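For example, a control statement of the following general form (the index name and
threshold are illustrative) reports whether a reorganization is recommended without
performing one:
REORG INDEX ADMF001.IADH082P LEAFDISTLIMIT 300 REPORTONLY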
For a diagram of REORG INDEX syntax and a description of available options, see
“Syntax and options of the REORG INDEX control statement” on page 420. For
detailed guidance on running this utility, see “Instructions for running REORG
INDEX” on page 432.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v REORG privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL authority
v SYSADM authority
To execute this utility on an index space in the catalog or directory, you must use a
privilege set that includes one of the following authorities:
v REORG privilege for the DSNDB06 (catalog) database
v DBADM or DBCTRL authority for the DSNDB06 (catalog) database.
v Installation SYSOPR authority
v SYSCTRL authority
v SYSADM or Installation SYSADM authority
v STATS privilege for the database is required if STATISTICS keyword is specified.
While trying to reorganize an index space in the catalog or directory, a user with
authority other than installation SYSADM or installation SYSOPR might receive the
following message:
DSNT500I "resource unavailable"
If this problem occurs, run the REORG INDEX utility again, using an authorization ID with the installation
SYSADM or installation SYSOPR authority.
An ID with installation SYSOPR authority can also execute REORG INDEX, but
only on an index in the DSNDB06 database.
To run REORG INDEX STATISTICS REPORT YES, ensure that the privilege set
includes the SELECT privilege on the catalog tables and on the tables for which
statistics are to be gathered.
Execution phases of REORG INDEX: The REORG INDEX utility operates in these
phases:
Phase Description
UTILINIT Performs initialization and setup
UNLOAD Unloads index space and writes keys to a sequential data set.
BUILD Builds indexes. Updates index statistics.
LOG Processes log iteratively. Used only if you specify SHRLEVEL
CHANGE.
SWITCH Switches access between original and new copy of index space or
partition. Used only if you specify SHRLEVEL REFERENCE or
CHANGE.
UTILTERM Performs cleanup. For DB2-managed data sets and either
SHRLEVEL CHANGE or SHRLEVEL REFERENCE, the utility
deletes the original copy of the table space or index space.
Syntax diagram
The railroad syntax diagram for REORG INDEX is summarized here; see “Option
descriptions” for the complete set of options and their meanings.
v SHRLEVEL NONE is the default; SHRLEVEL REFERENCE is followed by
deadline-spec and drain-spec; SHRLEVEL CHANGE is followed by deadline-spec,
drain-spec, and change-spec.
v UNLOAD CONTINUE is the default; the alternatives are UNLOAD PAUSE (see
note 1) and UNLOAD ONLY.
v LEAFDISTLIMIT integer can be followed by REPORTONLY.
v stats-spec follows the STATISTICS keyword (see note 2).
v WORKDDN (SYSUT1) is the default; you can specify WORKDDN (ddname) and
PREFORMAT.
Notes:
1 You cannot use UNLOAD PAUSE with the LIST option.
2 You cannot specify any options in stats-spec with the UNLOAD ONLY option.
index-name-spec:
INDEX creator-id.index-name or INDEXSPACE database-name.index-space-name,
optionally followed by PART integer.
deadline-spec:
DEADLINE NONE is the default; the alternatives are DEADLINE timestamp and
DEADLINE labeled-duration-expression.
drain-spec:
Notes:
| 1 The default for DRAIN_WAIT is the value of the IRLMRWT subsystem parameter.
| 2 The default for RETRY is the value of the UTIMOUT subsystem parameter.
| 3 The default for RETRY_DELAY is the smaller of the following two values:
| v DRAIN_WAIT value × RETRY value
| v DRAIN_WAIT value × 10
change-spec:
TIMEOUT TERM is the default; the alternative is TIMEOUT ABEND.
Notes:
1 The default for MAXRO is the RETRY_DELAY default value.
labeled-duration-expression:
CURRENT_DATE or CURRENT_TIMESTAMP, plus or minus one or more constant
duration values.
stats-spec:
Includes HISTORY (ALL, ACCESSPATH, SPACE, or NONE) and FORCEROLLUP
(YES or NO).
correlation-stats-spec:
SORTNUM integer
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
INDEX creator-id.index-name
Specifies an index that is to be reorganized.
creator-id specifies the creator of the index and is optional. If you omit the
qualifier creator-id, DB2 uses the user identifier for the utility job. index-name is
the name of the index that is to be reorganized. For an index, you can specify
either an index name or an index space name. Enclose the index name in
quotation marks if the name contains a blank.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space that is obtained from the
SYSIBM.SYSINDEXES table.
database-name specifies the name of the database that is associated with the
index and is optional. The default is DSNDB04.
index-space-name specifies the qualified name of the index space that is to be
reorganized; the name is obtained from the SYSIBM.SYSINDEXES table.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. The INDEX
keyword is required to differentiate this REORG INDEX LIST from REORG
TABLESPACE LIST. The utility allows one LIST keyword for each control
statement of REORG INDEX. The list must not contain any table spaces.
REORG INDEX is invoked once for each item in the list. This utility will only
process clone data if the CLONE keyword is specified. The use of CLONED
YES on the LISTDEF statement is not sufficient. For more information about
LISTDEF specifications, see Chapter 15, “LISTDEF,” on page 185.
Do not specify STATISTICS INDEX index-name with REORG INDEX LIST. If
you want to collect inline statistics for a list of indexes, just specify
STATISTICS.
You cannot specify DSNUM and PART with LIST on any utility.
PART integer
Identifies a partition that is to be reorganized. You can reorganize a single
partition of a partitioning index. You cannot specify PART with LIST. integer
must be in the range from 1 to the number of partitions that are defined for
the partitioning index. The maximum is 4096.
integer designates a single partition.
If you omit the PART keyword, the entire index is reorganized.
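For example, a control statement of the following general form (the index name and
partition number are illustrative, and the index is assumed to be a partitioning
index) reorganizes a single partition:
REORG INDEX ADMF001.IADH082P PART 3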
REUSE
When used with SHRLEVEL NONE, specifies that REORG is to logically reset
and reuse DB2-managed data sets without deleting and redefining them. If you
do not specify REUSE and SHRLEVEL NONE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If a data set has multiple extents and you use the REUSE parameter, the
extents are not released.
If you specify SHRLEVEL REFERENCE or CHANGE with REUSE, REUSE does not
apply.
| CLONE
| Indicates that REORG INDEX is to reorganize only the specified index spaces
| and indexes that are defined on clone tables. This utility will only process
| clone data if the CLONE keyword is specified. The use of CLONED YES on
| the LISTDEF statement is not sufficient.
SHRLEVEL
Specifies the method for performing the reorganization. The parameter
following SHRLEVEL indicates the type of access that is to be allowed during
the RELOAD phase of REORG.
NONE
Specifies that reorganization is to operate by unloading from the area
that is being reorganized (while applications can read but cannot write
to the area), building into that area (while applications have no access),
and then allowing read-write access again. The default is NONE.
If you specify NONE (explicitly or by default), you cannot specify the
following parameters:
v MAXRO
v LONGLOG
v DELAY
v DEADLINE
v DRAIN_WAIT
v RETRY
v RETRY_DELAY
REFERENCE
Specifies that reorganization is to operate as follows:
v Unload from the area that is being reorganized while applications
can read but cannot write to the area.
v Build into a shadow copy of that area while applications can read
but cannot write to the original copy.
v Switch the future access of the applications from the original copy to
the shadow copy by exchanging the names of the data sets, and then
allowing read-write access again.
To determine which data sets are required when you execute REORG
SHRLEVEL REFERENCE, see “Data sets that REORG INDEX uses” on
page 433.
If you specify REFERENCE, you cannot specify the following
parameters:
v UNLOAD (Reorganization with REFERENCE always performs
UNLOAD CONTINUE.)
v MAXRO
v LONGLOG
v DELAY
CHANGE
Specifies that reorganization is to operate as follows:
v Unload from the area that is being reorganized while applications
can read and write to the area.
v Build into a shadow copy of that area while applications can read
and write to the original copy.
v Apply the log of the original copy to the shadow copy while
applications can read and usually write to the original copy.
v Switch the future access of the applications from the original copy to
the shadow copy by exchanging the names of the data sets, and then
allowing read-write access again.
To determine which data sets are required when you execute REORG
SHRLEVEL CHANGE, see “Data sets that REORG INDEX uses” on
page 433.
If you specify CHANGE, you cannot specify the UNLOAD parameter.
Reorganization with CHANGE always performs UNLOAD
CONTINUE.
| SHRLEVEL CHANGE cannot be specified if the table space has the
| NOT LOGGED attribute.
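For example, a control statement of the following general form (the index name is
illustrative) reorganizes an index while applications can read and write the data
during most of the process:
REORG INDEX ADMF001.IADH082P SHRLEVEL CHANGE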
DEADLINE
Specifies the deadline for the SWITCH phase to begin. If DB2 estimates that
the SWITCH phase does not begin by the deadline, DB2 issues the messages
that the DISPLAY UTILITY command issues and then terminates
reorganization.
NONE
Specifies that no deadline exists by which the switch phase of log
processing must begin. The default is NONE.
timestamp
Specifies the deadline for the switch phase of log processing to begin. This
deadline must not have already occurred when REORG is executed.
labeled-duration-expression
Calculates the deadline for the switch phase of log processing to begin. The
calculation is based on either CURRENT TIMESTAMP or CURRENT
DATE. You can add or subtract one or more constant values to specify the
deadline. This deadline must not have already occurred when REORG is
executed.
CURRENT_DATE
Specifies that the deadline is to be calculated based on the CURRENT
DATE.
CURRENT_TIMESTAMP
Specifies that the deadline is to be calculated based on the CURRENT
TIMESTAMP.
constant
Indicates a unit of time and is followed by one of the seven duration
keywords: YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS,
or MICROSECONDS. The singular form of these words is also
acceptable: YEAR, MONTH, DAY, HOUR, MINUTE, SECOND,
MICROSECOND.
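For example, a control statement of the following general form (the index name is
illustrative, and the keyword forms follow the descriptions above) sets a deadline of
two hours after the time at which the utility statement is evaluated:
REORG INDEX ADMF001.IADH082P SHRLEVEL REFERENCE
    DEADLINE CURRENT_TIMESTAMP + 2 HOURS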
RETRY integer
Specifies the maximum number of retries that REORG is to attempt. Valid
values for integer are from 0 to 255. If the keyword is omitted, the utility does
not attempt a retry.
Specifying RETRY can lead to increased processing costs and can result in
multiple or extended periods of read-only access. The default is the value of
the UTIMOUT subsystem parameter.
RETRY_DELAY integer
Specifies the minimum duration, in seconds, between retries. Valid values
for integer are from 1 to 1800.
| If you do not specify RETRY_DELAY, REORG INDEX uses the smaller of
| the following two values:
| v DRAIN_WAIT value × RETRY value
| v DRAIN_WAIT value × 10
MAXRO integer
Specifies the maximum amount of time for the last iteration of log processing.
During that iteration, applications have read-only access.
The actual execution time of the last iteration might exceed the specified
MAXRO value.
The ALTER UTILITY command can change the value of MAXRO.
The default is the RETRY_DELAY default value.
integer integer is the number of seconds. Specifying a small positive value
reduces the length of the period of read-only access, but it might
increase the elapsed time for REORG to complete. If you specify a
huge positive value, the second iteration of log processing is probably
the last iteration. The default is 300 seconds.
DEFER
Specifies that the iterations of log processing with read-write access can
continue indefinitely. REORG never begins the final iteration with
read-only access, unless you change the MAXRO value by using the
ALTER UTILITY command.
If you specify DEFER, you should also specify LONGLOG
CONTINUE.
If you specify DEFER, and DB2 determines that the actual time for an
iteration and the estimated time for the next iteration are both less than
5 seconds, DB2 adds a 5-second pause to the next iteration. This pause
reduces consumption of processor time. The first time this situation
occurs for a given execution of REORG, DB2 sends message DSNU362I
to the console. The message states that the number of log records that
must be processed is small and that the pause occurs. To change the
MAXRO value and thus cause REORG to finish, execute the ALTER
UTILITY command. DB2 adds the pause whenever the situation
occurs; however, DB2 sends the message only if 30 minutes have
elapsed since the last message was sent for a given execution of
REORG.
DRAIN
Specifies drain behavior at the end of the log phase after the MAXRO
threshold is reached and when the last iteration of the log is to be applied.
WRITERS
Specifies the current default action, in which DB2 drains only the
writers during the log phase after the MAXRO threshold is reached
and subsequently issues DRAIN ALL on entering the switch phase.
ALL Specifies that DB2 is to drain all readers and writers during the log
phase, after the MAXRO threshold is reached.
Consider specifying DRAIN ALL if the following conditions are both
true:
v SQL update activity is high during the log phase.
v The default behavior results in a large number of -911 SQL error
messages.
LONGLOG
Specifies the action that DB2 is to perform, after sending a message to the
console, if the number of records that the next iteration of log process is to
process is not sufficiently lower than the number that the previous iterations
processed. This situation means that REORG INDEX is not reading the
application log quickly enough to keep pace with the writing of the application
log.
CONTINUE
Specifies that until the time on the JOB statement expires, DB2 is to
continue performing reorganization, including iterations of log
processing, if the estimated time to perform an iteration exceeds the
time that is specified with MAXRO.
A value of DEFER for MAXRO and a value of CONTINUE for
LONGLOG together mean that REORG INDEX is to continue allowing
access to the original copy of the area that is being reorganized and
does not switch to the shadow copy. The user can execute the ALTER
UTILITY command with a large value for MAXRO when the switching
is desired.
The default is CONTINUE.
TERM Specifies that DB2 is to terminate reorganization after the delay
specified by the DELAY parameter.
DRAIN
Specifies that DB2 is to drain the write claim class after the delay that
is specified by the DELAY parameter. This action forces the final
iteration of log processing to occur.
DELAY integer
Specifies the minimum interval between the time that REORG sends the
LONGLOG message to the console and the time that REORG performs the
action that is specified by the LONGLOG parameter.
integer is the number of seconds. The default is 1200.
TIMEOUT
Specifies the action that is to be taken if the REORG INDEX utility gets a
time-out condition while trying to drain objects in either the log or switch
phases.
TERM
Indicates that DB2 is to behave as follows if you specify the TERM option
and a time out condition occurs:
You cannot use UNLOAD PAUSE if you specify the LIST option.
ONLY Specifies that, after the data has been unloaded, the utility job ends
and the status in SYSIBM.SYSUTIL that corresponds to this utility ID is
removed.
STATISTICS
Specifies that statistics for the index are to be collected; the statistics are either
reported or stored in the DB2 catalog. You cannot collect inline statistics for
indexes on the catalog and directory tables.
Restriction:
v If you specify STATISTICS for encrypted data, DB2 might not provide useful
information on this data.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that only the catalog table columns that provide statistics that
are used for access path selection are to be updated.
SPACE
Indicates that only the catalog table columns that provide statistics to
help the database administrator to assess the status of a particular table
space or index are to be updated.
NONE
Indicates that catalog tables are not to be updated with the collected
statistics. This option is valid only when REPORT YES is specified.
HISTORY
Indicates that all catalog table inserts or updates to the catalog history tables
are to be recorded.
The default is supplied by the specified value in STATISTICS HISTORY on
panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that only the catalog history table columns that provide
statistics used for access path selection are to be updated.
SPACE
Indicates that only space-related catalog statistics are to be updated in
catalog history tables.
NONE
Indicates that catalog history tables are not to be updated with the
collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even when some parts are empty. This option enables
the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some parts might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all parts.
If data is not available for all parts and if the installation value for STATISTICS
ROLLUP on panel DSNTIPO is set to NO, message DSNU623I is issued.
WORKDDN(ddname)
ddname specifies the DD statement for the unload data set.
ddname
Is the DD name of the temporary work file for build input. The default
is SYSUT1.
The WORKDDN keyword specifies either a DD name or a TEMPLATE
name from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 641.
Data sharing considerations for REORG: You must not execute REORG on an
object if another DB2 subsystem holds retained locks on the object or has
long-running noncommitting applications that use the object. You can use the
DISPLAY GROUP command to determine whether a member’s status is "FAILED."
You can use the DISPLAY DATABASE command with the LOCKS option to
determine if locks are held.
CHECK-pending status: You cannot reorganize an index when the data is in the
CHECK-pending status. See Chapter 8, “CHECK DATA,” on page 61 for more
information about resetting the CHECK-pending status.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
data set. Otherwise, DFSORT dynamically allocates the temporary data set.
| 3. It is recommended that you use dynamic allocation by specifying SORTDEVT in the
| utility statement because dynamic allocation reduces the maintenance required of the
| utility job JCL.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Index Object to be reorganized.
Calculating the size of the work data sets: When reorganizing an index space, you
need a non-DB2 sequential work data set. That data set is identified by the DD
statement that is named in the WORKDDN option. During the UNLOAD phase,
the index keys and the data pointers are unloaded to the work data set. This data
set is used to build the index. It is required only during the execution of REORG.
Use the following formula to calculate the approximate size (in bytes) of the
WORKDDN data set SYSUT1:
size = number of keys x (key length + 8)
where
number of keys = #tablerows
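As an illustrative calculation with assumed values, an index with 1,000,000 keys
(table rows) and a key length of 12 bytes needs approximately the following
amount of space in the WORKDDN data set:
size = 1,000,000 x (12 + 8) = 20,000,000 bytes (approximately 20 MB)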
Calculating the size of the sort work data sets: To calculate the approximate size
(in bytes) of the ST01WKnn data set, use the following formula:
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
For user-managed data sets, you must preallocate the shadow data sets before you
execute REORG INDEX with SHRLEVEL REFERENCE or SHRLEVEL CHANGE. If
an index or partitioned index resides in DB2-managed data sets and shadow data
sets do not already exist when you execute REORG INDEX, DB2 creates the
shadow data sets. At the end of REORG processing, the DB2-managed shadow
data sets are deleted. You can create the shadows ahead of time for DB2-managed
data sets.
Shadow data set names: Each shadow data set must have the following name:
| catname.DSNDBx.dbname.psname.y000z.Lnnn
To determine the names of existing shadow data sets, execute one of the following
queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
SELECT DBNAME, TSNAME, IPREFIX
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
SELECT DBNAME, IXNAME, IPREFIX
FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
Defining shadow data sets: Consider the following actions when you preallocate
the data sets:
v Allocate the shadow data sets according to the rules for user-managed data sets.
v Define the shadow data sets as LINEAR.
v Use SHAREOPTIONS(3,3).
v Define the shadow data sets as EA-enabled if the original table space or index
space is EA-enabled.
v Allocate the shadow data sets on the volumes that are defined in the storage
group for the original table space or index space.
If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
the SECQTY value for the table space or index space.
Recommendation: Use the MODEL option, which causes the new shadow data set
to be created like the original data set. This method is shown in the following
example:
DEFINE CLUSTER +
(NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
DATA +
(NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’) )
Creating shadow data sets for indexes: When you preallocate data sets for indexes,
create the shadow data sets as follows:
v Create shadow data sets for the partition of the table space and the
corresponding partition in each partitioning index and data-partitioned
secondary index.
v Create a shadow data set for logical partitions of nonpartitioned secondary
indexes.
Use the same naming scheme for these index data sets as you use for other data
sets that are associated with the base index, except use J0001 instead of I0001. For
more information about this naming scheme, see the information about the shadow
data set naming convention at the beginning of this section.
Estimating the size of shadow data sets: If you do not change the value of
FREEPAGE or PCTFREE, the amount of space that is required for a shadow data
set is approximately comparable to the amount of space that is required for the
original data set. For more information about calculating the size of data sets, see
“Data sets that REORG INDEX uses” on page 433.
Use the following query to identify user-created indexes and DB2 catalog indexes
that you should consider reorganizing with the REORG INDEX utility:
EXEC SQL
SELECT IXNAME, IXCREATOR
FROM SYSIBM.SYSINDEXPART
WHERE LEAFDIST > 200
ENDEXEC
After you run RUNSTATS, issuing the following SQL statement provides the
average distance (multiplied by 100) between successive leaf pages during
sequential access of the ZZZ index.
EXEC SQL
SELECT LEAFDIST
FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR = 'index_creator_name'
AND IXNAME = 'index_name'
ENDEXEC
For specific REORG threshold numbers, see DB2 Performance Monitoring and Tuning
Guide.
You can determine when to run REORG for indexes by using the LEAFDISTLIMIT
option. If you specify the REPORTONLY option, REORG produces a report that
indicates whether a REORG is recommended; a REORG is not performed.
When you specify the LEAFDISTLIMIT option with the REPORTONLY option,
REORG produces a report with one of the following return codes:
1 No limit met; no REORG performed or recommended.
2 REORG performed or recommended.
Alternatively, information from the SYSINDEXPART catalog table can tell you
which indexes qualify for reorganization.
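For example, a report-only statement such as the following sketch (the index
name DSN8910.XEMP1 is a placeholder) recommends, but does not perform, a
reorganization when LEAFDIST exceeds 200:
REORG INDEX DSN8910.XEMP1
  LEAFDISTLIMIT 200 REPORTONLY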
If REORG INDEX is not reading the application log quickly enough to keep pace
with the writing of the application log, REORG sends message DSNU377I to the
console. DB2 continues log processing for the length of time that is specified
by DELAY and then performs the action specified by LONGLOG.
Operator actions: LONGLOG specifies the action that DB2 is to perform if log
processing is not occurring quickly enough. See “Option descriptions” on page 423
for a description of the LONGLOG options. If the operator does not respond to the
console message DSNU377I, the LONGLOG option automatically goes into effect.
You can take one of the following actions:
v Execute the START DATABASE(db) SPACENAM(ts)... ACCESS(RO) command
and the QUIESCE utility to drain the write claim class. DB2 performs the last
iteration, if MAXRO is not DEFER. After the QUIESCE, you should also execute
the ALTER UTILITY command, even if you do not change any REORG
parameters.
v Execute the START DATABASE(db) SPACENAM(ts)... ACCESS(RO) command
and the QUIESCE utility to drain the write claim class. Then, after
reorganization has made some progress, execute the START DATABASE(db)
SPACENAM(ts)... ACCESS(RW) command. This action increases the likelihood
that log processing can improve. After the QUIESCE, you should also execute
the ALTER UTILITY command, even if you do not change any REORG
parameters.
v Execute the ALTER UTILITY command to change the value of MAXRO.
Changing it to a huge positive value, such as 9999999, causes the next iteration
to be the last iteration; see the sample command after this list.
v Execute the ALTER UTILITY command to change the value of LONGLOG.
v Execute the TERM UTILITY command to terminate reorganization.
v Adjust the amount of buffer space that is allocated to reorganization and to
applications. This adjustment can increase the likelihood that log processing
improves. After adjusting the space, you should also execute the ALTER UTILITY
command, even if you do not change any REORG parameters.
v Adjust the scheduling priorities of reorganization and applications. This
adjustment can increase the likelihood that log processing improves. After
adjusting the priorities, you should also execute the ALTER UTILITY command,
even if you do not change any REORG parameters.
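The following sketch shows the general form of such an ALTER UTILITY command;
the utility ID REORGJOB is a placeholder, and you should verify the exact
command syntax in DB2 Command Reference:
-ALTER UTILITY(REORGJOB) REORG MAXRO 9999999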
DB2 does not take the action specified in the LONGLOG phrase if any one of these
events occurs before the delay expires:
v An ALTER UTILITY command is issued.
v A TERM UTILITY command is issued.
v DB2 estimates that the time to perform the next iteration is likely to be less than
or equal to the time specified on the MAXRO keyword.
v REORG terminates for any reason (including the deadline).
For REORG with SHRLEVEL REFERENCE or CHANGE, you can use the ALTER
STOGROUP command to change the characteristics of a DB2-managed data set.
You can effectively change the characteristics of a user-managed data set by
specifying the desired new characteristics when creating the shadow data set; see
“Shadow data sets” on page 435 for more information about shadow data sets. In
particular, placing the original and shadow data sets on different disk volumes
might reduce contention and thus improve the performance of REORG and the
performance of applications during REORG execution.
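For example, a statement like the following sketch adds a volume to a storage
group so that subsequent shadow data sets can be allocated on it; the storage
group and volume names are placeholders:
ALTER STOGROUP DSN8G910
  ADD VOLUMES (VOL002);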
The SYSIBM.SYSUTIL record for the REORG INDEX utility remains in "stopped"
status until REORG is restarted or terminated.
While REORG is interrupted by PAUSE, you can re-define the table space
attributes for user defined table spaces. PAUSE is not required for
STOGROUP-defined table spaces. Attribute changes are done automatically by a
REORG following an ALTER INDEX.
Improving performance
To improve REORG performance, run REORG concurrently on separate partitions
of a partitioned index space. The processor time for running REORG INDEX on
partitions of a partitioned index is approximately the same as the time for running
a single REORG index job. The elapsed time is a fraction of the time for running a
single REORG job on the entire index.
By specifying a short delay time (less than the system timeout value, IRLMRWT),
you can reduce the impact on applications by reducing time-outs. You can use the
RETRY option to give the online REORG INDEX utility chances to complete
successfully. If you do not want to use RETRY processing, you can still use
DRAIN_WAIT to set a specific and more consistent limit on the length of drains.
RETRY allows an online REORG that is unable to drain the objects it requires to
try again after a set period (RETRY_DELAY). If the drain fails in the SWITCH
phase, the objects remain in their original state (read-only mode for SHRLEVEL
REFERENCE or read-write mode for SHRLEVEL CHANGE). Likewise, objects will
remain in their original state if the drain fails in the LOG phase.
Because application SQL statements can queue behind any unsuccessful drain that
the online REORG has tried, define a reasonable delay before you retry to allow
this work to complete; the default is 5 minutes.
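As a sketch of this approach (the index name is a placeholder; choose values
that fit your IRLMRWT setting and workload), the following statement limits each
drain attempt to 15 seconds, retries up to 6 times, and waits 300 seconds
between retries:
REORG INDEX DSN8910.XEMP1
  SHRLEVEL CHANGE
  DRAIN_WAIT 15 RETRY 6 RETRY_DELAY 300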
When the default DRAIN WRITERS is used with SHRLEVEL CHANGE and
RETRY, multiple read-only log iterations can occur. Because online REORG might
have to do more work when RETRY is specified, multiple or extended periods of
restricted access might occur. Applications that run with REORG must perform
frequent commits. During the interval between retries, the utility is still active;
consequently, other utility activity against the table space and indexes is restricted.
Recommendation: Run online REORG during light periods of activity on the table
space or index.
If you terminate REORG with the TERM UTILITY command during the build
phase, the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the index is left in RECOVER-pending status. After you
recover the index, rerun the REORG job.
v For SHRLEVEL REFERENCE or CHANGE, the index keys are reloaded into a
shadow index, so the original index has not been affected by REORG. You can
rerun the job.
If you terminate REORG with the TERM UTILITY command during the log phase,
the index keys are reloaded into a shadow index, so the original index has not
been affected by REORG. You can rerun the job.
If you terminate REORG with the TERM UTILITY command during the switch
phase, all data sets that were renamed to their shadow counterparts are renamed
back, so the objects are left in their original state. You can rerun the job. If a
problem occurs in renaming to the original data sets, the objects are left in
RECOVER-pending status. You must recover the index.
The REORG-pending status is not reset until the UTILTERM execution phase. If the
REORG INDEX utility abnormally terminates or is terminated, the objects are left
in RECOVER-pending status. See Appendix C, “Advisory or restrictive states,” on
page 895 for information about resetting either status.
Table 69 lists any restrictive states that are set based on the phase in which REORG
INDEX terminated.
Table 69. Restrictive states set based on the phase in which REORG INDEX terminated
Phase Effect on restrictive status
UNLOAD No effect.
BUILD Sets REBUILD-pending (RBDP) status at the beginning of the build
phase, and resets RBDP at the end of the phase. SHRLEVEL NONE
places an index that was defined with the COPY YES attribute in
RECOVER pending (RECP) status.
LOG No effect.
SWITCH Under certain conditions, if TERM UTILITY is issued, it must complete
successfully; otherwise, objects might be placed in RECP status or RBDP
status. For SHRLEVEL REFERENCE or CHANGE, sets the RECP status
if the index was defined with the COPY YES attribute at the beginning
of the switch phase, and resets RECP at the end of the phase. If the
index was defined with COPY NO, this phase sets the index in RBDP
status at the beginning of the phase, and resets RBDP at the end of the
phase.
If you restart REORG in one of the phases that is outlined in the table, it
re-executes from the beginning of that phase. DB2 always uses RESTART(PHASE) by
default unless you restart the job in the UNLOAD phase. In this case, DB2 uses
RESTART(CURRENT) by default.
For each phase of REORG and for each type of REORG INDEX (with SHRLEVEL
NONE, with SHRLEVEL REFERENCE, and with SHRLEVEL CHANGE), the table
indicates the types of restart that are allowed (CURRENT and PHASE). None
indicates that no restart is allowed. The ″Data sets required″ column lists the data
sets that must exist to perform the specified type of restart in the specified phase.
Table 70. REORG INDEX utility restart information
Phase     Restart allowed for     Restart allowed for     Restart allowed for    Data sets required      Notes
          SHRLEVEL NONE           SHRLEVEL REFERENCE      SHRLEVEL CHANGE
UNLOAD    CURRENT, PHASE          CURRENT, PHASE          None                   SYSUT1
BUILD     CURRENT, PHASE          CURRENT, PHASE          None                   SYSUT1                  1
LOG       Phase does not occur    Phase does not occur    None                   None
SWITCH    Phase does not occur    CURRENT, PHASE          CURRENT, PHASE         originals and shadows   1
Notes:
1. You can restart the utility with either RESTART or RESTART(PHASE). However, because this phase does not take
checkpoints, RESTART always re-executes from the beginning of the phase.
If you restart a REORG STATISTICS job that was stopped in the BUILD phase by
using RESTART CURRENT, inline statistics collection does not occur. To update
catalog statistics, run the RUNSTATS utility after the restarted job completes.
Restarting a REORG STATISTICS job with RESTART(PHASE) is conditional after
executing UNLOAD PAUSE. To determine if catalog table statistics are to be
updated when you restart a REORG STATISTICS job, see Table 71 on page 443.
This table lists whether or not statistics are updated based on the execution phase
and whether the job is restarted with RESTART(CURRENT) or RESTART(PHASE).
Table 71. Whether statistics are updated when REORG INDEX STATISTICS jobs are
restarted in certain phases
Phase RESTART CURRENT RESTART PHASE
UTILINIT No Yes
UNLOAD No Yes
BUILD No Yes
For instructions on restarting a utility job, see Chapter 3, “Invoking DB2 online
utilities,” on page 17.
Table 72 shows which claim classes REORG INDEX drains and any restrictive state
the utility sets on the target object. The target is an index or index partition.
Table 72. Claim classes of REORG INDEX operations
Phase                            REORG INDEX       REORG INDEX          REORG INDEX
                                 SHRLEVEL NONE     SHRLEVEL REFERENCE   SHRLEVEL CHANGE
UNLOAD                           DW/UTRO           DW/UTRO              CR/UTRW
BUILD                            DA/UTUT           none                 none
Last iteration of LOG (Note 1)   n/a               DA/UTUT              DW/UTRO
SWITCH                           n/a               DA/UTUT              DA/UTUT
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTRO: Utility restrictive state, read only access allowed.
v UTUT: Utility restrictive state, exclusive control.
v none: Any claim, drain, or restrictive state for this object does not change in this phase.
Notes:
1. Applicable if you specified DRAIN ALL.
Table 73 shows which utilities can run concurrently with REORG INDEX on the
same target object. The target object can be an index space or a partition. If
compatibility depends on particular options of a utility, that is also shown. REORG
INDEX does not set a utility restrictive state if the target object is an index on
DSNDB01.SYSUTILX.
Table 73. Compatibility of REORG INDEX with other utilities
Action         Compatible with REORG INDEX SHRLEVEL NONE, REFERENCE, or CHANGE
CHECK DATA     No
When reorganizing an index, REORG leaves free pages and free space on each
page in accordance with the current values of the FREEPAGE and PCTFREE
parameters. (You can set those values by using the CREATE INDEX or ALTER
INDEX statement.) REORG leaves one free page after reaching the FREEPAGE
limit for each table in the index space.
When you run REORG INDEX, the utility updates this range of used version
numbers for indexes that are defined with the COPY NO attribute. REORG INDEX
sets the OLDEST_VERSION column to the current version number, which indicates
that only one version is active; DB2 can then reuse all of the other version
numbers.
Recycling of version numbers is required when all of the version numbers are
being used. All version numbers are being used when one of the following
situations is true:
v The value in the CURRENT_VERSION column is one less than the value in the
OLDEST_VERSION column.
v The value in the CURRENT_VERSION column is 15 and the value in the
OLDEST_VERSION column is 0 or 1.
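Assuming that the OLDEST_VERSION and CURRENT_VERSION columns of
SYSIBM.SYSINDEXES are available on your system, a query similar to the following
sketch shows the range of version numbers that is in use for an index:
SELECT NAME, CREATOR, OLDEST_VERSION, CURRENT_VERSION
FROM SYSIBM.SYSINDEXES
WHERE CREATOR = 'index_creator_name'
AND NAME = 'index_name';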
You can also run LOAD REPLACE, REBUILD INDEX, or REORG TABLESPACE to
recycle version numbers for indexes that are defined with the COPY NO attribute.
To recycle version numbers for indexes that are defined with the COPY YES
attribute or for table spaces, run MODIFY RECOVERY.
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
Example 3: Updating access path statistics in the catalog and catalog history
tables while reorganizing an index. The following control statement specifies that
while reorganizing index IU0E0801, REORG INDEX is to also collect statistics,
collect all of the distinct values in the key column combinations, and update access
path statistics in the catalog and catalog history tables. The utility is also to send
any output, including space and access path statistics, to SYSPRINT.
REORG INDEX IUOE0801
STATISTICS
KEYCARD
REPORT YES
UPDATE ACCESSPATH
HISTORY ACCESSPATH
The REORG INDEX statement specifies that the utility is to reorganize the indexes
that are included in the REORG_INDX list. The SHRLEVEL CHANGE option
indicates that during this processing, read and write access is allowed on the areas
that are being reorganized, with the exception of a 100-second period during the
last iteration of log processing. During this time, which is specified by the MAXRO
option, applications have read-only access. The WORKDDN option indicates that
REORG INDEX is to use the data set that is defined by the SUT1 template. If the
SWITCH phase does not begin by the deadline that is specified on the DEADLINE
option, processing terminates.
Figure 75. Example statements for job that reorganizes a list of indexes
You can determine when to run REORG for non-LOB table spaces by using the
OFFPOSLIMIT or INDREFLIMIT catalog query options. If you specify the
REPORTONLY option, REORG produces a report that indicates whether a REORG
is recommended without actually performing the REORG. These options are not
applicable and are disregarded if the target object is a directory table space.
Run the REORG TABLESPACE utility on a LOB table space to help increase the
effectiveness of prefetch. For a LOB table space, REORG TABLESPACE performs
these actions:
v Removes imbedded free space
v Attempts to make LOB pages contiguous
| If you specify SHRLEVEL REFERENCE, a REORG of a LOB table space will make
| LOB pages contiguous, remove imbedded free space, and reclaim physical space if
| applicable.
Do not execute REORG on an object if another DB2 subsystem holds retained locks on the
object or has long-running noncommitting applications that use the object. You can
use the DISPLAY GROUP command to determine whether a member’s status is
failed. You can use the DISPLAY DATABASE command with the LOCKS option to
determine if locks are held.
Output: If the table space or partition has the COMPRESS YES attribute, the data is
compressed when it is reloaded. If you specify the KEEPDICTIONARY option of
REORG, the current dictionary is used; otherwise a new dictionary is built.
You can execute the REORG TABLESPACE utility on the table spaces in the DB2
catalog database (DSNDB06) and on some table spaces in the directory database
(DSNDB01). It cannot be executed on any table space in the DSNDB07 database.
Table 75 summarizes the results of REORG TABLESPACE according to the type of
REORG that is specified.
Table 75. Summary of REORG TABLESPACE output
Type of REORG specified Results
REORG TABLESPACE Reorganizes all data and all indexes.
Authorization required: To execute this utility on a user table space, you must use
a privilege set that includes one of the following authorities:
v REORG privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL authority
v SYSADM authority
To execute this utility on a table space in the catalog or directory, you must use a
privilege set that includes one of the following authorities:
v REORG privilege for the DSNDB06 (catalog) database
v DBADM or DBCTRL authority for the DSNDB06 (catalog) database
v Installation SYSOPR authority
v SYSCTRL authority
v SYSADM or Installation SYSADM authority
v STATS privilege for the database is required if STATISTICS keyword is specified.
To run REORG TABLESPACE STATISTICS REPORT YES, you must use a privilege
set that includes the SELECT privilege on the catalog tables and tables for which
statistics are to be gathered.
If you use RACF access control with multilevel security and REORG TABLESPACE
is to process a table space that contains a table that has multilevel security with
row-level granularity, you must be identified to RACF and have an accessible valid
security label. You must also meet the following authorization requirements:
v For REORG statements that include the UNLOAD EXTERNAL option, each row
is unloaded only if your security label dominates the data security label. If your
security label does not dominate the data security label, the row is not unloaded,
but DB2 does not issue an error message.
v For REORG statements that include the DISCARD option, qualifying rows are
discarded only if one of the following situations is true:
– Write-down rules are in effect, you have write-down privilege, and your
security label dominates the data’s security label.
– Write-down rules are not in effect and your security label dominates the
data’s security label.
– Your security label is equivalent to the data security label.
For more information about multilevel security and security labels, see Part 3 of
DB2 Administration Guide.
You cannot restart REORG TABLESPACE on a LOB table space in the REORGLOB
phase. Before executing REORG TABLESPACE SHRLEVEL NONE on a LOB table
space that is defined with LOG NO, you should take a full image copy to ensure
recoverability. For SHRLEVEL REFERENCE, an inline image copy is required to
ensure recoverability.
Syntax diagram
The railroad syntax diagram for the REORG TABLESPACE control statement, together
with its copy-spec, deadline-spec, drain-spec, table-change-spec,
labeled-duration-expression, statistics-spec, correlation-stats-spec,
FROM-TABLE-spec, selection-condition-spec, and predicate fragments, cannot be
reproduced accurately in this text format. The keywords that the diagram shows
are described under "Option descriptions."
The notes that accompany the syntax diagram state the following rules and
defaults:
v You cannot use UNLOAD PAUSE with the LIST option.
v COPYDDN(SYSCOPY) is not the default if you specify SHRLEVEL NONE and no
partitions are in REORG-pending status.
v The default for DRAIN_WAIT is the value of the IRLMRWT subsystem parameter.
v The default for RETRY is the value of the UTIMOUT subsystem parameter.
v The default for RETRY_DELAY is the smaller of the following two values:
– DRAIN_WAIT value × RETRY value
– DRAIN_WAIT value × 10
v The default for MAXRO is the RETRY_DELAY default value.
v The following forms of the comparison operators are also supported in basic
and quantified predicates: !=, !<, and !>. For details, see “comparison
operators” on page 472.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database to which it belongs) that
is to be reorganized.
If you reorganize a table space, its indexes are also reorganized.
database-name
Is the name of the database to which the table space belongs. The
name cannot be DSNDB07. The default is DSNDB04.
table-space-name
Is the name of the table space that is to be reorganized. The name
cannot be SYSUTILX if the specified database name is DSNDB01.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. The utility
allows one LIST keyword for each control statement of REORG TABLESPACE.
The list must contain only table spaces.
Do not specify FROM TABLE, STATISTICS TABLE table-name, or STATISTICS
INDEX index-name with REORG TABLESPACE LIST. If you want to collect
inline statistics for a list of table spaces, specify STATISTICS TABLE (ALL). If
you want to collect inline statistics for a list of indexes, specify STATISTICS
INDEX (ALL). Do not specify PART with LIST.
REORG TABLESPACE is invoked once for each item in the list. This utility will
only process clone data if the CLONE keyword is specified. The use of
CLONED YES on the LISTDEF statement is not sufficient.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 185.
| CLONE
| Indicates that REORG TABLESPACE is to reorganize only clone tables from the
| specified table spaces. This utility will only process clone data if the CLONE
| keyword is specified. The use of CLONED YES on the LISTDEF statement is
| not sufficient. Base tables in the specified table spaces are not reorganized. If
| you specify CLONE, you cannot specify STATISTICS. Statistics are not
| collected for clone tables.
REUSE
When used with SHRLEVEL NONE, specifies that REORG is to logically reset
and reuse DB2-managed data sets without deleting and redefining them. If you
do not specify REUSE and SHRLEVEL NONE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If a data set has multiple extents, the extents are not released if you use the
REUSE parameter.
REUSE does not apply if you also specify SHRLEVEL REFERENCE or
CHANGE.
SCOPE
Indicates the scope of the reorganization of the specified table space or of one
or more specified partitions.
ALL
Indicates that you want the specified table space or one or more partitions
to be reorganized. The default is ALL.
PENDING
Indicates that you want the specified table space or one or more partitions
to be reorganized only if they are in REORG-pending (REORP or AREO*)
status.
PART integer
PART integer1:integer2
Identifies a partition range that is to be reorganized. You can reorganize a
single partition of a partitioned table space, or a range of partitions within a
partitioned table space. integer must be in the range from 1 to the number of
partitions that are defined for the table space or partitioning index. The
maximum is 4096.
integer Designates a single partition.
integer1:integer2
Designates a range of existing table space partitions from
integer1 through integer2. integer2 must be greater than integer1.
If you omit the PART keyword, the entire table space is reorganized.
If you specify the PART keyword for a LOB table space, DB2 issues an error
message, and utility processing terminates with return code 8.
If you specify a partition range and the high or low partitions in the list are in
a REORG-pending state, the adjacent partition that is outside the specified
range must not be in REORG-pending state; otherwise, the utility terminates
with an error.
REBALANCE
Specifies that REORG TABLESPACE is to set new partition boundaries so that
pages are evenly distributed across the reorganized partitions. If the columns
that are used in defining the partition boundaries have many duplicate values
within the data rows, even balancing is not always possible. Specify
REBALANCE for more than one partition; if you specify a single partition for
rebalancing, REORG TABLESPACE ignores the specification.
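For illustration, a statement similar to the following sketch rebalances the
rows across partitions 1 through 4; the table space name DSN8D91A.DSN8S91E is a
placeholder. Note that an inline image copy data set (SYSCOPY by default) is
required when you specify REBALANCE:
REORG TABLESPACE DSN8D91A.DSN8S91E
  PART 1:4
  REBALANCE
  SHRLEVEL REFERENCE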
SORTDATA
YES
Specifies that the data is to be unloaded by a table space scan, and sorted
in clustering order. The default is SORTDATA YES unless you specify
UNLOAD ONLY or UNLOAD EXTERNAL. If you specify one of these
options, the default is SORTDATA NO.
NO
Specifies that the data is to be unloaded in the order of the clustering
index. SORTDATA NO cannot be specified with SHRLEVEL CHANGE.
Specify SORTDATA NO if one of the following conditions is true:
v The data is in or near perfect clustering order, and the REORG utility is
used to reclaim space from dropped tables.
v The data is very large, and an insufficient amount of disk space is
available for sorting.
SORTDATA YES is ignored for some of the catalog and directory table spaces;
see “Reorganizing the catalog and directory” on page 498.
NOSYSREC
Specifies that the output of sorting (if a clustering index exists) is the input to
reloading, without the REORG TABLESPACE utility using an unload data set.
You can specify this option only if the REORG TABLESPACE job includes
SHRLEVEL REFERENCE or SHRLEVEL NONE, and only if you do not specify
UNLOAD PAUSE or UNLOAD ONLY. See “Omitting the output data set” on
page 497 for additional information about using this option.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the primary (ddname1) and backup (ddname2)
copy data sets for the image copy.
ddname1 and ddname2 are the DD names.
The default is SYSCOPY for the primary copy. A full image copy data set is
created when REORG executes. This copy is called an inline copy. (For more
information about inline copies, see “Using inline copy with REORG
TABLESPACE” on page 503.) The name of the data set is listed as a row in the
SYSIBM.SYSCOPY catalog table with ICTYPE=’R’ (as it is for the COPY
SHRLEVEL REFERENCE option). The table space does not remain in
COPY-pending status regardless of which LOG option you specify.
If you specify SHRLEVEL NONE (explicitly or by default) for REORG, and
COPYDDN is not specified, an image copy is not created at the local site.
COPYDDN(SYSCOPY) is assumed, and a DD statement for SYSCOPY is
required if any of the following conditions is true:
v You specify REORG SHRLEVEL REFERENCE or CHANGE, and you do not
specify COPYDDN.
v A table space or partition is in REORG-pending (REORP) status.
v You specify REBALANCE.
RECOVERYDDN (ddname3,ddname4)
Specifies the DD statements for the primary (ddname3) and backup (ddname4)
copy data sets for the image copy at the recovery site.
ddname3 and ddname4 are the DD names.
You cannot have duplicate image copy data sets. The same rules apply for
RECOVERYDDN as for COPYDDN.
The RECOVERYDDN keyword specifies either a DD name or a TEMPLATE
name specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the current job
step and a TEMPLATE name, the utility uses the DD name. For more
information about TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on
page 641.
| REORG SHRLEVEL REFERENCE of a LOB table space supports inline copies,
| but REORG SHRLEVEL NONE does not.
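As a sketch, the following statement requests local primary and backup inline
copies and corresponding recovery-site copies; the table space name is a
placeholder, and COPY1, COPY2, RCPY1, and RCPY2 are placeholder DD names or
TEMPLATE names that you must define elsewhere in the job:
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL REFERENCE
  COPYDDN(COPY1,COPY2)
  RECOVERYDDN(RCPY1,RCPY2)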
SHRLEVEL
Specifies the method that is to be used for the reorganization. The parameter
following SHRLEVEL indicates the type of access that is to be allowed during
the RELOAD phase of REORG.
NONE
Specifies that reorganization is to operate as follows:
v Unloading from the area that is being reorganized (while
applications can read but cannot write to the area)
v Reloading into that area (while applications have no access), and
then allowing read-write access again
The default is NONE.
If you specify NONE (explicitly or by default), you cannot specify the
following parameters:
v MAPPINGTABLE
v MAXRO
v LONGLOG
v DELAY
v DEADLINE
v DRAIN_WAIT
v RETRY
v RETRY_DELAY
To determine which data sets are required when you execute REORG
SHRLEVEL REFERENCE, see “Data sets that REORG TABLESPACE
uses” on page 486.
If you specify CHANGE, you must create a mapping table and specify
the name of the mapping table with the MAPPINGTABLE option.
Restriction:
v You cannot specify SHRLEVEL CHANGE for a LOB table space,
catalog, or directory table space with links.
| v You cannot specify SHRLEVEL CHANGE if the table space has the
| NOT LOGGED attribute.
DEADLINE
Specifies the deadline for the SWITCH phase to begin. If DB2 estimates that
the SWITCH phase will not begin by the deadline, DB2 issues the messages
that the DISPLAY UTILITY command would issue and then terminates the
reorganization.
If REORG SHRLEVEL REFERENCE or SHRLEVEL CHANGE terminates
because of a DEADLINE specification, DB2 issues message DSNU374I with
reason code 2 but does not set a restrictive status.
NONE
Specifies that no deadline exists by which the SWITCH phase of log
processing must begin. The default is NONE.
timestamp
Specifies the deadline for the SWITCH phase of log processing to
begin. This deadline must not have already occurred when REORG is
executed.
labeled-duration-expression
Calculates the deadline for the SWITCH phase of log processing to
begin. The calculation is based on either CURRENT TIMESTAMP or
CURRENT DATE. You can add or subtract one or more constant values
to specify the deadline. This deadline must not have already occurred
when REORG is executed.
CURRENT_DATE
Specifies that the deadline is to be calculated based on the
CURRENT DATE.
CURRENT_TIMESTAMP
Specifies that the deadline is to be calculated based on the
CURRENT TIMESTAMP.
constant
Indicates a unit of time and is followed by one of the seven
duration keywords: YEARS, MONTHS, DAYS, HOURS, MINUTES,
SECONDS, or MICROSECONDS. The singular form of these words
is also acceptable: YEAR, MONTH, DAY, HOUR, MINUTE,
SECOND, MICROSECOND.
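For example, a labeled-duration expression can set a deadline relative to the
time that the utility statement is executed, as in the following sketch (the
table space name is a placeholder):
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL REFERENCE
  DEADLINE CURRENT TIMESTAMP + 2 HOURS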
DRAIN_WAIT integer
| Specifies the number of seconds that the utility waits when draining the table
| space or index. The specified time is the aggregate time for objects that are to
| be reorganized. This value overrides the values that are specified by IRLMRWT
| and UTIMOUT. Valid values for integer are from 0 to 1800. If the keyword is
| omitted or if a value of 0 is specified, the utility uses the value of the lock
| timeout system parameter IRLMRWT.
RETRY integer
Specifies the maximum number of retries that REORG is to attempt. Valid
values for integer are from 0 to 255. If the keyword is omitted, the utility does
not attempt a retry.
Specifying RETRY can lead to increased processing costs and can result in
multiple or extended periods of read-only access. For example, when you
specify RETRY and SHRLEVEL CHANGE, the size of the copy that is taken by
REORG might increase. The default is the value of the UTIMOUT subsystem
parameter.
RETRY_DELAY integer
Specifies the minimum duration, in seconds, between retries. Valid values
for integer are from 1 to 1800.
| If you do not specify RETRY_DELAY, REORG TABLESPACE uses the
| smaller of the following two values:
| v DRAIN_WAIT value × RETRY value
| v DRAIN_WAIT value × 10
MAPPINGTABLE table-name
Specifies the name of the mapping table that REORG TABLESPACE is to use to
map between the RIDs of data records in the original copy of the area and the
corresponding RIDs in the shadow copy. This parameter is required if you
specify SHRLEVEL CHANGE, and you must create a mapping table and an
index for it before running REORG TABLESPACE. See “Before running REORG
TABLESPACE” on page 482 for the columns and the index that the mapping
table must include. Enclose the table name in quotation marks if the name
contains a blank.
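As a sketch (the table space name and the mapping table name MYID.MAP_TBL are
placeholders, and the mapping table and its index must already exist):
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL CHANGE
  MAPPINGTABLE MYID.MAP_TBL
  MAXRO 120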
MAXRO integer
Specifies the maximum amount of time for the last iteration of log processing.
During that iteration, applications have read-only access.
The actual execution time of the last iteration might exceed the specified value
for MAXRO.
The ALTER UTILITY command can change the value of MAXRO.
The default is the RETRY_DELAY default value.
integer integer is the number of seconds. Specifying a small positive value
reduces the length of the period of read-only access, but it might
increase the elapsed time for REORG to complete. If you specify a
huge positive value, the second iteration of log processing is probably
the last iteration.
DEFER
Specifies that the iterations of log processing with read-write access can
continue indefinitely. REORG never begins the final iteration with
read-only access, unless you change the MAXRO value with ALTER
UTILITY.
If you specify DEFER, you should also specify LONGLOG
CONTINUE.
If you specify DEFER, and DB2 determines that the actual time for an
iteration and the estimated time for the next iteration are both less than
5 seconds, DB2 adds a 5 second pause to the next iteration. This pause
reduces consumption of processor time. The first time this situation
occurs for a given execution of REORG, DB2 sends message DSNU362I
to the console. The message states that the number of log records that
must be processed is small and that the pause occurs. To change the
MAXRO value and thus cause REORG to finish, execute the ALTER
UTILITY command. DB2 adds the pause whenever the situation
occurs; however, DB2 sends the message only if 30 minutes have
elapsed since the last message was sent for a given execution of
REORG.
DRAIN
Specifies drain behavior at the end of the log phase after the MAXRO
threshold is reached and when the last iteration of the log is to be applied.
WRITERS
Specifies the current default action, in which DB2 drains only the
writers during the log phase after the MAXRO threshold is reached
and subsequently issues DRAIN ALL on entering the switch phase.
ALL Specifies that DB2 is to drain all readers and writers during the log
phase, after the MAXRO threshold is reached.
Consider specifying DRAIN ALL if the following conditions are both
true:
v SQL update activity is high during the log phase.
integer is the value that is to be compared and can range from 0 to 65535. The
default value is 10.
INDREFLIMIT integer
Indicates that the specified value is to be compared to the value that DB2
calculates for the specified partitions in SYSIBM.SYSTABLEPART for the
specified table space. The calculation is computed as follows:
(NEARINDREF + FARINDREF) × 100 / CARDF
integer is the value that is to be compared and can range from 0 to 65535. The
default value is 10.
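To see the value that this comparison uses, you can run a query similar to the
following sketch against SYSIBM.SYSTABLEPART (the database and table space
names are placeholders, and the statistics must be current):
SELECT DBNAME, TSNAME, PARTITION,
       (NEARINDREF + FARINDREF) * 100 / CARDF AS INDREF_PERCENT
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = 'dbname'
AND TSNAME = 'tsname'
AND CARDF > 0;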
REPORTONLY
Specifies that REORG is only to be recommended, not performed. REORG
produces a report with one of the following return codes:
1 No limit met; no REORG is to be performed or recommended.
2 REORG is to be performed or recommended.
UNLOAD
Specifies whether the utility job is to continue processing or end after the data
is unloaded. Unless you specify UNLOAD EXTERNAL, data can be reloaded
only into the same table and table space (as defined in the DB2 catalog) on the
same subsystem. (This does not preclude VSAM redefinition during UNLOAD
PAUSE.)
You must specify UNLOAD ONLY for the data set to be in a format that is
compatible with the FORMAT UNLOAD option of LOAD. However, with
LOAD, you can load the data only into the same object from which it is
unloaded.
This option is valid for non-LOB table spaces only.
You must specify UNLOAD EXTERNAL for the data set to be in a format that
is usable by LOAD without the FORMAT UNLOAD option. With UNLOAD
EXTERNAL, you can load the data into any table with compatible columns in
any table space on any DB2 subsystem.
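For example, a statement similar to the following sketch unloads qualifying
rows in external format and then ends; the table space, table, and column names
are placeholders in the style of the sample database:
REORG TABLESPACE DSN8D91A.DSN8S91E
  UNLOAD EXTERNAL
  FROM TABLE DSN8910.EMP
  WHEN (WORKDEPT = 'D11')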
CONTINUE
Specifies that, after the data has been unloaded, the utility is to continue
processing. An edit routine can be called to decode a previously encoded
data row if an index key requires extraction from that row.
If you specify DISCARD, rows are decompressed and edit routines are
decoded. If you also specify DISCARD to a file, rows are decoded by field
procedure, and the following columns are converted to DB2 external
format:
v SMALLINT
v INTEGER
v FLOAT
v DECIMAL
v TIME
v TIMESTAMP
Otherwise, edit routines or field procedures are bypassed on both the
UNLOAD and RELOAD phases for table spaces. Validation procedures are
not invoked during either phase.
However, you cannot use UNLOAD PAUSE if you specify the LIST option.
ONLY
Specifies that, after the data has been unloaded, the utility job ends and the
status that corresponds to this utility ID is removed from
SYSIBM.SYSUTIL.
If you specify UNLOAD ONLY with REORG TABLESPACE, any edit
routine or field procedure is executed during record retrieval in the unload
phase.
This option is not allowed for any table space in DSNDB01 or DSNDB06.
The DISCARD and WHEN options are not allowed with UNLOAD ONLY.
EXTERNAL
Specifies that, after the data has been unloaded, the utility job is to end
and the status that corresponds to this utility ID is removed.
The UNLOAD utility has more functions. If you specify UNLOAD
EXTERNAL with REORG TABLESPACE, rows are decompressed, edit
routines are decoded, field procedures are decoded, and SMALLINT,
INTEGER, FLOAT, DECIMAL, DATE, TIME, and TIMESTAMP columns
are converted to DB2 external format. Validation procedures are not
invoked.
| Do not specify the EXTERNAL keyword for:
| v Table spaces in DSNDB01 or DSNDB06
| v Base tables with XML columns
| v XML table spaces
Figure 76. Sample LOAD statement generated by REORG TABLESPACE with the NOPAD
keyword
FROM TABLE
Specifies the tables that are to be reorganized. The table space that is specified
in REORG TABLESPACE can store more than one table. All tables are
column-name <> constant     The column is not equal to the constant or labeled duration expression.
column-name > constant      The column is greater than the constant or labeled duration expression.
column-name < constant      The column is less than the constant or labeled duration expression.
column-name >= constant     The column is greater than or equal to the constant or labeled duration expression.
column-name <= constant     The column is less than or equal to the constant or labeled duration expression.
Comparison operators: The following forms of the comparison
operators are also supported in basic and quantified predicates: !=, !<,
and !>, where ! means not. In addition, in code pages 437, 819, and
850, the forms ¬=, ¬<, and ¬> are supported. All these product-specific
forms of the comparison operators are intended only to support
existing REORG statements that use these operators and are not
recommended for use in new REORG statements.
A not sign (¬), or the character that must be used in its place in certain
countries, can cause parsing errors in statements that are passed from
one DBMS to another. The problem occurs if the statement undergoes
character conversion with certain combinations of source and target
CCSIDs. To avoid this problem, substitute an equivalent operator for
any operator that includes a not sign. For example, substitute ’< >’ for
’¬=’, ’<=’ for ’¬>’, and ’>=’ for ’¬<’.
BETWEEN predicate
Indicates whether a given value lies between two other given values
that are specified in ascending order. Each of the predicate’s two forms
(BETWEEN and NOT BETWEEN) has an equivalent search condition,
as shown in Table 76. If relevant, the table also shows any equivalent
predicates.
Table 76. BETWEEN predicates and their equivalent search conditions
Predicate                              Equivalent predicate                    Equivalent search condition
column BETWEEN value1 AND value2       None                                    (column >= value1 AND column <= value2)
column NOT BETWEEN value1 AND value2   NOT(column BETWEEN value1 AND value2)   (column < value1 OR column > value2)
Note: The values can be constants or labeled duration expressions.
For example, the following predicate is true for any row when salary is
greater than or equal to 10 000 and less than or equal to 20 000:
SALARY BETWEEN 10000 AND 20000
labeled-duration-expression
Specifies an expression that begins with the following special register
values:
v CURRENT DATE (CURRENT_DATE is acceptable.)
v CURRENT TIMESTAMP (CURRENT_TIMESTAMP is acceptable.)
Table 77. Effects of adding durations to and subtracting durations from CURRENT
DATE (continued)
Value that is added or subtracted    Effect
Dates                                When a positive date duration is added to a
                                     date, or a negative date duration is
                                     subtracted from a date, the date is
                                     incremented by the specified number of
                                     years, months, and days.
The order in which labeled date durations are added to and subtracted
from dates can affect the results. When you add labeled date durations
to a date, specify them in the order of YEARS + MONTHS + DAYS.
When you subtract labeled date durations from a date, specify them in
the order of DAYS - MONTHS - YEARS. For example, to add one year
and one day to a date, specify the following code:
CURRENT DATE + 1 YEAR + 1 DAY
To subtract one year, one month, and one day from a date, specify the
following code:
CURRENT DATE − 1 DAY − 1 MONTH − 1 YEAR
For example, the following predicate is true for any row with an
employee in department D11, B01, or C01:
WORKDEPT IN (’D11’, ’B01’, ’C01’)
LIKE predicate
Qualifies strings that have a certain pattern. Specify the pattern by
using a string in which the underscore and percent sign characters can have
special meanings: an underscore represents any single character, and a percent
sign represents a string of zero or more arbitrary characters.
The pattern string and the string that is to be tested must be of the
same type; that is, both x and y must be character strings, or both x
and y must be graphic strings. When x and y are graphic strings, a
character is a DBCS character. When x and y are character strings and
x is not mixed data, a character is an SBCS character, and y is
interpreted as SBCS data regardless of its subtype. The rules for
mixed-data patterns are described in “Strings and patterns” on page
476.
Within the pattern, a percent sign (%) or underscore character (_) can
represent the literal occurrence of a percent sign or underscore
character. To have a literal meaning, each character must be preceded
by an escape character.
The ESCAPE clause designates a single character. You can use that
character, and only that character, multiple times within the pattern as
an escape character. When the ESCAPE clause is omitted, no character
serves as an escape character and percent signs and underscores in the
pattern can only be used to represent arbitrary characters; they cannot
represent their literal occurrences.
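For example, the following predicates, which use placeholder column names, are
true for values that begin with the letter J and for values that contain the
string '100%', respectively:
LASTNAME LIKE 'J%'
PRODDESC LIKE '%100!%%' ESCAPE '!'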
NULL predicate
Specifies a test for null values.
If the value of the column is null, the result is true. If the value is not
null, the result is false. If NOT is specified, the result is reversed.
KEEPDICTIONARY
Prevents REORG TABLESPACE from building a new compression dictionary
when unloading the rows. The efficiency of REORG increases with the
KEEPDICTIONARY option for the following reasons:
v The processing cost of building the compression dictionary is eliminated.
v Existing compressed rows do not need to be compressed again.
For information about data compression, see DB2 Performance Monitoring and
Tuning Guide.
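As a sketch (the table space name is a placeholder), the following statement
reorganizes a compressed table space and reuses the existing compression
dictionary:
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL REFERENCE
  KEEPDICTIONARY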
STATISTICS
Specifies that statistics for the table space or associated index, or both, are to be
gathered; the statistics are reported or stored in the DB2 catalog. If statistics are
collected with the default options, only the statistics for the table space are
updated.
If you specify a table space partition or a range of partitions along with the
STATISTICS keyword, DB2 collects statistics only for the specified table space
partitions. This option is valid for non-LOB table spaces only.
| If you specify a base table space with the STATISTICS keyword, DB2 does not
| gather statistics for the related XML table space or its indexes.
You cannot collect inline statistics for indexes on specific catalog and directory
tables. See “Reorganizing the catalog and directory” on page 498 for the list of
unsupported catalog and directory tables.
Restriction:
v If you specify STATISTICS for encrypted data, DB2 might not provide useful
statistics on this data.
| v You cannot specify STATISTICS if you specify the CLONE keyword.
TABLE
Specifies the table for which column information is to be gathered. All tables
must belong to the table space that is specified in the TABLESPACE option.
Do not specify STATISTICS TABLE table-name with REORG TABLESPACE LIST.
Instead, specify STATISTICS TABLE (ALL).
(ALL)
Specifies that information is to be gathered for all columns of all tables in
the table space.
(table-name)
Specifies the tables for which column information is to be gathered. If you
omit the qualifier, the user identifier for the utility job is used. Enclose the
table name in quotation marks if the name contains a blank.
If you specify more than one table, you must repeat the TABLE option.
Multiple TABLE options must be specified entirely before or after any
INDEX keyword that may also be specified. For example, the INDEX
keyword may not be specified between any two TABLE keywords.
SAMPLE integer
Indicates the percentage of rows to be sampled when collecting non-indexed
column statistics. You can specify any value from 1 through 100. The default is
25. The SAMPLE option is not allowed for LOB table spaces.
COLUMN
Specifies columns for which column information is to be gathered.
You can specify this option only if you specify a particular table for which
statistics are to be gathered (TABLE (table-name)). If you specify particular
tables and do not specify the COLUMN option, the default, COLUMN(ALL), is
used. If you do not specify a particular table when using the TABLE option,
you cannot specify the COLUMN option; however, COLUMN(ALL) is
assumed.
(ALL)
Specifies that statistics are to be gathered for all columns in the table.
(column-name, ...)
Specifies the columns for which statistics are to be gathered.
You can specify a list of column names; the maximum is 10. If you specify
more than one column, separate each name with a comma.
INDEX
Specifies indexes for which information is to be gathered. Column information
is gathered for the first column of the index. All the indexes must be associated
with the same table space, which must be the table space that is specified in
the TABLESPACE option.
Do not specify STATISTICS INDEX index-name with REORG TABLESPACE
LIST. Instead, specify STATISTICS INDEX (ALL).
(ALL) Specifies that the column information is to be gathered for all indexes
that are defined on tables that are contained in the table space.
(index-name)
Specifies the indexes for which information is to be gathered. Enclose
the index name in quotation marks if the name contains a blank.
KEYCARD
Indicates that all of the distinct values in all of the 1 to n key column
combinations for the specified indexes are to be collected. n is the number of
columns in the index.
FREQVAL
Specifies that frequent-value statistics are to be collected. If you specify
FREQVAL, you must also specify NUMCOLS and COUNT.
NUMCOLS
Indicates the number of key columns to concatenate together when you
collect frequent values from the specified index. Specifying 3 means that
DB2 is to collect frequent values on the concatenation of the first three key
columns. The default is 1, which means DB2 is to collect frequent values
on the first key column of the index.
COUNT
Indicates the number of frequent values that are to be collected. For
example, specifying 15 means that DB2 is to collect 15 frequent values from
the specified key columns. The default is 10.
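As a sketch that combines these statistics keywords (the table space name is a
placeholder; adjust the sampling and frequency values for your data), the
following statement collects inline statistics for all tables and indexes,
including frequent values on the concatenation of the first two key columns of
each index:
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL REFERENCE
  STATISTICS TABLE(ALL) SAMPLE 50
  INDEX(ALL) KEYCARD FREQVAL NUMCOLS 2 COUNT 15
  REPORT YES UPDATE ALL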
REPORT
Specifies whether a set of messages is to be generated to report the collected
statistics.
NO
Indicates that the set of messages is not to be sent as output to SYSPRINT.
The default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT. The
generated messages are dependent on the combination of keywords (such
as TABLESPACE, INDEX, TABLE, and COLUMN) that are specified with
the RUNSTATS utility. However, these messages are not dependent on the
specification of the UPDATE option. REPORT YES always generates a
report of SPACE and ACCESSPATH statistics.
UPDATE
Indicates whether the collected statistics are to be inserted into the catalog
tables. UPDATE also allows you to select statistics that are used for access path
selection or statistics that are used by database administrators.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that only the catalog table columns that provide statistics that
are used for access path selection are to be updated.
SPACE
Indicates that only the catalog table columns that provide statistics to
help database administrators assess the status of a particular table
space or index are to be updated.
NONE
Indicates that no catalog tables are to be updated with the collected
statistics. This option is valid only when REPORT YES is specified.
HISTORY
Specifies that all catalog table inserts or updates to the catalog history tables
are to be recorded.
The default value is whatever value is specified in the STATISTICS HISTORY
field on panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that only the catalog history table columns that provide
statistics that are used for access path selection are to be updated.
SPACE
Indicates that only space-related catalog statistics are to be updated in
catalog history tables.
NONE
Indicates that no catalog history tables are to be updated with the
collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even if statistics have not been gathered on some
partitions; for example, partitions have not had any data loaded. Aggregate
statistics are used by the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some partitions might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all partitions.
If data is not available for all partitions, DSNU623I message is issued if the
installation value for STATISTICS ROLLUP on panel DSNTIPO is set to NO.
PUNCHDDN ddname
Specifies the DD statement for a data set that is to receive the LOAD utility
control statements that are generated by REORG TABLESPACE UNLOAD
EXTERNAL or REORG TABLESPACE DISCARD FROM TABLE ... WHEN.
ddname is the DD name.
The default is SYSPUNCH.
PUNCHDDN is required if the limit key of the last partition of a partitioned
table space has been reduced.
The PUNCHDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the current job
step and a TEMPLATE name, the utility uses the DD name. For more
information about TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on
page 641.
DISCARDDN ddname
Specifies the DD statement for a discard data set, which contains copies of
records that meet the DISCARD FROM TABLE ... WHEN specification.
ddname is the DD name.
If you omit the DISCARDDN option, the utility saves discarded records only if
a SYSDISC DD statement is in the JCL input.
The default is SYSDISC.
The DISCARDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the current job
step and a TEMPLATE name, the utility uses the DD name. For more
information about TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on
page 641.
UNLDDN ddname
Specifies the name of the unload data set.
ddname is the DD name of the unload data set. The default is SYSREC.
DISCARD
Specifies that records that meet the specified WHEN conditions are to be
discarded during REORG TABLESPACE UNLOAD CONTINUE or UNLOAD
PAUSE. If you specify DISCARDDN or a SYSDISC DD statement in the JCL,
discarded records are saved in the associated data set.
You can specify any SHRLEVEL option with DISCARD; however, if you
specify SHRLEVEL CHANGE, modifications that are made during the
reorganization to data rows that match the discard criteria are not permitted.
In this case, REORG TABLESPACE terminates with an error.
If you specify DISCARD, rows are decompressed and decoded by any edit
routines. If you also discard the rows to a data set, rows are decoded by any field
procedure, and the following columns are converted to DB2 external format:
v SMALLINT
v INTEGER
v FLOAT
v DECIMAL
v TIME
v TIMESTAMP
Otherwise, edit routines or field procedures are bypassed on both the
UNLOAD and RELOAD phases for table spaces. Validation procedures are not
invoked during either phase.
| You cannot specify DISCARD for a base table with XML columns or for an
| XML table space.
| Region size: The recommended minimum region size is 4096 KB. Region sizes
| greater than 32 MB enable increased parallelism for index builds. Data unload and
| reload parallelism can also benefit from a greater region size value.
The number of rows in the mapping table should not exceed 110% of the number
of rows in the table space or partition that is to be reorganized. The mapping table
must have only the columns and the index that are created by the following SQL
statements:
CREATE TABLE table-name1
(TYPE CHAR(1) NOT NULL,
SOURCE_RID CHAR(5) NOT NULL,
TARGET_XRID CHAR(9) NOT NULL,
LRSN CHAR(6) NOT NULL);
CREATE UNIQUE INDEX index-name1 ON table-name1
(SOURCE_RID ASC, TYPE, TARGET_XRID, LRSN);
The REORG utility removes all rows from the mapping table when the utility
completes.
You must specify the TARGET_XRID column as CHAR(9), even though the RIDs
are 5 bytes long.
You must have DELETE, INSERT, and UPDATE authorization on the mapping
table.
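For illustration, a REORG invocation that uses SHRLEVEL CHANGE with such a
mapping table might take the following general form (a sketch; MYUSER.MAP_TBL is a
placeholder mapping table name):
REORG TABLESPACE DSN8D91A.DSN8S91D
  SHRLEVEL CHANGE
  MAPPINGTABLE MYUSER.MAP_TBL
Because SHRLEVEL CHANGE takes an inline image copy, the job must also make a
copy data set available (for example, a SYSCOPY DD statement or a COPYDDN
template).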
You can run more than one REORG SHRLEVEL CHANGE job concurrently on
| separate table spaces. You can also run more than one REORG SHRLEVEL
| CHANGE job concurrently on different partitions of the same table space, but only
| if the table space does not have any NPIs. When you run concurrently with other
jobs, each REORG job must have a separate mapping table. The mapping tables do
not need to reside in separate table spaces. If only one mapping table exists, the
REORG jobs must be scheduled to run serially. If more than one REORG job tries
to access the same mapping table at the same time, one of the REORG jobs fails.
For a sample of using REORG with SHRLEVEL CHANGE and a sample mapping
table and index, see job sample DSNTEJ1 in DB2 Installation Guide.
For example, assume that you create a table space with three partitions. Table 79
shows the mapping that exists between the physical and logical partition numbers.
Table 79. Mapping of physical and logical partition numbers when a table space with three
partitions is created.
Logical partition number Physical partition number
1 1
2 2
3 3
Assume that you then try to execute a REORG TABLESPACE REBALANCE PART
1:2. This statement requests a reorganization and rebalancing of physical partitions
1 and 2. Note that physical partition 1 is logical partition 2, and physical partition
2 is logical partition 4. Thus, the utility is processing logical partitions 2 and 4. If
during the course of rebalancing, the utility needs to move keys from logical
partition 2 to logical partition 3, the job fails, because logical partition 3 is not
within the specified physical partition range.
REORG-pending status: You must allocate a discard data set (SYSDISC) or specify
the DISCARDDN option if the last partition of the table space is in
REORG-pending status.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. Required if you specify DISCARDDN.
3. Required if you specify PUNCHDDN.
4. Required unless NOSYSREC or SHRLEVEL CHANGE is specified.
5. Required if a partition is in REORG-pending status or REBALANCE, COPYDDN,
RECOVERYDDN, SHRLEVEL REFERENCE, or SHRLEVEL CHANGE is specified.
6. Required if NOSYSREC or SHRLEVEL CHANGE is specified, but SORTDEVT is not
specified.
7. Required if any indexes exist and SORTDEVT is not specified.
8. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate
the data set. Otherwise, DFSORT dynamically allocates the temporary data set.
| 9. If you specify the SORTDEVT keyword, the data sets are dynamically allocated. It is
| recommended that you use dynamic allocation by specifying SORTDEVT in the utility
| statement because dynamic allocation reduces the maintenance required of the utility
| job JCL.
| 10. If UTPRINT is allocated to SYSOUT, the data sets are dynamically allocated.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space
Object that is to be reorganized.
Calculating the size of the unload data set: The required size for the unload data
set varies depending on the options that you use for REORG.
1. If you use REORG with UNLOAD PAUSE or CONTINUE and you specify
KEEPDICTIONARY (assuming that a compression dictionary already exists),
the size of the unload data set, in bytes, is the VSAM high-allocated RBA for
the table space. You can obtain the high-allocated RBA from the associated
VSAM catalog.
For SHRLEVEL CHANGE, also add the result of the following calculation (in
bytes) to the VSAM high-used RBA:
number of records * 11
2. If you use REORG with UNLOAD ONLY, UNLOAD PAUSE, or CONTINUE
and you do not specify KEEPDICTIONARY, you can calculate the size of the
unload data set, in bytes, by using the following formula:
maximum row length * number of rows
The maximum row length is the row length, including the 6-byte record prefix,
plus the length of the longest clustering key. If multiple tables exist in the table
space, use the following formula to determine the maximum row length:
| Sum over all tables ((row length + (2 * number of VARBINARY
| columns)) * number of rows)
For SHRLEVEL CHANGE, also add the result of the following formula to the
preceding result:
(21 * ((NEARINDREF + FARINDREF) * 1.1))
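For example, with a maximum row length of 200 bytes (including the 6-byte record
prefix and the longest clustering key) and 1,000,000 rows, the unload data set needs
approximately 200 * 1,000,000 = 200,000,000 bytes (about 191 MB). For SHRLEVEL
CHANGE with NEARINDREF + FARINDREF = 10,000, add another
21 * (10,000 * 1.1) = 231,000 bytes. These numbers are illustrative only.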
For certain table spaces in the catalog and directory, the unload data set for the
table spaces has a different format. The calculation for the size of this data set is
as follows:
data set size in bytes = (28 + longrow) * numrows
See “Reorganizing the catalog and directory” on page 498 for more information
about reorganizing catalog and directory table spaces.
Calculating the size of the sort work data sets: Allocating twice the space that is
used by the unload data sets is usually adequate for the sort work data sets. For
compressed data, double again the amount of space that is allocated for the sort
work data sets if you use either of the following REORG options:
v UNLOAD PAUSE without KEEPDICTIONARY
v UNLOAD CONTINUE without KEEPDICTIONARY
Using two or three large SORTWKnn data sets is preferable to using several small
ones. If adequate space is not available, you cannot run REORG.
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
Note: The accuracy of the data set size calculation depends on recent information in the SYSTABLEPART catalog table.
Specifying a destination for DFSORT messages: The REORG utility job step must
contain a UTPRINT DD statement that defines a destination for messages that are
issued by DFSORT during the SORT phase of REORG. DB2I, the %DSNU CLIST
command, and the DSNUPROC procedure use the following default DD statement:
//UTPRINT DD SYSOUT=A
Calculating the size of the statistics sort work data sets: To calculate the
approximate size (in bytes) of the ST01WKnn data set, use the following formula:
For user-managed data sets, you must preallocate the shadow data sets before you
execute REORG with SHRLEVEL REFERENCE or SHRLEVEL CHANGE. If a table
space, partition, or index resides in DB2-managed data sets and shadow data sets
do not already exist when you execute REORG, DB2 creates the shadow data sets.
At the end of REORG processing, the DB2-managed shadow data sets are deleted.
Shadow data set names: Each shadow data set must have the following name:
catname.DSNDBx.dbname.psname.y0001.Lnnn
To determine the names of existing shadow data sets, execute one of the following
queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
SELECT DBNAME, TSNAME, IPREFIX
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
SELECT DBNAME, IXNAME, IPREFIX
FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
For a partitioned table space, DB2 returns rows from which you select the row for
the partitions that you want to reorganize.
For example, assume that you have a ten-partition table space and you want to
determine a naming convention for the data set in order to successfully execute the
REORG utility with the SHRLEVEL CHANGE PART 2:6 options. The following
queries of the DB2 catalog tables SYSTABLEPART and SYSINDEXPART provide
the required information:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = ’DBDV0701’ AND TSNAME = ’TPDV0701’
ORDER BY PARTITION;
SELECT IXNAME, PARTITION, IPREFIX FROM SYSIBM.SYSINDEXPART
WHERE IXNAME = ’IXDV0701’
ORDER BY PARTITION;
The preceding queries produce the information that is shown in Table 84 and
Table 85.
Table 85. Query results from the second preceding query (continued)
IXNAME PARTITION IPREFIX
IXDV0701 7 I
IXDV0701 6 J
IXDV0701 5 J
IXDV0701 4 I
IXDV0701 3 J
IXDV0701 2 I
IXDV0701 1 I
To execute REORG SHRLEVEL CHANGE PART 2:6, you need to preallocate the
following shadow objects. The naming convention for these objects uses information
from the query results that are shown in Table 84 on page 491 and Table 85 on
page 491.
v catname.DSNDBC.DBDV0701.TPDV0701.J0001.A002
v catname.DSNDBC.DBDV0701.TPDV0701.I0001.A003
v catname.DSNDBC.DBDV0701.TPDV0701.J0001.A004
v catname.DSNDBC.DBDV0701.TPDV0701.I0001.A005
v catname.DSNDBC.DBDV0701.TPDV0701.I0001.A006
v catname.DSNDBC.DBDV0701.IXDV0701.J0001.A002
v catname.DSNDBC.DBDV0701.IXDV0701.I0001.A003
v catname.DSNDBC.DBDV0701.IXDV0701.J0001.A004
v catname.DSNDBC.DBDV0701.IXDV0701.I0001.A005
v catname.DSNDBC.DBDV0701.IXDV0701.I0001.A006
Defining shadow data sets: Consider the following actions when you preallocate
the data sets:
v Allocate the shadow data sets according to the rules for user-managed data sets.
v Define the shadow data sets as LINEAR.
v Use SHAREOPTIONS(3,3).
v Define the shadow data sets as EA-enabled if the original table space or index
space is EA-enabled.
v Allocate the shadow data sets on the volumes that are defined in the storage
group for the original table space or index space.
If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
the SECQTY value for the table space or index space.
Recommendation: Use the MODEL option, which causes the new shadow data set
to be created like the original data set. This method is shown in the following
example:
DEFINE CLUSTER +
(NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
DATA +
(NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’) )
Creating shadow data sets for indexes: When you preallocate data sets for indexes,
create the shadow data sets as follows:
v Create shadow data sets for the partition of the table space and the
corresponding partition in each partitioning index and data-partitioned
secondary index.
| Estimating the size of shadow data sets: If you have not changed the value of
| FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
| comparable to the amount of required space for the original data set.
Preallocating shadow data sets for REORG PART: By creating the shadow data
sets before executing REORG PART, even for DB2-managed data sets, you prevent
possible over-allocation of the disk space during REORG processing. When
reorganizing a partition, you must create the shadow data sets for the partition of
the table space and for the partition of the partitioning index. In addition, before
executing REORG PART with SHRLEVEL REFERENCE or SHRLEVEL CHANGE
on partition mmm of a partitioned table space, you must create a shadow data set
for each nonpartitioning index that resides in user-defined data sets. Each shadow
| data set is to be used for a copy of the index and must be as large as the entire
| original nonpartitioned index. The name for this shadow data set has the form
catname.DSNDBx.dbname.psname.y0mmm.Annn.
Beginning in Version 8, the SORTKEYS option is the default. Therefore, the REORG
TABLESPACE utility does not require SYSUT1 and SORTOUT data sets. The
WORKDDN keyword, which provided the DD names of the SYSUT1 and
SORTOUT data sets in earlier versions of DB2, is not needed and is ignored. The
SORTKEYS keyword is also ignored. You do not need to modify existing control
statements to remove the WORKDDN keyword or the SORTKEYS keyword.
You can determine when to run REORG for non-LOB table spaces and indexes by
using the OFFPOSLIMIT and INDREFLIMIT catalog query options. If you specify
the REPORTONLY option, REORG produces a report that indicates whether a
REORG is recommended; a REORG is not performed.
When you specify the catalog query options along with the REPORTONLY option,
REORG produces a report with one of the following return codes:
1 No limit met; no REORG is performed or recommended.
2 REORG is performed or recommended.
Information from the SYSTABLEPART catalog table can also tell you how well disk
space is being used. If you want to find the number of varying-length rows that
were relocated to other pages because of an update, run RUNSTATS, and then
issue the following statement:
SELECT CARD, NEARINDREF, FARINDREF
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = 'XXX'
AND TSNAME = 'YYY';
A large number (relative to previous values that you have received) for
FARINDREF indicates that I/O activity on the table space is high. If you find that
this number increases over a period of time, you probably need to reorganize the
table space to improve performance, and increase PCTFREE or FREEPAGE for the
table space with the ALTER TABLESPACE statement.
Issue the following statement to determine whether the rows of a table are stored
in the same order as the entries of its clustering index:
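A query of the following general form (an illustrative sketch; substitute the creator
and name of the clustering index) returns the clustering indicators that are
discussed in the next paragraphs:
SELECT CARDF, NEAROFFPOSF, FAROFFPOSF
FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR = ’index_creator’
AND IXNAME = ’index_name’;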
Several indicators are available to signal a time for reorganizing table spaces. A
large value for FAROFFPOSF might indicate that clustering is deteriorating. In this
case, reorganizing the table space can improve query performance.
A large value for NEAROFFPOSF might indicate also that reorganization might
improve performance. However, in general NEAROFFPOSF is not as critical a
factor as FAROFFPOSF.
For any table, the REORG utility repositions rows into the sequence of the key of
the clustering index that is defined on that table.
For specific REORG threshold numbers, see DB2 Performance Monitoring and Tuning
Guide.
Recommendation: Run RUNSTATS if the statistics are not current. If you have an
object that should also be reorganized, run REORG with STATISTICS and take
inline copies. If you run REORG PART and nonpartitioning indexes exist,
subsequently run RUNSTATS for each nonpartitioning index.
End of Product-sensitive Programming Interface
REORG with SHRLEVEL NONE, the default, reloads the reorganized data into the
original area that is being reorganized. Applications have read-only access during
unloading and no access during reloading. For data-partitioned secondary indexes,
the option rebuilds the index parts during the BUILD phase. (Rebuilding these
indexes does not create contention between parallel REORG PART jobs.) For
nonpartitioned secondary indexes, the option corrects the indexes. Using REORG
SHRLEVEL NONE is the only access level that resets REORG-pending status.
REORG with SHRLEVEL REFERENCE reloads the reorganized data into a new
(shadow) copy of the area that is being reorganized. Near the end of
reorganization, DB2 switches the future access of the application from the original
copy to the shadow copy.
REORG with SHRLEVEL CHANGE reloads the reorganized data into a shadow
copy of the area that is being reorganized. For REORG TABLESPACE SHRLEVEL
CHANGE, a mapping table correlates RIDs in the original copy of the table space
or partition with RIDs in the shadow copy; see “Mapping table with SHRLEVEL
CHANGE” on page 483 for instructions on creating the mapping table.
Applications can read from and write to the original area, and DB2 records the
writing in the log. DB2 then reads the log and applies it to the shadow copy to
bring the shadow copy up to date. This step executes iteratively, with each
iteration processing a sequence of log records.
Near the end of reorganization, DB2 switches the future access of the application
from the original data to the shadow copy. Applications have read-write access
during unloading and reloading, a brief period of read-only access during the last
iteration of log processing, and a brief period of no access during switching.
Operator actions: LONGLOG specifies the action that DB2 performs if the pace of
processing log records between iterations is slow. See “Option descriptions” on
page 459 for a description of the LONGLOG options. If no action is taken after
message DSNU377I is sent to the console, the LONGLOG option automatically
goes into effect. Some examples of possible actions that you can take:
v Execute the START DATABASE(database) SPACENAM(tablespace) ... ACCESS(RO)
command and the QUIESCE utility to drain the write claim class. DB2 performs
the last iteration, if MAXRO is not DEFER. After the QUIESCE, you should also
execute the ALTER UTILITY command, even if you do not change any REORG
parameters.
v Execute the START DATABASE(database) SPACENAM(tablespace) ... ACCESS(RO)
command and the QUIESCE utility to drain the write claim class. Then, after
reorganization makes some progress, execute the START DATABASE(database)
SPACENAM(tablespace) ... ACCESS(RW) command. This increases the likelihood
that processing of log records between iterations can continue at an acceptable
rate. After the QUIESCE, you should also execute the ALTER UTILITY
command, even if you do not change any REORG parameters.
DB2 does not take the action that is specified in the LONGLOG phrase if any one
of these events occurs before the delay expires:
v An ALTER UTILITY command is issued.
v A TERM UTILITY command is issued.
v DB2 estimates that the time to perform the next iteration is less than or equal to
the time that is specified in the MAXRO keyword.
v REORG terminates for any reason (including the deadline).
If you specify UNLOAD ONLY, REORG unloads data from the table space and
then ends. You can reload the data at a later date with the LOAD utility, specifying
FORMAT UNLOAD.
Between unloading and reloading, you can add a validation routine to a table.
During reloading, all the rows are checked by the validation procedure.
Do not use REORG UNLOAD ONLY to propagate data. When you specify the
UNLOAD ONLY option, REORG unloads only the data that physically resides in
the base table space; LOB and XML columns are not unloaded. For purposes of
data propagation, you should use UNLOAD or REORG UNLOAD EXTERNAL
instead.
However, if you use REORG SHRLEVEL NONE LOG NO, RECOVER cannot
restore data from the log past the point at which the object was last reorganized
successfully. Therefore, you must take an image copy after running REORG with
LOG NO to establish a level of fallback recovery.
Attention: You must take a full image copy before and after reorganizing any
catalog or directory object. Otherwise, you cannot recover any catalog or directory
objects without the full image copies. When you reorganize the
DSNDB06.SYSCOPY table space with the LOG NO option and omit the
COPYDDN option, DB2 places the table space in COPY-pending status. Take a full
image copy of the table space to remove the COPY-pending status before
continuing to reorganize the catalog or directory table spaces.
The FASTSWITCH YES option is ignored for catalog and directory objects.
When to run REORG on the catalog and directory: You do not need to run
REORG TABLESPACE on the catalog and directory table spaces as often as you do
on user table spaces. RUNSTATS collects statistics about user table spaces, which
you use to determine if a REORG is necessary. You can use the same statistics to
determine if a REORG is needed for catalog table spaces. The only difference is the
information in the columns NEAROFFPOSF and FAROFFPOSF in table
SYSINDEXPART. The values in these columns can be double the recommended
value for user table spaces before a reorganization is needed if the table space is
DSNDB06.SYSDBASE, DSNDB06.SYSVIEWS, DSNDB06.SYSPLAN,
DSNDB06.SYSGROUP, or DSNDB06.SYSDBAUT.
Reorganize the whole catalog before a catalog migration or once every couple of
years. Reorganizing the catalog is useful for reducing the size of the catalog table
spaces. To improve query performance, reorganize the indexes on the catalog
tables.
Associated directory table spaces: When certain catalog table spaces are
reorganized, you should also reorganize the associated directory table space. The
associated directory table spaces are listed in Table 86.
– DSNDB06.SYSDBAUT
– DSNDB06.SYSGROUP
– DSNDB06.SYSPLAN
– DSNDB06.SYSVIEWS
– DSNDB01.DBD01
v REORG TABLESPACE with STATISTICS cannot collect inline statistics on the
following catalog and directory table spaces:
– DSNDB06.SYSDBASE
– DSNDB06.SYSDBAUT
– DSNDB06.SYSGROUP
– DSNDB06.SYSPLAN
– DSNDB06.SYSVIEWS
– DSNDB06.SYSSTATS
– DSNDB06.SYSHIST
– DSNDB01.DBD01
Phases for reorganizing the catalog and directory: REORG TABLESPACE processes
certain catalog and directory table spaces differently from other table spaces; it
does not execute the BUILD and SORT phases for the following table spaces:
v DSNDB06.SYSDBASE
v DSNDB06.SYSDBAUT
v DSNDB06.SYSGROUP
v DSNDB06.SYSPLAN
v DSNDB06.SYSVIEWS
v DSNDB01.DBD01
For these table spaces, REORG TABLESPACE reloads the indexes (in addition to
the table space) during the RELOAD phase, rather than storing the index keys in a
work data set for sorting.
For all other catalog and directory table spaces, DB2 uses index build parallelism.
For REORG with SHRLEVEL REFERENCE or CHANGE, you can use the ALTER
STOGROUP command to change the characteristics of a DB2-managed data set. To
change the characteristics of a user-managed data set, specify the desired new
characteristics when you create the shadow data set; see “Shadow data sets” on
page 490 for more information about user-managed data sets. For example, placing
the original and shadow data sets on different disk volumes might reduce
contention and thus improve the performance of REORG and the performance of
applications during REORG execution.
While REORG is interrupted by PAUSE, you can redefine the table space attributes
for user-defined table spaces. PAUSE is not required for STOGROUP-defined table
spaces. Attribute changes are done automatically by a REORG following an ALTER
TABLESPACE.
If the table space contains rows with VARCHAR columns, DB2 might not be able
to accurately estimate the number of rows. If the estimated number of rows is too
high and the sort work space is not available, or if the estimated number of rows is
too low, DFSORT might fail and cause an abend. Important: Run RUNSTATS
UPDATE SPACE before the REORG so that DB2 calculates a more accurate
estimate.
You can override this dynamic allocation of sort work space in two ways:
v Allocate the sort work data sets with SORTWKnn DD statements in your JCL.
v Override the DB2 row estimate in FILSZ using control statements that are
passed to DFSORT. However, using control statements overrides size estimates
that are passed to DFSORT in all invocations of DFSORT in the job step,
including sorting keys to build indexes, and any sorts that are done in any other
utility that is executed in the same step. The result might be reduced sort
efficiency or an abend due to an out-of-space condition.
If you use ALTER INDEX to modify the limit keys for partition boundaries, you
must subsequently use REORG TABLESPACE to redistribute data in the
partitioned table spaces based on the new key values and to reset the
REORG-pending status. The following example specifies options that help
maximize performance while performing the necessary rebalancing reorganization:
REORG TABLESPACE DSN8S91E PART 2:3
NOSYSREC
COPYDDN SYSCOPY
STATISTICS TABLE INDEX(ALL)
You can reorganize a range of partitions, even if the partitions are not in
REORG-pending status. If you specify the STATISTICS keyword, REORG collects
data about the specified range of partitions.
For more restrictions when using REBALANCE, see “Restrictions when using
REBALANCE” on page 484.
Rebalancing partitions when the clustering index does not match the
partitioning key: For a table that has a clustering index that does not match the
partitioning key, you must run REORG TABLESPACE twice so that data is
rebalanced and all rows are in clustering order. The first utility execution
rebalances the data and the second utility execution sorts the data.
For example, assume you have a table space that was created with the following
SQL:
------------------------------------------
SQL to create a table and index with
separate columns for partitioning
and clustering
------------------------------------------
CREATE TABLESPACE TS IN DB
USING STOGROUP SG
NUMPARTS 4 BUFFERPOOL BP0;
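A minimal sketch of such a table and clustering index (column and object names are
illustrative and are not taken from the original example) might look like this:
CREATE TABLE TB
  (ACCT_NUM  INTEGER     NOT NULL,
   LAST_NAME CHAR(20)    NOT NULL,
   BALANCE   DECIMAL(9,2))
  IN DB.TS
  PARTITION BY (ACCT_NUM)
    (PARTITION 1 ENDING AT (2500),
     PARTITION 2 ENDING AT (5000),
     PARTITION 3 ENDING AT (7500),
     PARTITION 4 ENDING AT (9999));
CREATE INDEX IX ON TB
  (LAST_NAME ASC)
  CLUSTER;
Because the clustering key (LAST_NAME) differs from the partitioning column
(ACCT_NUM), rebalancing alone does not leave the rows in clustering order, which is
why the second reorganization that is described below is recommended.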
To rebalance the data across the four partitions, use the following REORG
TABLESPACE control statement:
REORG TABLESPACE DB.TS REBALANCE
After the preceding utility job completes, the table space is placed in AREO* status
to indicate that a subsequent reorganization is recommended to ensure that the
rows are in clustering order. For this subsequent reorganization, use the following
REORG TABLESPACE control statement:
REORG TABLESPACE DB.TS
To create an inline copy, use the COPYDDN and RECOVERYDDN keywords. You
can specify up to two primary copies and two secondary copies. Inline copies are
produced during the RELOAD phase of REORG processing.
The total number of duplicate pages is small, with a negligible effect on the
amount of space that is required for the data set. One exception to this guideline is
the case of running REORG SHRLEVEL CHANGE, in which the number of
duplicate pages varies with the number of records that are applied during the LOG
phase.
Improving performance
To improve REORG performance:
| v Run REORG concurrently on separate partitions of a partitioned table space if
| no nonpartitioned indexes exist. When you run REORG on partitions of a
| partitioned table space, the sum of each job’s processor usage is greater than for
| a single REORG job on the entire table space. However, the elapsed time of
| reorganizing the entire table in parallel can be significantly less than it would be
| for a single REORG job.
v Use parallel index build for table spaces or partitions that have more than one
defined index. For more information, see “Building indexes in parallel for
REORG TABLESPACE” on page 505.
v Specify NOSYSREC on your REORG statement. See “Omitting the output data
set” on page 497 for restrictions.
| v If you are not using NOSYSREC, use an UNLDDN template to enable unload
| parallelism.
v If you are using 3990 caching, and you have the nonpartitioning indexes on
RAMAC®, consider specifying YES on the UTILITY CACHE OPTION field of
installation panel DSNTIPE. This option allows DB2 to use sequential prestaging
when reading data from RAMAC for the following utilities:
– LOAD PART integer RESUME
– REORG TABLESPACE PART
For LOAD PART and REORG TABLESPACE PART utility jobs, prefetch reads
remain in the cache longer, which can lead to possible improvements in the
performance of subsequent writes.
v Use inline copy and inline statistics instead of running separate COPY and
RUNSTATS utilities.
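For example, a control statement of the following general form (the object names are
from the DB2 sample database and are used here only for illustration) takes an inline
copy and collects inline statistics in a single pass:
REORG TABLESPACE DSN8D91A.DSN8S91E
  COPYDDN SYSCOPY
  STATISTICS TABLE(ALL) INDEX(ALL)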
When to use DRAIN_WAIT: The DRAIN_WAIT option gives you greater control
over the time that online REORG is to wait for drains. Also, because
DRAIN_WAIT is the aggregate time that online REORG is to wait to perform a
drain on a table space and associated indexes, the length of drains is more
predictable than if each partition and index has its own individual waiting time
limit.
By specifying a short delay time (less than the system timeout value, IRLMRWT),
you can reduce the impact on applications by reducing time-outs. You can use the
RETRY option to give the online REORG more chances to complete successfully. If
you do not want to use RETRY processing, you can still use DRAIN_WAIT to set a
specific and more consistent limit on the length of drains.
RETRY allows an online REORG that is unable to drain the objects that it requires
to try again after a set period (RETRY_DELAY). During the RETRY_DELAY period,
all of the objects are available for read-write access in the case of SHRLEVEL
CHANGE. For SHRLEVEL REFERENCE, the objects keep the access that existed
before the attempted drain (that is, if the drain fails in the UNLOAD phase, the
objects remain in read-write access; if the drain fails in the SWITCH phase, the
objects remain in read-only access). Because application SQL statements can queue
behind any unsuccessful drain that the online REORG attempted, a reasonable
delay before retrying is recommended so that this work can complete; the default
is 5 minutes.
When you specify DRAIN WRITERS (the default) with SHRLEVEL CHANGE and
RETRY, multiple read-only log iterations can occur. Generally, online REORG might
need to do more work when RETRY is specified, and this might result in multiple
or extended periods of restricted access. Applications that run alongside online
REORG need to perform frequent commits. During the interval between retries, the
utility is still active, and consequently other utility activity against the table space
and indexes is restricted.
When you run a table space REORG with both RETRY and SHRLEVEL CHANGE
specified, the size of the copy that REORG takes can increase.
Figure 77 on page 506 shows the flow of a REORG TABLESPACE job that uses a
parallel index build. DB2 starts multiple subtasks to sort index keys and build
indexes in parallel. If you specify STATISTICS, additional subtasks collect the
sorted keys and update the catalog table in parallel, eliminating the need for a
second scan of the index by a separate RUNSTATS job.
Figure 77. How indexes are built during a parallel index build
REORG TABLESPACE uses parallel index build if more than one index needs to be
built (including the mapping index for SHRLEVEL CHANGE). You can either let
the utility dynamically allocate the data sets that SORT needs for this parallel
index build or provide the necessary data sets yourself.
Select one of the following methods to allocate sort work and message data sets:
Method 1: Allow REORG TABLESPACE to allocate all sort work and message data
sets.
1. Specify the SORTDEVT keyword in the utility statement.
2. Allocate UTPRINT to SYSOUT.
Method 2: Control allocation of sort work data sets, while REORG TABLESPACE
allocates message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: Exercise the most control over rebuild processing; specify both sort
work data sets and message data sets.
1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
Data sets used: If you select Method 2 or 3 in the preceding information, define
the necessary data sets by using the information provided here, along with
“Determining the number of sort subtasks” on page 507, “Allocation of sort
subtasks” on page 507, and “Estimating the sort work file size” on page 507.
Each sort subtask must have its own group of sort work data sets and its own
print message data set. Possible reasons to allocate data sets in the utility job JCL
rather than using dynamic allocation are:
v To control the size and placement of the data sets
v To minimize device contention
v To optimally utilize free disk space
v To limit the number of utility subtasks that are used to build indexes
The DD name SWnnWKmm defines the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more
data sets that are to be used by that subtask pair. For example:
SW01WK01 Is the first sort work data set that is used by the subtask that
builds the first index.
SW01WK02 Is the second sort work data set that is used by the subtask that
builds the first index.
SW02WK01 Is the first sort work data set that is used by the subtask that
builds the second index.
SW02WK02 Is the second sort work data set that is used by the subtask that
builds the second index.
The DD name UTPRINnn defines the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
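For example, a job that limits processing to two subtask pairs, with two sort work
data sets for each pair and its own message data sets, might include DD statements
such as these (unit and space values are illustrative):
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(10,10))
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(10,10))
//SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(10,10))
//SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(10,10))
//UTPRIN01 DD SYSOUT=*
//UTPRIN02 DD SYSOUT=*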
During parallel index build processing, REORG distributes all indexes among the
subtask pairs according to the index creation date, assigning the first created index
to the first subtask pair. For SHRLEVEL CHANGE, the mapping index is assigned
last.
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys that are present in all of the indexes
that are being processed by the subtask in order to calculate each sort work file
size. After you determine which indexes are assigned to which subtask pairs, use
the following formula to calculate the required space:
8 + 128 + 50 + 2 + 2 = 190
Do not count keys that belong to partitioning indexes in the sort work data
set size calculation. The space estimation formula might indicate
that 0 bytes are required (because the only index that is processed by a task set is
the partitioning index). In this case, if you allocate your own sort work data set
groups, you still need to allocate sort work data sets for this task set, but you can
use a minimal allocation, such as 1 track.
If the error is on the unloaded data, or if you used the NOSYSREC option,
terminate REORG by using the TERM UTILITY command. Then recover the table
space, using RECOVER, and run the REORG job again.
| To ensure that the REORG utility is able to condense the data into the minimum
| number of required partitions, parallelism for the REORG utility does not apply to
| partition-by-growth table spaces.
| If the partition-by-growth table space contains LOB or XML columns, the REORG
| TABLESPACE utility minimizes partitions by eliminating existing holes, but does
| not move the data from one partition to another.
| When you reorganize a partition-by-growth table space at the partition level, the
| REORG TABLESPACE utility minimizes partitions by eliminating existing holes.
For segmented table spaces, REORG does not normally need to reclaim space from
dropped tables. Space that is freed by dropping tables in a segmented table space
is immediately available if the table space can be accessed when DROP TABLE is
executed. If the table space cannot be accessed when DROP TABLE is executed (for
example, the disk device is offline), DB2 removes the table from the catalog, but
does not delete all table rows. In this case, the space for the dropped table is not
available until REORG reclaims it.
After you run REORG, the segments for each table are contiguous.
For SHRLEVEL NONE, REORG does not unload LOBs, and it does not reclaim
physical space. Whether the LOB table space is defined with LOG YES or LOG NO
affects logging while a LOB column is reorganized. Table 43 on page 280 shows
the logging output and LOB table space effect, if any. SYSIBM.SYSCOPY is not
updated.
| For SHRLEVEL REFERENCE, LOBs are unloaded to a shadow data set and
| physical space is reclaimed. If you specify SHRLEVEL REFERENCE, LOG NO and
| an inline image copy are required and no updates are logged during the REORG.
If you terminate REORG TABLESPACE with the TERM UTILITY command during
the RELOAD phase, the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the data records are not erased. The table space and
indexes remain in RECOVER-pending status. After you recover the table space,
rerun the REORG job.
v For SHRLEVEL REFERENCE or CHANGE, the data records are reloaded into
shadow objects, so the original objects have not been affected by REORG. You
can rerun the job.
If you terminate REORG with the TERM UTILITY command during the SORT,
BUILD, or LOG phases, the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the indexes that are not yet built remain in
RECOVER-pending status. You can run REORG with the SORTDATA option, or
you can run REBUILD INDEX to rebuild those indexes.
v For SHRLEVEL REFERENCE or CHANGE, the records are reloaded into shadow
objects, so the original objects have not been affected by REORG. You can rerun
the job.
If you terminate a stopped REORG utility with the TERM UTILITY command
during the SWITCH phase, the following conditions apply:
v All data sets that were renamed to their shadow counterparts are renamed to
their original names, so that the objects remain in their original state, and you
can rerun the job.
v If a problem occurs in renaming the data sets to the original names, the objects
remain in RECOVER-pending status, and you cannot rerun the job.
If the SWITCH phase does not complete, the image copy that REORG created is
not available for use by the RECOVER utility. If you terminate an active REORG
utility during the SWITCH phase with the TERM UTILITY command, during the
rename process, the renaming occurs, and the SWITCH phase completes. The
image copy that REORG created is available for use by the RECOVER utility.
The REORG-pending status is not reset until the UTILTERM execution phase. If the
REORG utility abnormally terminates or is terminated, the objects remain in
REORG-pending status and RECOVER-pending status, depending on the phase in
which the failure occurred. See Appendix C, “Advisory or restrictive states,” on
page 895 for information about resetting either status.
Table 87 lists the restrictive states that REORG TABLESPACE sets according to the
phase in which the utility terminated.
Table 87. Restrictive states that REORG TABLESPACE sets.
Phase Effect on restrictive status
UNLOAD No effect.
RELOAD SHRLEVEL NONE:
v Places table space in RECOVER-pending status at the beginning of the
phase and resets the status at the end of the phase.
v Places indexes in RECOVER-pending status.
v Places the table space in COPY-pending status. If COPYDDN is
specified and SORTKEYS is ignored, the COPY-pending status is reset
at the end of the phase. SORTKEYS is ignored for several catalog and
directory table spaces. For a list of these table spaces, see
“Reorganizing the catalog and directory” on page 498.
SHRLEVEL REFERENCE or CHANGE has no effect.
SORT No effect.
BUILD SHRLEVEL NONE resets RECOVER-pending status for indexes and, if
the utility job includes both COPYDDN and SORTKEYS, resets
COPY-pending status for table spaces at the end of the phase.
SHRLEVEL REFERENCE or CHANGE has no effect.
SORTBLD No effect during the sort portion of the SORTBLD phase. During the
build portion of the SORTBLD phase, the effect is the same as for the
BUILD phase.
LOG No effect.
SWITCH No effect. Under certain conditions, if TERM UTILITY is issued, it must
complete successfully; otherwise, objects might be placed in
RECOVER-pending status.
v Jobs with the SORTKEYS option that are restarted in the RELOAD, SORT,
BUILD, or SORTBLD phase always restart from the beginning of the RELOAD
phase.
v Jobs with the SHRLEVEL REFERENCE, NOSYSREC, and SORTDATA options
use RESTART(PHASE) to restart at the beginning of the UNLOAD phase.
| v Jobs with unload parallelism for REORG TABLESPACE SHRLEVEL NONE use
| RESTART(PHASE) to restart at the beginning of the UNLOAD and RELOAD
| phases.
v Jobs that reorganize the following catalog or directory table spaces use
RESTART(PHASE):
– DSNDB06.SYSDBASE
– DSNDB06.SYSDBAUT
– DSNDB06.SYSGROUP
– DSNDB06.SYSPLAN
– DSNDB06.SYSVIEWS
– DSNDB01.DBD01
If you restart a REORG job of one or more of the catalog or directory table spaces
in the preceding list, you cannot use RESTART(CURRENT).
If you restart REORG in the UTILINIT phase, it re-executes from the beginning of
the phase. If REORG abnormally terminates or system failure occurs while it is in
the UTILTERM phase, you must restart the job with RESTART(PHASE).
For each phase of REORG and for each type of REORG TABLESPACE (with
SHRLEVEL NONE, with SHRLEVEL REFERENCE, and with SHRLEVEL
CHANGE), Table 88 indicates the types of restarts that are allowed (CURRENT and
PHASE). A value of None indicates that no restart is allowed. The "Required data
sets" column lists the data sets that must exist to perform the specified type of
restart in the specified phase.
Table 88. REORG TABLESPACE utility restart information for SHRLEVEL NONE, REFERENCE, and CHANGE

          Type of restart        Type of restart           Type of restart
          allowed for            allowed for               allowed for
Phase     SHRLEVEL NONE          SHRLEVEL REFERENCE        SHRLEVEL CHANGE   Required data sets      Notes
UNLOAD    CURRENT, PHASE         CURRENT, PHASE (6)        None              SYSREC
RELOAD    CURRENT, PHASE         CURRENT, PHASE (6)        None              SYSREC                  1, 2
SORT      CURRENT, PHASE         CURRENT, PHASE (6)        None              None                    2, 3
BUILD     CURRENT, PHASE         CURRENT, PHASE (6)        None              None                    2, 3, 4
SORTBLD   CURRENT, PHASE         CURRENT, PHASE (6)        None              None                    2
LOG       Phase does not occur   Phase does not occur (6)  None              None
SWITCH    Phase does not occur   CURRENT, PHASE            CURRENT, PHASE    Originals and shadows   3, 5
Notes:
1. For None, if you specify NOSYSREC, restart is not possible, and you must execute the RECOVER TABLESPACE
utility for the table space or partition. For REFERENCE, if the REORG job includes both SORTDATA and
NOSYSREC, RESTART or RESTART(PHASE) restarts at the beginning of the UNLOAD phase.
2. If you specify SHRLEVEL NONE or SHRLEVEL REFERENCE, and the job includes the SORTKEYS option, use
RESTART or RESTART(PHASE) to restart at the beginning of the RELOAD phase.
3. You can restart the utility with RESTART or RESTART(PHASE). However, because this phase does not take
checkpoints, RESTART restarts from the beginning of the phase.
4. If you specify the PART option with REORG TABLESPACE, you cannot restart the utility at the beginning of the
BUILD phase if any nonpartitioning index is in a page set REBUILD-pending (PSRBD) status.
| 5. If you specify REORG TABLESPACE SHRLEVEL REFERENCE PART with one or more nonpartitioned indexes,
| restart is allowed only in the SWITCH phase.
| 6. For REORG TABLESPACE with SHRLEVEL REFERENCE and PART, if a nonpartitioned index is defined on the
| table space, REORG TABLESPACE cannot be restarted before the SWITCH phase.
For instructions on restarting a utility job, see Chapter 3, “Invoking DB2 online
utilities,” on page 17.
This section includes a series of tables that show which claim classes REORG
drains and any restrictive state that the utility sets on the target object.
For SHRLEVEL NONE, Table 90 shows which claim classes REORG drains and any
restrictive state that the utility sets on the target object. For each column, the table
indicates the claim or drain that is acquired and the restrictive state that is set in
the corresponding phase. UNLOAD CONTINUE and UNLOAD PAUSE, unlike
UNLOAD ONLY, include the RELOAD phase and thus include the drains and
restrictive states of that phase.
Table 90. Claim classes of REORG TABLESPACE SHRLEVEL NONE operations

                                UNLOAD        RELOAD phase of      UNLOAD         RELOAD phase of
                                phase of      REORG if UNLOAD      phase of       REORG PART if UNLOAD
Target                          REORG         CONTINUE or PAUSE    REORG PART     CONTINUE or PAUSE
Table space, partition, or a    DW/UTRO       DA/UTUT              DW/UTRO        DA/UTUT
range of partitions of a
table space
Partitioning index,             DW/UTRO       DA/UTUT              DW/UTRO        DA/UTUT
data-partitioned secondary
index, or partition of either
type of index (1)
Nonpartitioned index (2)        DW/UTRO       DA/UTUT              None           DR
Logical partition of            None          None                 DW/UTRO        DA/UTUT
nonpartitioning index (3)
Legend:
v DA: Drain all claim classes, no concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v None: Any claim, drain, or restrictive state for this object does not change in this phase.
| Notes:
| 1. Includes document ID indexes and node ID indexes over partitioned XML table spaces.
| 2. Includes document ID indexes and node ID indexes over nonpartitioned XML table spaces and XML indexes.
| 3. Includes logical partitions of an XML index over partitioned XML table spaces.
For SHRLEVEL REFERENCE, Table 91 shows which claim classes REORG drains
and any restrictive state that the utility sets on the target object. For each column,
the table indicates the claim or drain that is acquired and the restrictive state that
is set in the corresponding phase.
Table 91. Claim classes of REORG TABLESPACE SHRLEVEL REFERENCE operations

                                UNLOAD phase  SWITCH phase  UNLOAD phase   SWITCH phase
Target                          of REORG      of REORG      of REORG PART  of REORG PART
Table space or partition of     DW/UTRO       DA/UTUT       DW/UTRO        DA/UTUT
table space
Partitioning index,             DW/UTRO       DA/UTUT       DW/UTRO        DA/UTUT
data-partitioned secondary
index, or partition of
either (1)
Nonpartitioned secondary        DW/UTRO       DA/UTUT       CR/UTRW        DA/UTUT
index (2)
Logical partition of            None          None          DW/UTRO        DA/UTUT
nonpartitioning index (3)
Legend:
v DA: Drain all claim classes, no concurrent SQL access.
v DDR: Dedrain the read claim class, concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v None: Any claim, drain, or restrictive state for this object does not change in this phase.
| Notes:
| 1. Includes document ID indexes and node ID indexes over partitioned XML table spaces.
| 2. Includes document ID indexes and node ID indexes over nonpartitioned XML table spaces and XML indexes.
| 3. Includes logical partitions of an XML index over partitioned XML table spaces.
For REORG of an entire table space with SHRLEVEL CHANGE, Table 92 shows
which claim classes REORG drains and any restrictive state that the utility sets on
the target object.
Table 92. Claim classes of REORG TABLESPACE SHRLEVEL CHANGE operations

Target        UNLOAD phase  Last iteration of LOG phase  SWITCH phase
Table space   CR/UTRW       DW/UTRO                      DA/UTUT
Index         CR/UTRW       DW/UTRO                      DA/UTUT
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
For REORG of a partition with SHRLEVEL CHANGE, Table 93 shows which claim
classes REORG drains and any restrictive state that the utility sets on the target
object.
Table 93. Claim classes of REORG TABLESPACE SHRLEVEL CHANGE operations on a partition

Target                          UNLOAD phase  Last iteration of LOG phase   SWITCH phase
Partition of table space        CR/UTRW       DW/UTRO or DA/UTUT (4)        DA/UTUT
Partition of partitioning       CR/UTRW       DW/UTRO or DA/UTUT (4)        DA/UTUT
index (1)
Nonpartitioning index (2)       None          None                          DR
Logical partition of            CR/UTRW       DW/UTRO or DA/UTUT (4)        DA/UTUT
nonpartitioning index (3)
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DDR: Dedrain the read claim class, no concurrent access for SQL repeatable readers.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
v None: Any claim, drain, or restrictive state for this object does not change in this phase.
| Notes:
| 1. Includes document ID indexes and node ID indexes over partitioned XML table spaces.
| 2. Includes document ID indexes and node ID indexes over nonpartitioned XML table spaces and XML indexes.
| 3. Includes logical partitions of an XML index over partitioned XML table spaces.
| 4. DA/UTUT applies if you specify DRAIN ALL.
Table 94 on page 517 shows which utilities can run concurrently with REORG on
the same target object. The target object can be a table space, an index space, or a
partition of a table space or index space. If compatibility depends on particular
options of a utility, that information is also shown.
Table 95 on page 518 shows which DB2 operations can be affected when
reorganizing catalog table spaces.
Table 95. DB2 operations that are affected by reorganizing catalog table spaces
Catalog table space                            Actions that might not run concurrently
Any table space except SYSCOPY and SYSSTR      CREATE, ALTER, and DROP statements
SYSCOPY, SYSDBASE, SYSDBAUT, SYSSTATS,         Utilities
SYSUSER, SYSHIST
SYSDBASE, SYSDBAUT, SYSGPAUT, SYSPKAGE,        GRANT and REVOKE statements
SYSPLAN, SYSUSER
SYSDBAUT, SYSDBASE, SYSGPAUT, SYSPKAGE,        BIND and FREE commands
SYSPLAN, SYSSTATS, SYSUSER, SYSVIEWS
SYSPKAGE, SYSPLAN                              Plan or package execution
When reorganizing a segmented table space, REORG leaves free pages and free
space on each page in accordance with the current values of the FREEPAGE and
PCTFREE parameters. (You can set those values by using the CREATE
TABLESPACE, ALTER TABLESPACE, CREATE INDEX, or ALTER INDEX
statements). REORG leaves one free page after reaching the FREEPAGE limit for
each table in the table space. When reorganizing a nonsegmented table space,
REORG leaves one free page after reaching the FREEPAGE limit, regardless of
whether the loaded records belong to the same or different tables.
– Provide a full image copy for recovery. This action prevents the need to
process the log records that are written during reorganization.
– Permit making incremental image copies later.
You might not need to take an image copy of a table space for which all the
following statements are true:
– The table space is relatively small.
– The table space is used only in read-only applications.
– The table space can be easily loaded again in the event of failure.
See Chapter 11, “COPY,” on page 113 for information about making image
copies.
v If you use COPYDDN, SHRLEVEL REFERENCE, or SHRLEVEL CHANGE, and
the object that you are reorganizing is not a catalog or directory table space for
which COPYDDN is ignored, you do not need to take an image copy.
v Use the RUNSTATS utility on the table space and its indexes if inline statistics
were not collected, so that the DB2 catalog statistics take into account the newly
reorganized data, and SQL paths can be selected with accurate information. You
need to run RUNSTATS on nonpartitioning indexes only if you reorganized a
subset of the partitions.
v If you use REORG TABLESPACE SHRLEVEL CHANGE, you can drop the
mapping table and its index.
v If you use SHRLEVEL REFERENCE or CHANGE, and a table space, partition, or
index resides in user-managed data sets, you can delete the user-managed
shadow data sets.
v If you specify DISCARD on a REORG of a table that is involved in a referential
integrity set, you need to run CHECK DATA for any affected referentially
related objects that were placed in CHECK-pending status.
When you run REORG TABLESPACE, the utility sets all of the rows in the table or
partition to the current object version. The utility also updates the range of used
version numbers for indexes that are defined with the COPY NO attribute. REORG
TABLESPACE sets the OLDEST_VERSION column equal to the
CURRENT_VERSION column in the appropriate catalog column. These updated
values indicate that only one version is active. DB2 can then reuse all of the other
version numbers.
Recycling of version numbers is required when all of the version numbers are
being used. All version numbers are being used when one of the following
situations is true:
v The value in the CURRENT_VERSION column is one less than the value in the
OLDEST_VERSION column.
v The value in the CURRENT_VERSION column is 255 for table spaces or 15 for
indexes, and the value in the OLDEST_VERSION column is 0 or 1.
You can also run LOAD REPLACE, REBUILD INDEX, or REORG INDEX to
recycle version numbers for indexes that are defined with the COPY NO attribute.
To recycle version numbers for indexes that are defined with the COPY YES
attribute or for table spaces, run MODIFY RECOVERY.
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
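The following MODIFY RECOVERY statement is a sketch of that approach; the table
space name is the sample object, and the AGE value of 30 days is illustrative.
MODIFY RECOVERY TABLESPACE DSN8D91A.DSN8S91D
       DELETE AGE(30)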
| If you run REORG on a catalog or directory table space, the catalog or directory
| table space remains in basic row format.
| Notes:
| 1. The table space is set to ICOPY-pending status if the records are discarded, and to
| no pending status if the records are not discarded.
|
|
Sample REORG TABLESPACE control statements
Example 1: Reorganizing a table space. The following control statement specifies
that the REORG TABLESPACE utility is to reorganize table space DSN8S91D in
database DSN8D91A.
REORG TABLESPACE DSN8D91A.DSN8S91D
Example 2: Reorganizing a table space and specifying the unload data set. The
control statement in Figure 78 specifies that REORG TABLESPACE is to reorganize
table space DSN8D81A.DSN8S81D. The DD name for the unload data set is UNLD,
as specified by the UNLDDN option.
Figure 78. Example REORG TABLESPACE control statement with the UNLDDN option
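In sketch form, a control statement that matches this description is:
REORG TABLESPACE DSN8D81A.DSN8S81D
      UNLDDN UNLD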
Example 4: Reorganizing a table and using parallel index build. The control
statement in Figure 79 on page 522 specifies that REORG TABLESPACE is to
reorganize table space DSNDB04.DSN8S81D and to use a parallel index build to
rebuild the indexes. The indexes are built in parallel, because more than one index
needs to be built and the job allocates the data sets that DFSORT needs. Note that
you no longer need to specify SORTKEYS; it is the default.
The job allocates the sort work data sets in two groups, which limits the number of
pairs of utility subtasks to two. This example does not require UTPRINnn DD
statements because it uses DSNUPROC to invoke utility processing. DSNUPROC
includes a DD statement that allocates UTPRINT to SYSOUT.
LOG NO specifies that records are not to be logged during the RELOAD phase.
This option puts the table space in COPY-pending status.
Figure 79. Example REORG TABLESPACE control statement with LOG NO option
Example 10: Reorganizing a table space and reporting table space and index
statistics. The following control statement specifies that REORG TABLESPACE is
to reorganize table space DSN8D91A.DSN8S91E. The SORTDATA option indicates
that the data is to be unloaded and sorted in clustering order. This option is the
default and does not need to be specified. The STATISTICS, TABLE, INDEX, and
REPORT YES options indicate that the utility is also to report catalog statistics for
all tables in the table space and for all indexes that are defined on those tables. The
KEYCARD, FREQVAL, NUMCOLS, and COUNT options indicate that DB2 is to
collect 10 frequent values on the first key column of the index. UPDATE NONE
indicates that the catalog tables are not to be updated. This option requires that
REPORT YES also be specified.
REORG TABLESPACE DSN8D91A.DSN8S91E SORTDATA STATISTICS
TABLE
INDEX(ALL) KEYCARD FREQVAL NUMCOLS 1
COUNT 10 REPORT YES UPDATE NONE
Example 11: Determining whether a table space should be reorganized. The control
statement in Figure 80 on page 524 specifies that REORG TABLESPACE is to report
if the OFFPOSLIMIT and INDREFLIMIT values for partition 11 of table space
DBHR5201.TPHR5201 exceed the specified values (11 for OFFPOSLIMIT and 15 for
INDREFLIMIT).
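In sketch form, a control statement that matches this description uses the
REPORTONLY keyword so that the utility reports without reorganizing:
REORG TABLESPACE DBHR5201.TPHR5201 PART 11
      OFFPOSLIMIT 11 INDREFLIMIT 15
      REPORTONLY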
Figure 81. Sample output showing that REORG limits have been met
//******************************************************************
//* COMMENT: UPDATE STATISTICS
//******************************************************************
//STEP1 EXEC DSNUPROC,UID='HUHRU252.REORG1',TIME=1440,
// UTPROC='',
// SYSTEM='DSN'
//SYSREC DD DSN=HUHRU252.REORG1.STEP1.SYSREC,DISP=(MOD,DELETE,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSIN DD *
RUNSTATS TABLESPACE DBHR5201.TPHR5201
UPDATE SPACE
/*
//******************************************************************
//* COMMENT: REORG THE TABLESPACE
//******************************************************************
//STEP2 EXEC DSNUPROC,UID='HUHRU252.REORG1',TIME=1440,
// UTPROC='',
// SYSTEM='DSN'
//SYSREC DD DSN=HUHRU252.REORG1.STEP1.SYSREC,DISP=(MOD,DELETE,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSCOPY1 DD DSN=HUHRU252.REORG1.STEP1.SYSCOPY1,
// DISP=(MOD,CATLG,CATLG),UNIT=SYSDA,
// SPACE=(4000,(20,20),,,ROUND)
//SYSIN DD *
REORG TABLESPACE DBHR5201.TPHR5201
SHRLEVEL CHANGE MAPPINGTABLE MAP1
COPYDDN(SYSCOPY1)
OFFPOSLIMIT 9 INDREFLIMIT 9
/*
On successful completion, DB2 returns output for the REORG TABLESPACE job
that is similar to the output in Figure 83 on page 526.
DSNU348I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=36 FOR INDEX ADMF001.IPHR5201 PART 1
DSNU348I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=5 FOR INDEX ADMF001.IPHR5201 PART 2
...
DSNU349I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=6985 FOR INDEX ADMF001.IUHR5210
DSNU258I DSNURBXD - BUILD PHASE STATISTICS - NUMBER OF INDEXES=5
DSNU259I DSNURBXD - BUILD PHASE COMPLETE, ELAPSED TIME=00:00:18
DSNU386I DSNURLGD - LOG PHASE STATISTICS. NUMBER OF ITERATIONS = 1, NUMBER OF LOG
RECORDS = 194
DSNU385I DSNURLGD- LOG PHASE COMPLETE, ELAPSED TIME = 00:01:10
DSNU400I DSNURBID- COPY PROCESSED FOR TABLESPACE DBHR5201.TPHR5201
NUMBER OF PAGES=1073
AVERAGE PERCENT FREE SPACE PER PAGE = 14.72
PERCENT OF CHANGED PAGES =100.00
ELAPSED TIME=00:01:58
DSNU387I DSNURSWT - SWITCH PHASE COMPLETE, ELAPSED TIME = 00:01:05
DSNU428I DSNURSWT - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBHR5201.TPHR5201
Example 13: Reorganizing a table space after waiting for SQL statements to
complete. The control statement in Figure 84 on page 527 specifies that REORG
TABLESPACE is to reorganize the table space in the REORG_TBSP list, which is
defined in the preceding LISTDEF utility control statement. Before reorganizing the
table space, REORG TABLESPACE is to wait for 30 seconds for SQL statements to
finish adding or changing data. This interval is indicated by the DRAIN_WAIT
option. If the SQL statements do not finish, the utility is to retry up to four times,
as indicated by the RETRY option. The utility is to wait 10 seconds between retries,
as indicated by the RETRY_DELAY option.
The TEMPLATE utility control statements define the data set characteristics for the
data sets that are to be dynamically allocated during the REORG TABLESPACE
job. The OPTIONS utility control statement indicates that the TEMPLATE
statements and LISTDEF statement are to run in PREVIEW mode.
Figure 84. Example of reorganizing a table space by using DRAIN WAIT, RETRY, and
RETRY_DELAY
Figure 85. Sample output of REORG TABLESPACE job with DRAIN WAIT, RETRY, and RETRY_DELAY options (Part
1 of 2)
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5706
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5705
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.21.292235
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.22.288665
DSNU393I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IPHR5701 PART 11
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IPHR5701
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5704
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPI - SYSINDEXSTATS CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPD - SYSCOLDISTSTATS CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPC - SYSCOLSTATS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.20.886803
DSNU391I DSNURPTB - SORTBLD PHASE STATISTICS. NUMBER OF INDEXES = 7
DSNU392I DSNURPTB - SORTBLD PHASE COMPLETE, ELAPSED TIME = 00:00:04
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
...
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
DSNU1122I = DSNURLOG - JOB T3161108 PERFORMING REORG
WITH UTILID HUHRU257.REORG UNABLE TO DRAIN DBHR5701.TPHR5701.
RETRY 1 OF 4 WILL BE ATTEMPTED IN 10 SECONDS
DSNU1122I = DSNURLOG - JOB T3161108 PERFORMING REORG
WITH UTILID HUHRU257.REORG UNABLE TO DRAIN DBHR5701.TPHR5701.
RETRY 2 OF 4 WILL BE ATTEMPTED IN 10 SECONDS
DSNU386I DSNURLGD - LOG PHASE STATISTICS. NUMBER OF ITERATIONS = 32, NUMBER OF LOG RECORDS = 2288
DSNU385I DSNURLGD - LOG PHASE COMPLETE, ELAPSED TIME = 00:03:43
DSNU400I DSNURBID - COPY PROCESSED FOR TABLESPACE DBHR5701.TPHR5701
NUMBER OF PAGES=377
AVERAGE PERCENT FREE SPACE PER PAGE = 5.42
PERCENT OF CHANGED PAGES =100.00
ELAPSED TIME=00:04:02
DSNU387I DSNURSWT - SWITCH PHASE COMPLETE, ELAPSED TIME = 00:00:02
DSNU428I DSNURSWT - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBHR5701.TPHR5701
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
Figure 85. Sample output of REORG TABLESPACE job with DRAIN WAIT, RETRY, and RETRY_DELAY options (Part
2 of 2)
Example 14: Using a mapping table: In the example in Figure 86 on page 530, a
mapping table and mapping table index are created. Then, a REORG TABLESPACE
job uses the mapping table, and finally the mapping table is dropped. Some parts
of this job use the EXEC SQL utility to execute dynamic SQL statements.
The first EXEC SQL control statement contains the SQL statements that create a
mapping table that is named MYMAPPING_TABLE. The second EXEC SQL control
statement contains the SQL statements that create mapping index
MYMAPPING_INDEX on the table MYMAPPING_TABLE. For more information
about the CREATE TABLE and CREATE INDEX statements, see DB2 SQL Reference.
The REORG TABLESPACE control statement then specifies that the REORG
TABLESPACE utility is to reorganize table space DSN8D81P.DSN8S81C and to use
mapping table MYMAPPING_TABLE.
Finally, the third EXEC SQL statement contains the SQL statements that drop
MYMAPPING_TABLE. For more information about the DROP TABLE statement,
see DB2 SQL Reference.
EXEC SQL
CREATE TABLE MYMAPPING_TABLE
(TYPE CHAR( 01 ) NOT NULL,
SOURCE_RID CHAR( 05 ) NOT NULL,
TARGET_XRID CHAR( 09 ) NOT NULL,
LRSN CHAR( 06 ) NOT NULL)
IN DSN8D81P.DSN8S81Q
CCSID EBCDIC
ENDEXEC
EXEC SQL
CREATE UNIQUE INDEX MYMAPPING_INDEX
ON MYMAPPING_TABLE
(SOURCE_RID ASC,
TYPE,
TARGET_XRID,
LRSN)
USING STOGROUP DSN8G710
PRIQTY 120 SECQTY 20
ERASE NO
BUFFERPOOL BP0
CLOSE NO
ENDEXEC
REORG TABLESPACE DSN8D81P.DSN8S81C
COPYDDN(COPYDDN)
SHRLEVEL CHANGE
DEADLINE CURRENT_TIMESTAMP+8 HOURS
MAPPINGTABLE MYMAPPING_TABLE
MAXRO 240 LONGLOG DRAIN DELAY 900
SORTDEVT SYSDA SORTNUM 4
STATISTICS TABLE(ALL)
INDEX(ALL)
EXEC SQL
DROP TABLE MYMAPPING_TABLE
ENDEXEC
Example 15: Discarding records from one table while reorganizing a table space:
The control statement in Figure 87 on page 531 specifies that REORG TABLESPACE
is to reorganize table space DSN8D51A.DSN8S51E. During reorganization, records
in table DSN8510.EMP are discarded if they have the value D11 in the
WORKDEPT field. These discard criteria are specified in the WHEN clause that
follows the FROM TABLE specification.
The COPYDDN option specifies that during the REORG, DB2 is also to take an
inline copy of the table space. This image copy is to be written to the data set that
is identified by the SYSCOPY DD statement.
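In sketch form, a control statement that matches this description is shown below;
the WHEN clause carries the discard criteria, and the SYSCOPY DD name must be
defined in the job JCL:
REORG TABLESPACE DSN8D51A.DSN8S51E
      COPYDDN(SYSCOPY)
      DISCARD FROM TABLE DSN8510.EMP
      WHEN (WORKDEPT = 'D11')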
Example 16: Discarding records from multiple tables while reorganizing a table
space: The control statement in Figure 88 on page 532 specifies that REORG
TABLESPACE is to reorganize table space DBKC0501.TLKC0501. During
reorganization, the following records are discarded:
v Records in table TBKC0501 that have a value in the QT_INV_TRANSACTION
column that is less than or equal to 700, and a value in the NO_DEPT column
that is equal to X'33303230'.
v Records in table TBKC0502 that have a value in the NO_WORK_CENTER
column that is equal to either X'333031303120' or X'333032303620'.
These discard criteria are specified with the DISCARD option. Any discarded rows
are to be written to the SYSDISC data set, as specified by the DISCARDDN option.
Figure 88. Example REORG statement that specifies discard criteria for several tables
| Example 18: Reorganizing only clone tables. The REORG TABLESPACE control
| statement indicates that REORG TABLESPACE is to reorganize only clone tables
| from the specified table spaces.
| REORG TABLESPACE DBKQBS01.TPKQBS01 CLONE
You use REPAIR to replace invalid data with valid data. Be extremely careful when
using REPAIR. Improper use can damage the data even further.
Output: The output from the REPAIR utility can consist of one or more modified
pages in the specified DB2 table space or index and a dump of the contents.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v REPAIR privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute REPAIR, but only on a
table space in the DSNDB01 or DSNDB06 database.
To execute REPAIR with the DBD option, you must use a privilege set that
includes SYSADM, SYSCTRL, or installation SYSOPR authority.
REPAIR should be used only by a person that is knowledgeable in DB2 and your
data. Grant REPAIR authorization only to the appropriate people.
| REPAIR
CLONE
OBJECT LOG YES
set statement
LOG NO locate block
dbd-statement
level-id statement
versions statement
level-id statement:
versions statement:
index-name-spec:
INDEX index-name
creator-id.
INDEXSPACE index-space-name
database-name.
INDEX
Specifies the index whose level identifier is to be reset (if you specify
LEVELID) or whose version identifier is to be updated (if you specify
VERSIONS).
creator-id.
Specifies the creator of the index. Specifying this qualifier is optional.
index-name
Specifies the name of the index. Enclose the index name in quotation
marks if the name contains a blank.
| contain indexes on clone tables. If you specify CLONE, you cannot specify
| VERSIONS because clone tables do not have versions. Clones cannot be
| created for tables with active versions.
| If you specify SET with CLONE, the status is changed for only the specified
| table spaces and their indexes. The CLONE keyword applies to all SET
| statements and LOCATE statements within the same REPAIR utility control
| statement.
table-space-spec:
TABLESPACE table-space-name
database-name.
SET INDEX
Specifies the index whose RECOVER-pending, CHECK-pending,
REBUILD-pending, or informational COPY-pending status is to be reset.
(index-name)
Specifies the index that is to be processed. Enclose the index name
in quotation marks if the name contains a blank.
(ALL) Specifies that all indexes in the table space will be processed.
NOAUXCHKP
Specifies that the auxiliary CHECK-pending (ACHKP) status of the
specified table space is to be reset. The specified table space must be a base
table space.
NOAREORPENDSTAR
Resets the advisory REORG-pending (AREO*) status of the specified table
space or index.
In any LOCATE block, you can use VERIFY, REPLACE, or DUMP as often as you
like; you can use DELETE only once.
LOCATE
|
table-space-spec table-options spec verify statement
INDEX index-name index-options-spec replace statement SHRLEVEL CHANGE
INDEXSPACE index-space-name index-options-spec delete statement
dump statement
table-space-spec ROWID X'byte-string' VERSION X'byte-string' delete statement
dump statement
table-space-spec:
TABLESPACE table-space-name
database-name.
table-options-spec:
PAGE X'byte-string'
PAGE integer
PART integer
RID X'byte-string'
KEY literal INDEX index-name
index-options-spec:
PAGE X'byte-string'
PAGE integer
PART integer
One LOCATE statement is required for each unit of data that is to be repaired.
Several LOCATE statements can appear after each REPAIR statement.
KEY literal
Specifies that the data that is to be located is a single row, identified by literal.
The specified offsets in subsequent statements are relative to the beginning of
the row. The first byte of the stored row prefix is at offset 0.
literal is any SQL constant that can be compared with the key values of the
named index.
Character constants that are specified within the LOCATE KEY option cannot
be specified as ASCII or Unicode character strings. No conversion of the values
is performed. To use this option when the table space is ASCII or Unicode, you
should specify the values as hexadecimal constants.
If more than one row has the value literal in the key column, REPAIR returns a
list of record identifiers (RIDs) for records with that key value, but does not
perform any other operations (verify, replace, delete, or dump) until the next
LOCATE TABLESPACE statement is encountered. To repair the proper data,
write a LOCATE TABLESPACE statement that selects the desired row, using
the RID option, the PAGE option, or a different KEY and INDEX option. Then
execute REPAIR again.
| SHRLEVEL
| Indicates the type of access that is to be allowed for the index, table space, or
| partition that is to be repaired during REPAIR processing.
| If you do not specify SHRLEVEL and you do specify DUMP or VERIFY,
| applications can read but not write the area.
| If you do not specify SHRLEVEL and you do specify DELETE or REPLACE,
| applications cannot read or write the area.
| CHANGE
| Specifies that applications can read and write during the VERIFY,
| REPLACE, DELETE, and DUMP operation.
ROWID X'byte-string'
Specifies that the data that is to be located is a LOB in a LOB table space.
byte-string is the row ID that identifies the LOB column.
Use the ROWID keyword to repair an orphaned LOB row. You can find the
ROWID in the output from the CHECK LOB utility. If you specify the ROWID
keyword, the specified table space must be a LOB table space.
VERSION X'byte-string'
Specifies that the data that is to be located is a LOB in a LOB table space.
byte-string is the version number that identifies the version of the LOB column.
Use the VERSION keyword to repair an orphaned LOB column. You can find
the VERSION number in the output of the CHECK LOB utility or an
out-of-synch LOB that is reported by the CHECK DATA utility. If you specify
the VERSION keyword, the specified table space must be a LOB table space.
One LOCATE statement is required for each unit of data that is to be repaired.
Multiple LOCATE statements can appear after each REPAIR statement.
OFFSET 0
VERIFY DATA X'byte-string'
OFFSET integer 'character-string'
X'byte-string'
REPLACE RESET
OFFSET 0
DATA X'byte-string'
OFFSET integer 'character-string'
X'byte-string'
before you can access the page. Numbers of pages with inconsistent data are
reported at the time that they are encountered.
The option also resets the PGCOMB flag bit in the first byte of the page to
agree with the bit code in the last byte of the page.
OFFSET
Indicates, as a relative byte address (RBA) within the row or page, where the
data that is to be replaced begins. Only one OFFSET and one DATA
specification are acted on for each REPLACE statement.
integer Specifies the offset as an integer. The default is 0, the first byte of the
area that is identified by the previous LOCATE statement.
X'byte-string'
Specifies the offset as one to four hexadecimal characters. You do not
need to enter leading zeros. Enclose the byte string between
apostrophes, and precede it with X.
DATA
Specifies the new data that is to be entered. Only one OFFSET and one DATA
specification are acted on for each REPLACE statement.
Character constants that are specified within the VERIFY DATA option cannot
be specified as ASCII or Unicode character strings. No conversion of the values
is performed. To use this option when the table space is ASCII or Unicode, you
should specify the values as hexadecimal constants.
X'byte-string'
Specifies an even number, from two to thirty-two, of hexadecimal
characters that are to replace the current data. You do not need to enter
leading zeros. Enclose the byte string between apostrophes, and
precede it with X.
'character-string'
Specifies any character string that is to replace the current data.
The DELETE statement operates without regard for referential constraints. If you
delete a parent row, its dependent rows remain unchanged in the table space.
However, in the DB2 catalog and directory table spaces, where links are used to
reference other tables in the catalog, deleting a parent row causes all child rows to
be deleted, as well. Moreover, deleting a child row in the DB2 catalog tables also
updates its predecessor and successor pointer to reflect the deletion of this row.
Therefore, if the child row has incorrect pointers, the DELETE might lead to an
unexpected result. See “Example 5: Repairing a table space with an orphan row”
on page 557 for a possible method of deleting a child row without updating its
predecessor and successor pointer.
In any LOCATE block, you can include no more than one DELETE option.
If you have coded any of the following options, you cannot use DELETE:
v The LOG NO option on the REPAIR statement
v A LOCATE INDEX statement to begin the LOCATE block
v The PAGE option on the LOCATE TABLESPACE statement in the same LOCATE
block
v A REPLACE statement for the same row of data
When you specify LOCATE ROWID for a LOB table space, the LOB that is
specified by ROWID is deleted with its index entry. All pages that are occupied by
the LOB are converted to free space. The DELETE statement does not remove any
reference to the deleted LOB from the base table space.
DELETE
When you specify LOCATE ROWID for a LOB table space, one or more map or
data pages of the LOB are dumped. The DUMP statement dumps all of the LOB
column pages if you do not specify either the MAP or DATA keyword.
OFFSET 0
DUMP
OFFSET integer LENGTH X'byte-string' PAGES X'byte-string'
X'byte-string' integer integer
*
MAP
pages
DATA
pages
If you specify a number of bytes (with LENGTH) and a number of pages (with
PAGES), the dump contains the same relative bytes from each page. That is,
from each page you see the same number of bytes, beginning at the same
offset.
X'byte-string'
Specifies one to four hexadecimal characters. You do not need to enter
leading zeros. Enclose the byte string between apostrophes, and
precede it with X.
integer Specifies the length as an integer.
PAGES
Optionally, specifies a number of pages that are to be dumped. You can use
this option only if you used PAGE in the preceding LOCATE TABLESPACE
control statement.
X'byte-string'
Specifies one to four hexadecimal characters. You do not need to enter
leading zeros. Enclose the byte string between apostrophes, and
precede it with X.
integer Specifies the number of pages as an integer.
* Specifies that all pages from the starting point to the end of the table
space or partition are to be dumped.
MAP pages
Specifies that only the LOB map pages are to be dumped.
pages specifies the number of LOB map pages that are to be dumped. If you do
not specify pages, all LOB map pages of the LOB that is specified by ROWID
and version are dumped.
DATA pages
Specifies that only the LOB data pages are to be dumped.
pages specifies the number of LOB data pages that are to be dumped. If you do
not specify pages, all LOB data pages of the LOB that is specified by ROWID
and version are dumped.
The REPAIR utility assumes that the links in table spaces DSNDB01.DBD01,
DSNDB06.SYSDBAUT, and DSNDB06.SYSDBASE are intact. Before executing
REPAIR with the DBD statement, run the DSN1CHKR utility on these table spaces
to ensure that the links are not broken. For more information about DSN1CHKR,
see Chapter 38, “DSN1CHKR,” on page 767.
The database on which REPAIR DBD is run must be started for access by utilities
only. For more information about using the DBD statement, see “Using the DBD
statement” on page 551.
You can use REPAIR DBD on declared temporary tables, which must be created in
a database that is defined with the AS TEMP clause. No other DB2 utilities can be
used on a declared temporary table, its indexes, or its table spaces.
compares it with the DBD in the DB2 directory. In addition, DB2 reports any
differences between the two DBDs, and produces hexadecimal dumps of the
inconsistent DBDs.
If the condition code is 0, the information in the DB2 catalog and the DBD in
the DB2 directory is consistent.
If the condition code is 8, the information in the DB2 catalog and the DBD in
the DB2 directory might be inconsistent.
For further assistance in resolving any inconsistencies, you can contact IBM
Software Support.
REBUILD
Specifies that the DBD that is associated with the specified database is to be
rebuilt from the information in the DB2 catalog.
Attention: Use the REBUILD option with extreme care, as you can cause more
damage to your data. For more assistance, you can contact IBM Software
Support.
OUTDDN ddname
Specifies the DD statement for an optional output data set. This data set
contains copies of the DB2 catalog records that are used to rebuild the DBD.
ddname is the name of the DD statement.
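In sketch form, and using the database name from Example 3 later in this chapter
(SYSREC is an illustrative DD name that your JCL must define):
REPAIR DBD REBUILD DATABASE DSN8D2AP
       OUTDDN SYSREC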
Attention: Be extremely careful when using the REPAIR utility to replace data.
Changing data to invalid values by using REPLACE might produce unpredictable
results, particularly when changing page header information. Improper use of
REPAIR can result in damaged data, or in some cases, system failure.
The following objects are named in the utility control statement and do not require
a DD statement in the JCL:
Table space or index
Object that is to be repaired.
Calculating output data set size: Use the following formula to estimate the size of
the output data set:
SPACE = (4096,(n,n))
In this formula, n = the total number of DB2 catalog records that relate to the
database on which REPAIR DBD is being executed.
You can calculate an estimate for n by summing the results of SELECT COUNT(*)
from all of the catalog tables in the SYSDBASE table space, where the name of the
database that is associated with the record matches the database on which REPAIR
DBD is being executed.
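The following queries are a sketch of that calculation; only a few of the catalog
tables that reside in SYSDBASE are shown, and 'MYDB' is an illustrative database
name. Sum the counts to estimate n.
SELECT COUNT(*) FROM SYSIBM.SYSTABLESPACE WHERE DBNAME = 'MYDB';
SELECT COUNT(*) FROM SYSIBM.SYSTABLEPART  WHERE DBNAME = 'MYDB';
SELECT COUNT(*) FROM SYSIBM.SYSTABLES     WHERE DBNAME = 'MYDB';
SELECT COUNT(*) FROM SYSIBM.SYSINDEXES    WHERE DBNAME = 'MYDB';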
To reset the auxiliary warning (AUXW) status for a LOB table space:
1. Update or correct the invalid LOB columns, then
2. Run the CHECK LOB utility with the AUXERROR INVALIDATE option if
invalid LOB columns were corrected.
Consider using the REBUILD INDEX or RECOVER INDEX utility on an index that
is in REBUILD-pending status, rather than running REPAIR SET INDEX
NORBDPEND. RECOVER uses DB2-controlled recovery information, whereas
REPAIR SET INDEX resets the REBUILD-pending status without considering the
recoverability of the index. Recoverability issues include the availability of image
copies, of rows in SYSIBM.SYSCOPY, and of log data sets.
3. If you determine that the page is not really damaged, but merely has the
“inconsistent data” indicator on, reset the indicator by running REPAIR with
the REPLACE RESET control statement.
If the number of active versions is too high, you must reduce the number of
active versions by running REORG on both the source and target objects. Then,
use the COPY utility to take a copy, and run MODIFY RECOVERY to recycle
the version numbers.
5. Run the DSN1COPY utility with the OBIDXLAT option. On the control
statement, specify the proper mapping of table database object identifiers
(OBIDs) for the table space or index from the source to the target subsystem.
6. Run REPAIR VERSIONS on the object on the target subsystem. For table
spaces, the utility updates the following columns:
v OLDEST_VERSION and CURRENT_VERSION in SYSTABLEPART
v VERSION in SYSTABLES
For indexes, the utility updates OLDEST_VERSION and CURRENT_VERSION
in SYSINDEXES. DB2 uses the following formulas to update these columns in
both SYSTABLEPART and SYSINDEXES:
CURRENT_VERSION = MAX(target.CURRENT_VERSION,source.CURRENT_VERSION)
OLDEST_VERSION = MIN(target.OLDEST_VERSION,source.OLDEST_VERSION)
For more information about versions and how they are used by DB2, see Part 2 of
DB2 Administration Guide.
REPAIR cannot be restarted. If you attempt to restart REPAIR, you receive message
DSNU191I, which states that the utility cannot be restarted. You must terminate the
job with the TERM UTILITY command, and rerun REPAIR from the beginning.
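For example, assuming the utility identifier IUIQU1UH that is used in the sample
REPAIR jobs later in this chapter, the command is:
-TERM UTILITY(IUIQU1UH)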
Table 99 shows which claim classes REPAIR drains and any restrictive state that
the utility sets on the target object.
Table 99. Claim classes of REPAIR operations
Action                                       Table space or partition   Index or partition
REPAIR LOCATE KEY DUMP or VERIFY             DW/UTRO                    DW/UTRO
REPAIR LOCATE KEY DELETE or REPLACE          DA/UTUT                    DA/UTUT
REPAIR LOCATE RID DUMP or VERIFY             DW/UTRO                    None
REPAIR LOCATE RID DELETE                     DA/UTUT                    DA/UTUT
REPAIR LOCATE RID REPLACE                    DA/UTUT                    None
REPAIR LOCATE TABLESPACE DUMP or VERIFY      DW/UTRO                    None
REPAIR LOCATE TABLESPACE REPLACE             DA/UTUT                    None
REPAIR LOCATE INDEX PAGE DUMP or VERIFY      None                       DW/UTRO
REPAIR does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Table 100 and Table 101 on page 555 show which utilities can run concurrently
with REPAIR on the same target object. The target object can be a table space, an
index space, or a partition of a table space or index space. If compatibility depends
on particular options of a utility, that information is also shown in the table.
Table 100 shows which utilities can run concurrently with REPAIR LOCATE by
KEY or RID.
Table 100. Utility compatibility with REPAIR, LOCATE by KEY or RID
Utility                                        DUMP or VERIFY   DELETE or REPLACE
CHECK DATA                                     No               No
CHECK INDEX                                    Yes              No
CHECK LOB                                      Yes              No
COPY INDEXSPACE                                Yes              No
COPY TABLESPACE                                Yes              No
DIAGNOSE                                       Yes              Yes
LOAD                                           No               No
MERGECOPY                                      Yes              Yes
MODIFY                                         Yes              Yes
QUIESCE                                        Yes              No
REBUILD INDEX                                  No               No
RECOVER INDEX                                  No               No
RECOVER TABLESPACE                             No               No
REORG INDEX                                    No               No
REORG TABLESPACE UNLOAD CONTINUE or PAUSE      No               No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL       Yes              No
REPAIR DELETE or REPLACE                       No               No
REPAIR DUMP or VERIFY                          Yes              No
REPORT                                         Yes              Yes
RUNSTATS INDEX SHRLEVEL CHANGE                 Yes              Yes
RUNSTATS INDEX SHRLEVEL REFERENCE              Yes              No
RUNSTATS TABLESPACE                            Yes              No
STOSPACE                                       Yes              Yes
UNLOAD                                         Yes              No
Notes:
1. REORG INDEX is compatible with LOCATE by RID, DUMP, VERIFY, or
REPLACE.
2. RECOVER INDEX is compatible with LOCATE by RID, DUMP, or VERIFY.
3. REPAIR LOCATE INDEX PAGE REPLACE is compatible with LOCATE by RID
or REPLACE.
Table 101 shows which utilities can run concurrently with REPAIR LOCATE by
PAGE.
Table 101. Utility compatibility with REPAIR, LOCATE by PAGE
                                      TABLESPACE       TABLESPACE   INDEX DUMP   INDEX
Utility or action                     DUMP or VERIFY   REPLACE      or VERIFY    REPLACE
SQL read                              Yes              No           Yes          No
SQL write                             No               No           No           No
CHECK DATA                            No               No           No           No
CHECK INDEX                           Yes              No           Yes          No
CHECK LOB                             Yes              No           Yes          No
COPY INDEXSPACE                       Yes              Yes          Yes          No
COPY TABLESPACE                       Yes              No           Yes          No
DIAGNOSE                              Yes              Yes          Yes          Yes
LOAD                                  No               No           No           No
MERGECOPY                             Yes              Yes          Yes          Yes
MODIFY                                Yes              Yes          Yes          Yes
QUIESCE                               Yes              No           Yes          No
REBUILD INDEX                         Yes              No           No           N/A
RECOVER INDEX                         Yes              No           No           No
RECOVER TABLESPACE (with no option)   No               No           Yes          Yes
RECOVER TABLESPACE ERROR RANGE        No               No           Yes          Yes
RECOVER TABLESPACE TOCOPY or TORBA    No               No           No           No
REORG INDEX                           Yes              Yes          No           No
REORG TABLESPACE UNLOAD CONTINUE      No               No           No           No
  or PAUSE
Notes:
1. REPAIR LOCATE INDEX PAGE REPLACE is compatible with LOCATE TABLESPACE PAGE.
Error messages: At each LOCATE statement, the last data page and the new page
that are being located are checked for a few common errors, and messages are
issued.
Data checks: Although REPAIR enables you to manipulate both user and DB2 data
by bypassing SQL, it does perform some checking of data. For example, if REPAIR
tries to write a page with the wrong page number, DB2 abnormally terminates
with a 04E code and reason code C200B0. If the page is broken because the broken
page bit is on or the incomplete page flag is set, REPAIR issues the following
message:
DSNU670I + DSNUCBRP - PAGE X'000004' IS A BROKEN PAGE
v Replace the damaged data with the desired data (0D11), as indicated by the
REPLACE clause.
v Initiate a dump beginning at offset 50, for 4 bytes, as indicated by the DUMP
clause. You can use the generated dump to verify the replacement.
//STEP1 EXEC DSNUPROC,UID='IUIQU1UH',UTPROC='',SYSTEM='DSN'
//SYSIN DD *
REPAIR OBJECT
LOCATE TABLESPACE DSN8D91A.DSN8S91D PAGE X'02'
VERIFY OFFSET 50 DATA X'0A00'
REPLACE OFFSET 50 DATA X'0D11'
DUMP OFFSET 50 LENGTH 4
To resolve this error condition, submit the following control statement, which
specifies that REPAIR is to delete the nonindexed row and log the change. (The
LOG keyword is not required; the change is logged by default.) The RID option
identifies the row that REPAIR is to delete.
REPAIR
LOCATE TABLESPACE DSNDB04.TS1 RID (X'0000000503')
DELETE
Example 3: Reporting whether catalog and directory DBDs differ. The following
control statement specifies that REPAIR is to compare the DBD for DSN8D2AP in
the catalog with the DBD for DSN8D2AP in the directory.
REPAIR DBD TEST DATABASE DSN8D2AP
If the condition code is 0, the DBDs are consistent. If the condition code is not 0,
the DBDs might be inconsistent. In this case, run REPAIR DBD with the
DIAGNOSE option, as shown in example 4, to find out more detailed information
about any inconsistencies.
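In sketch form, a DIAGNOSE statement for the same database is shown below; the
OUTDDN specification and its DD name SYSREC are optional and illustrative:
REPAIR DBD DIAGNOSE DATABASE DSN8D2AP
       OUTDDN SYSREC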
From a DSN1PRNT of page X'0000000024' and X'0000002541', you identify that RID
X'0000002420' has a forward pointer of X'0000002521'.
1. Submit the following control statement, which specifies that REPAIR is to set
the orphan’s backward pointer to zeros:
REPAIR OBJECT LOG YES
LOCATE TABLESPACE DSNDB06.SYSDBASE RID X'0000002420'
VERIFY OFFSET X'0A' DATA X'0000002422'
REPLACE OFFSET X'0A' DATA X'0000000000'
Setting the pointer to zeros prevents the next step from updating link pointers
while deleting the orphan. Updating the link pointers can cause DB2 to
abnormally terminate if the orphan’s pointers are incorrect.
2. Submit the following control statement, which deletes the orphan:
REPAIR OBJECT LOG YES
LOCATE TABLESPACE DSNDB06.SYSDBASE RID X'00002420'
VERIFY OFFSET X'06' DATA X'00002521'
DELETE
| Example 8: Repairing a table space with clones. The control statement specifies
| that REPAIR is to reset the auxiliary CHECK-pending (ACHKP) status of the
| specified table space and process only the specified objects that are table spaces
| that contain clone tables, indexes on clone tables, or index spaces that contain
| indexes on clone tables.
| REPAIR
| SET TABLESPACE DBKQDB01.TPKQDB01
| NOAUXCHKP CLONE
Output: The output from REPORT TABLESPACESET consists of the names of all
table spaces in the table space set that you specify. It also lists all tables in the table
spaces and all tables that are dependent on those tables.
The output from REPORT RECOVERY consists of the recovery history from the
SYSIBM.SYSCOPY catalog table, log ranges from the SYSIBM.SYSLGRNX directory
table, and volume serial numbers where archive log data sets from the BSDS
reside. In addition, REPORT RECOVERY output includes information about any
indexes on the table space that are in the informational COPY-pending status
because this information affects the recoverability of an index. For more
information about this situation, see page 138.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v DBADM or DBCTRL authority for the database. If the object on which the utility
operates is in an implicitly created database, DBADM authority on the implicitly
created database or DSNDB04 is required.
v SYSCTRL or SYSADM authority
An ID with DBCTRL or DBADM authority over database DSNDB06 can run the
REPORT utility on any table space in DSNDB01 (the directory) or DSNDB06 (the
catalog), as can any ID with installation SYSOPR, SYSCTRL, or SYSADM authority.
Phase Description
UTILINIT Performs initialization
REPORT Collects information
UTILTERM Performs cleanup
Syntax diagram
REPORT
| INDEX NONE
RECOVERY TABLESPACE LIST listdef-name
table-space-name-spec INDEX ALL info options
index-list-spec
TABLESPACESET table-space-name-spec
TABLESPACE SHOWDSNS
index-list-spec:
INDEXSPACE index-space-name
database-name.
LIST listdef-name
INDEX index-name
creator-id.
LIST listdef-name
info options:
DSNUM ALL
DSNUM integer CURRENT SUMMARY LOCALSITE RECOVERYSITE
ARCHLOG 1
ARCHLOG 2
ALL
table-space-name-spec:
table-space-name
database-name.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
RECOVERY
Indicates that recovery information for the specified table space or index is to
be reported.
TABLESPACE database-name.table-space-name
For REPORT RECOVERY, specifies the table space (and, optionally, the
database to which it belongs) that is being reported.
For REPORT TABLESPACESET, specifies a table space (and, optionally, the
database to which it belongs) in the table space set.
database-name
Optionally specifies the database to which the table space belongs.
table-space-name
Specifies the table space.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The
utility allows one LIST keyword for each control statement of REPORT.
The list must contain only table spaces. Do not specify LIST with the
TABLESPACE...table-space-name specification. The TABLESPACE
keyword is required in order to validate the contents of the list.
REPORT RECOVERY TABLESPACE is invoked once per item in the
list.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 185.
| SHOWDSNS
| Specifies that the VSAM data set names for each table space or index
| space are to be included in the TABLESPACESET report. Data set
| names for base objects are shown in the section titled TABLESPACE
| SET REPORT. Data set names for CLONE objects are shown in the
In this format:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
tsname Is the table space name.
y Is I or J.
nnn Is the data set integer.
CURRENT
Specifies that only the SYSCOPY entries that were written after the last
recovery point of the table space are to be reported. The last recovery point
is the last full image copy, LOAD REPLACE LOG YES image copy, or
REORG LOG YES image copy. If you specify DSNUM ALL, the last
recovery point is a full image copy that was taken for the entire table space
or index space. However, if you specify the CURRENT option, but the last
recovery point does not exist on the active log, DB2 prompts you to mount
archive tapes until this point is found.
CURRENT also reports only the SYSLGRNX rows and archive log volumes
that were created after the last incremental image copy entry. If no
incremental image copies were created, only the SYSLGRNX rows and
archive log volumes that were created after the last recovery point are
reported.
If you do not specify CURRENT or if no last recovery point exists, all
SYSCOPY and SYSLGRNX entries for that table space or index space are
reported, including those on archive logs. If you do not specify CURRENT,
the entries that were written after the last recovery point are marked with
an asterisk (*) in the report.
SUMMARY
Specifies that only a summary of volume serial numbers is to be reported.
It reports the following volume serial numbers:
v Where the archive log data sets from the BSDS reside
v Where the image copy data sets from SYSCOPY reside
If you do not specify SUMMARY, recovery information is reported, in
addition to the summary of volume serial numbers.
LOCALSITE
Specifies that all SYSCOPY records that were copied from a local site
system are to be reported.
RECOVERYSITE
Specifies that all SYSCOPY records that were copied from the recovery site
system are to be reported.
ARCHLOG
Specifies which archive log data sets are to be reported.
1 Reports archive log data set 1 only. The default is 1.
2 Reports archive log data set 2 only.
ALL
Reports both archive log data sets 1 and 2.
TABLESPACESET
Indicates that the names of all table spaces in the table space set, as well as
the names of all indexes on tables in the table space set, are to be reported.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space
Object that is to be reported.
You can also use REPORT to obtain recovery information about the catalog and
directory. When doing so, use the CURRENT option to avoid unnecessary
mounting of archive tapes.
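For example, the following sketch reports only the current recovery information
for one of the catalog table spaces (SYSDBASE is used for illustration):
REPORT RECOVERY TABLESPACE DSNDB06.SYSDBASE CURRENT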
REPORT uses asterisks to denote any non-COPY entries that it finds in the
SYSIBM.SYSCOPY catalog table. For example, an entry that is added by the
QUIESCE utility is marked with asterisks in the REPORT output.
Recommendation: For image copies of partitioned table spaces that are taken with
the DSNUM ALL option, run REPORT RECOVERY DSNUM ALL. If you run
REPORT RECOVERY DSNUM ALL CURRENT, DB2 reports additional historical
information that dates back to the last full image copy that was taken for the entire
table space.
The REPORT RECOVERY utility output indicates whether any image copies are
unusable; image copies that were taken prior to REORG or LOAD events that reset
REORG-pending status are marked as unusable. In the REPORT RECOVERY
output, look at the IC TYPE and STYPE fields to help you determine which image
copies are unusable.
For example, in the sample REPORT RECOVERY output in Figure 92 on page 568,
the value in the first IC TYPE field, *R*, indicates that a LOAD REPLACE LOG
YES operation occurred. The value in the second IC TYPE field, <F>, indicates that
a full image copy was taken.
| DSNU582I ) 271 15:02:09.92 DSNUPPCP - REPORT RECOVERY TABLESPACE DBKQAA01.TPKQAA01 SYSCOPY ROWS
| TIMESTAMP = 2006-09-28-15.00.07.773906, IC TYPE = *C*, SHR LVL = , DSNUM = 0000,
| START LRSN =000037940EEC
| DEV TYPE = , IC BACK = , STYPE = L, FILE SEQ = 0000,
| PIT LRSN = 000000000000
| LOW DSNUM = 0000, HIGH DSNUM = 0000, OLDEST VERSION = 0000, LOGICAL PART = 0000,
| LOGGED = Y, TTYPE =
| JOBNAME = , AUTHID = , COPYPAGESF = -1.0E+00
| NPAGESF = -1.0E+00 , CPAGESF = -1.0E+00
| DSNAME = DBKQAA01.TPKQAA01 , MEMBER NAME = ,
| INSTANCE = 01, RELCREATED = M
|
| TIMESTAMP = 2006-09-28-15.00.36.940517, IC TYPE = *R*, SHR LVL = , DSNUM = 0000,
| START LRSN =000037A07DAC
| DEV TYPE = , IC BACK = , STYPE = , FILE SEQ = 0000,
| PIT LRSN = 000000000000
| LOW DSNUM = 0000, HIGH DSNUM = 0000, OLDEST VERSION = 0000, LOGICAL PART = 0000,
| LOGGED = Y, TTYPE =
| JOBNAME = TJI11004, AUTHID = ADMF001 , COPYPAGESF = -1.0E+00
| NPAGESF = -1.0E+00 , CPAGESF = -1.0E+00
| DSNAME = DBKQAA01.TPKQAA01 , MEMBER NAME = ,
| INSTANCE = 01, RELCREATED = M
Figure 92. Sample REPORT RECOVERY output before table space placed in REORG-pending status
After this image copy was taken, assume that an event occurred that put the table
space in REORG-pending status. Figure 93 shows the next several rows of REPORT
RECOVERY output for the same table space. The value in the first IC TYPE field,
*X*, indicates that a REORG LOG YES event occurred. In the same SYSCOPY
record, the value in the STYPE field, A, indicates that this REORG job reset the
REORG-pending status. Any image copies that are taken before this status was
reset are unusable. (Thus, the full image copy in the REPORT output in Figure 92
is unusable.) The next record contains an F in the IC TYPE field and an X in the
STYPE field, which indicates that a full image copy was taken during the REORG
job. This image copy is usable.
Figure 93. Sample REPORT RECOVERY output after REORG-pending status is reset
For a complete explanation of the SYSCOPY fields, see DB2 SQL Reference.
You can use REPORT TABLESPACESET on the DB2 catalog and directory table
spaces.
You can restart a REPORT utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
REPORT can run concurrently on the same target object with any utility or SQL
operation.
DSNU000I 270 14:18:14.71 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = REP94
DSNU1044I 270 14:18:14.91 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I 270 14:18:14.92 DSNUGUTC - REPORT TABLESPACESET TABLESPACE DSN8D91A.DSN8S91D
DSNU587I ) 270 14:18:14.94 DSNUPSET - REPORT TABLESPACE SET WITH TABLESPACE DSN8D91A.DSN8S91D
TABLESPACE : DSN8D91A.DSN8S91D
TABLE : DSN8910.DEPT
INDEXSPACE : DSN8D91A.XDEPT1
INDEX : DSN8910.XDEPT11
INDEXSPACE : DSN8D91A.XDEPT2
INDEX : DSN8910.XDEPT22
INDEXSPACE : DSN8D91A.XDEPT3
INDEX : DSN8910.XDEPT33
INDEXSPACE : DSN8D91A.IRDOCIDD
INDEX : DSN8910.I_DOCIDDEPT
DEP TABLE : DSN8910.DEPT
DSN8910.EMP
DSN8910.PROJ
TABLESPACE : DSN8D91A.DSN8S91E
TABLE : DSN8910.EMP
INDEXSPACE : DSN8D91A.XEMP1
INDEX : DSN8910.XEMP11
INDEXSPACE : DSN8D91A.XEMP2
INDEX : DSN8910.XEMP22
DEP TABLE : DSN8910.DEPT
DSN8910.EMPPROJACT
DSN8910.PROJ
TABLESPACE : DSN8D91A.DSN8S91P
TABLE : DSN8910.ACT
INDEXSPACE : DSN8D91A.XACT1
INDEX : DSN8910.XACT11
INDEXSPACE : DSN8D91A.XACT2
INDEX : DSN8910.XACT22
DEP TABLE : DSN8910.PROJACT
TABLE : DSN8910.EMPPROJACT
INDEXSPACE : DSN8D91A.XEMPPROJ
INDEX : DSN8910.XEMPPROJACT1
INDEXSPACE : DSN8D91A.XEMP1AQJ
INDEX : DSN8910.XEMPPROJACT2
TABLE : DSN8910.PROJ
INDEXSPACE : DSN8D91A.XPROJ1
INDEX : DSN8910.XPROJ11
INDEXSPACE : DSN8D91A.XPROJ2
INDEX : DSN8910.XPROJ22
DEP TABLE : DSN8910.PROJ
DSN8910.PROJACT
TABLE : DSN8910.PROJACT
INDEXSPACE : DSN8D91A.XPROJAC1
INDEX : DSN8910.XPROJAC11
DEP TABLE : DSN8910.EMPPROJACT
TABLESPACE : DSN8D91A.DSN8S91D
The report contains three sections, which include the following types of
information:
v Recovery history from the SYSIBM.SYSCOPY catalog table.
For a description of the fields in the SYSCOPY rows, see the table that describes
SYSIBM.SYSCOPY in Appendix D of DB2 SQL Reference.
v Log ranges from SYSIBM.SYSLGRNX.
v Volume serial numbers where archive log data sets from the BSDS reside.
If REPORT has no data to display for one or more of these topics, the
corresponding sections of the report contain the following message:
DSNU588I - NO DATA TO BE REPORTED
|
Figure 95. Example of REPORT RECOVERY in a data sharing environment (Part 1 of 3)
| UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
| 100406 11541904 0000374F86EC 00003752195A BF8110CF0ADF BF8110D0EB95 0001 0000
| 100406 11541916 0000374FB0E1 00003752195A BF8110CF26BE BF8110D0EC6F 0002 0000
| 100406 11541929 0000374FDACA 00003752195A BF8110CF4606 BF8110D0ECEE 0003 0000
| 100406 11541940 000037500483 00003752195A BF8110CF6209 BF8110D0ED64 0004 0000
| 100406 11541952 000037502E23 00003752195A BF8110CF7F04 BF8110D0EE47 0005 0000
| 100406 11541964 00003750582E 00003752195A BF8110CF9AFD BF8110D0EED8 0006 0000
| 100406 11541975 0000375081E7 00003752195A BF8110CFB7D6 BF8110D0EF51 0007 0000
| 100406 11541987 00003750AB87 00003752195A BF8110CFD3C4 BF8110D0EFCC 0008 0000
| 100406 11541998 00003750D540 00003752195A BF8110CFEFD4 BF8110D0F052 0009 0000
| 100406 11542010 00003750FEE0 00003752195A BF8110D00D2F BF8110D0F0D6 0010 0000
| 100406 11542022 0000375128A2 00003752195A BF8110D02A7E BF8110D0F157 0011 0000
| 100406 11542035 00003751525B 00003752195A BF8110D04860 BF8110D0F204 0012 0000
| 100406 11542046 000037517BFB 00003752195A BF8110D06558 BF8110D0F350 0013 0000
| 100406 11542059 00003751A674 00003752195A BF8110D083D9 BF8110D0F413 0014 0000
| 100406 11542074 00003751D02D 00003752195A BF8110D0A7D6 BF8110D0F4DF 0015 0000
| 100406 11542087 00003751FA0B 00003752195A BF8110D0C759 BF8110D0F567 0016 0000
| 100406 11542199 00003752B0F9 0000375C2734 BF8110D1D925 BF8110F9EE17 0001 0000
| 100406 11542201 00003752B4C1 0000375C275E BF8110D1DDFD BF8110F9EF2E 0002 0000
| 100406 11542202 00003752B84D 0000375C27D2 BF8110D1E02B BF8110F9EFC8 0003 0000
| 100406 11542202 00003752BBD9 0000375C2846 BF8110D1E252 BF8110F9F050 0004 0000
| 100406 11542203 00003752BF65 0000375C28BA BF8110D1E495 BF8110F9F0DB 0005 0000
| 100406 11542205 00003752C31E 0000375C292E BF8110D1E75F BF8110F9F160 0006 0000
| 100406 11542205 00003752C6AA 0000375C29A2 BF8110D1E9C9 BF8110F9F1E2 0007 0000
| 100406 11542206 00003752CA36 0000375C2A16 BF8110D1EC01 BF8110F9F27D 0008 0000
| 100406 11542207 00003752CDC2 0000375C2A8A BF8110D1EE6B BF8110F9F2FF 0009 0000
| 100406 11542209 00003752D1A4 0000375C2AFE BF8110D1F14C BF8110F9F390 0010 0000
| 100406 11542210 00003752D530 0000375C2B72 BF8110D1F3C8 BF8110F9F469 0011 0000
| 100406 11542211 00003752D8BC 0000375C2BE6 BF8110D1F65D BF8110F9F4ED 0012 0000
| 100406 11542212 00003752DC48 0000375C2C5A BF8110D1F8B9 BF8110F9F58E 0013 0000
| 100406 11542213 00003752E000 0000375C2CCE BF8110D1FB35 BF8110F9F64A 0014 0000
| 100406 11542214 00003752E38C 0000375C2D42 BF8110D1FE1E BF8110F9F6DF 0015 0000
| 100406 11542215 00003752E718 0000375C2DB6 BF8110D20107 BF8110F9F7A1 0016 0000
| 100406 11555014 000037641512 000037666079 BF811125EB99 BF8111266663 0001 0000
| 100406 11555015 0000376434E7 0000376661A7 BF811125EDEB BF8111266709 0002 0000
| 100406 11555017 00003764A0F2 000037666303 BF811125F276 BF8111266796 0003 0000
| 100406 11555022 00003764C7F9 00003766645F BF811125FD5C BF811126682C 0004 0000
| 100406 11555025 00003764E702 0000376665BB BF8111260503 BF81112668A9 0005 0000
| 100406 11555027 00003765060B 000037666717 BF81112609DA BF8111266922 0006 0000
| 100406 11555028 000037652514 000037666873 BF8111260DA8 BF81112669F5 0007 0000
| 100406 11555031 00003765441D 0000376669CF BF8111261384 BF8111266A77 0008 0000
| 100406 11555032 000037656326 000037666B2B BF81112616B4 BF8111266B08 0009 0000
| 100406 11555033 00003765822F 000037666C87 BF81112619C9 BF8111266BA3 0010 0000
| 100406 11555035 00003765A138 000037666DE3 BF8111261D27 BF8111266C63 0011 0000
| 100406 11555036 00003765C041 000037666F3F BF811126207C BF8111266CE5 0012 0000
| 100406 11555037 00003765E033 0000376670E8 BF8111262398 BF8111266D5F 0013 0000
| 100406 11555039 000037660033 000037667244 BF811126281F BF8111266DD6 0014 0000
| 100406 11555041 000037662033 0000376673A0 BF8111262BA7 BF8111266E50 0015 0000
| 100406 11555042 000037664033 0000376674FC BF8111262F1D BF8111266F25 0016 0000
| 100406 11555264 00003767877D 0000376FB01E BF8111284DEB BF8111357B9B 0001 0000
| 100406 11555266 00003767C8C1 0000376FB452 BF8111285307 BF811135809D 0002 0000
| 100406 11555270 000037682610 0000376FB6C6 BF8111285B65 BF811135868A 0003 0000
| 100406 11555273 0000376856B9 0000376FB93A BF8111286213 BF8111358C26 0004 0000
| 100406 11555276 00003768D63D 0000376FBBAE BF8111286AB6 BF8111359150 0005 0000
| 100406 11555279 0000376936AB 0000376FBE22 BF81112870D0 BF811135970F 0006 0000
| 100406 11555282 00003769A5DE 0000376FC10C BF8111287812 BF8111359D87 0007 0000
| 100406 11555285 00003769F6C1 0000376FC380 BF811128800C BF811135A49D 0008 0000
| 100406 11555287 0000376A3819 0000376FC5F4 BF8111288438 BF811135AA33 0009 0000
|
|
Figure 95. Example of REPORT RECOVERY in a data sharing environment (Part 2 of 3)
Figure 96 on page 575 shows sample output for the statement REPORT
RECOVERY TABLESPACE ARCHLOG. Under message DSNU584I, the archive log
entries after the last recovery point are marked with an asterisk (*). If you code the
CURRENT option, the output from message DSNU584I would include only the
archive logs after the last recovery point and the asterisk (*) would not be included
in the report.
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DB580501.TS580501
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
091702 10025977 00001E4FD319 00001E4FEB91 00001E4FD319 00001E4FEB91 0000 0000 *
091702 10030124 00001E505B93 00001E58BC23 00001E505B93 00001E58BC23 0000 0000 *
091702 10032302 00001E59A637 00001E5A5258 00001E59A637 00001E5A5258 0000 0000 *
091702 10035391 00001E5B26AB 00001E6222F3 00001E5B26AB 00001E6222F3 0000 0000 *
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DB580501.TS580501
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
091702 10025977 00001E4FD319 00001E4FEB91 00001E4FD319 00001E4FEB91 0000 0000
091702 10030124 00001E505B93 00001E58BC23 00001E505B93 00001E58BC23 0000 0000
091702 10032302 00001E59A637 00001E5A5258 00001E59A637 00001E5A5258 0000 0000
091702 10035391 00001E5B26AB 00001E6222F3 00001E5B26AB 00001E6222F3 0000 0000
The preceding statement produces output similar to the output shown in Figure 97.
DSNU000I 270 13:00:51.35 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = REP97
DSNU1044I 270 13:00:51.58 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I 270 13:00:51.60 DSNUGUTC - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E
DSNU581I ) 270 13:00:51.60 DSNUPREC - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E
DSNU593I ) 270 13:00:51.61 DSNUPREC - REPORT RECOVERY ENVIRONMENT RECORD:
MINIMUM RBA: 000000000000
MAXIMUM RBA: FFFFFFFFFFFF
MIGRATING RBA: 000000000000
DSNU582I ) 270 13:00:51.61 DSNUPPCP - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E SYSCOPY ROWS
TIMESTAMP = 2006-09-27-11.40.56.074739, IC TYPE = *C*, SHR LVL = , DSNUM = 0000,
START LRSN =00003697A903
DEV TYPE = , IC BACK = , STYPE = L, FILE SEQ = 0000,
PIT LRSN = 000000000000
LOW DSNUM = 0000, HIGH DSNUM = 0000, OLDEST VERSION = 0000, LOGICAL PART = 0000,
LOGGED = Y, TTYPE =
JOBNAME = , AUTHID = , COPYPAGESF = -1.0E+00
NPAGESF = -1.0E+00 , CPAGESF = -1.0E+00
DSNAME = DSN8D91A.DSN8S91E , MEMBER NAME = ,
INSTANCE = 01, RELCREATED = M
. . .
DSNU583I ) 270 13:00:51.61 DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DSN8D91A.DSN8S91E
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
092706 11405634 00003697B82E 0000369855C3 BF7840C34BF3 BF7840C44D81 0001 0000
092706 11405670 00003697E223 0000369855C3 BF7840C3A2F9 BF7840C44E27 0002 0000
092706 11405707 000036980BC3 0000369855C3 BF7840C3FF60 BF7840C44E92 0003 0000
092706 11405732 000036983674 0000369855C3 BF7840C43C57 BF7840C44F03 0004 0000
092706 11410155 0000369E31B6 000036ADE99C BF7840C8436A BF7840D832E3 0001 0000
092706 11410156 0000369E3ABB 000036A03DB6 BF7840C84546 BF7840D83495 0002 0000
092706 11410156 0000369E3E51 000036A0E15C BF7840C84683 BF7840D8359B 0003 0000
092706 11410159 0000369E4224 000036A5F932 BF7840C84CAA BF7840D83704 0004 0000
092706 11413835 000036C98000 000036D0B672 BF7840EB5CF9 BF7840EBF7A3 0001 0000
092706 11413845 000036CA937C 000036D0B9B6 BF7840EB7562 BF7840EC0150 0002 0000
092706 11413861 000036CC1F1B 000036D0BC2A BF7840EB9B43 BF7840EC0983 0004 0000
092706 11422002 000036FC9A0B 000036FCBA50 BF7841131913 BF7841131F84 0003 0000
092706 11422074 000036FCEB37 000036FD2000 BF784113C93E BF784113E333 0003 0000
092706 11422688 00003701A7B0 000037029A20 BF784119A438 BF78411B9857 0003 0000
092706 11423828 000037091000 0000370930BF BF784124848C BF7841248A06 0005 0000
092706 11424418 0000370DC5B7 0000370E625D BF78412A23C8 BF78412A5DC6 0001 0000
092706 11424419 0000370DE4FC 0000370E63B9 BF78412A2786 BF78412A6101 0002 0000
092706 11424421 0000370E0405 0000370E6515 BF78412A2A82 BF78412A6191 0003 0000
092706 11424427 0000370E230E 0000370E6671 BF78412A39CD BF78412A6210 0004 0000
092706 11424428 0000370E4254 0000370E74C2 BF78412A3CFD BF78412A630C 0005 0000
092706 11424782 0000370F3DF8 0000371086F8 BF78412D9C67 BF78412DFDE7 0001 0000
092706 11424787 0000370F41BA 0000371089A8 BF78412DA8F9 BF78412E02FB 0002 0000
092706 11424791 0000370F44E6 000037108C1C BF78412DB256 BF78412E0B57 0003 0000
092706 11424794 0000370F4812 000037108E90 BF78412DBAC1 BF78412E106B 0004 0000
092706 11424798 0000370F4B3E 00003710919C BF78412DC398 BF78412E14AE 0005 0000
092706 11424871 000037111E5F 00003711222E BF78412E7581 BF78412E7A75 0001 0000
092706 11424880 000037112516 00003711287E BF78412E8CD5 BF78412E910F 0002 0000
092706 11424886 000037112B66 000037112ECE BF78412E9A46 BF78412E9EF3 0003 0000
092706 11424893 0000371131D0 000037113538 BF78412EAAFB BF78412EAF6F 0004 0000
092706 11424898 000037113820 000037113B88 BF78412EB8A5 BF78412EC1C4 0005 0000
DSNU584I ) 270 13:00:51.61 DSNUPPBS - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E ARCHLOG1 BSDS VOLUMES
DSNU588I ) 270 13:00:51.61 DSNUPPBS - NO DATA TO BE REPORTED
Example 2: Reporting table spaces with LOB columns. The following control
statement specifies that REPORT is to provide a list of all table spaces related to
table space DSN8D91L.DSN8S91B, which contains a table with three LOB
columns. The output includes a separate section titled LOB TABLESPACE SET
REPORT, which lists the related LOB table spaces and their tables, indexes, and
index spaces. The base table and column to which each LOB object is related are
also shown.
REPORT TABLESPACESET TABLESPACE DSN8D91L.DSN8S91B
The preceding statement produces output similar to the output shown in Figure 98.
DSNU000I 277 11:19:09.40 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = REP98
DSNU1044I 277 11:19:09.59 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I 277 11:19:09.59 DSNUGUTC - REPORT TABLESPACESET TABLESPACE DSN8D91L.DSN8S91B
DSNU587I ) 277 11:19:09.62 DSNUPSET - REPORT TABLESPACE SET WITH TABLESPACE DSN8D91L.DSN8S91B
TABLESPACE : DSN8D91L.DSN8S91B
TABLE : DSN8910.EMP_PHOTO_RESUME
INDEXSPACE : DSN8D91L.XEMPRPHO
INDEX : DSN8910.XEMP_PHOTO_RESUME
TABLESPACE : DSN8D91L.DSN8S91B
The preceding statement produces output similar to the output shown in Figure 99.
DSNU000I 271 18:15:27.26 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = REP99
DSNU1044I 271 18:15:27.55 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I 271 18:15:27.55 DSNUGUTC - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E DSNUM 4
DSNU581I ) 271 18:15:27.62 DSNUPREC - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E
DSNU593I ) 271 18:15:27.66 DSNUPREC - REPORT RECOVERY ENVIRONMENT RECORD:
MINIMUM RBA: 000000000000
MAXIMUM RBA: FFFFFFFFFFFF
MIGRATING RBA: 000000000000
DSNU582I ) 271 18:15:27.66 DSNUPPCP - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E SYSCOPY ROWS
TIMESTAMP = 2006-09-27-11.40.56.074739, IC TYPE = *C*, SHR LVL = , DSNUM = 0000,
START LRSN =00003697A903
DEV TYPE = , IC BACK = , STYPE = L, FILE SEQ = 0000,
LOW DSNUM = 0001, HIGH DSNUM = 0005, OLDEST VERSION = 0000, LOGICAL PART = 0000,
LOGGED = Y, TTYPE =
JOBNAME = DSNTEJ1 , AUTHID = SYSADM , COPYPAGESF = 2.0E+01
NPAGESF = 1.6E+01 , CPAGESF = 1.6E+01
DSNAME = DB2V91A.DSN8D91A.DSN8S91E.REORGCPY , MEMBER NAME = ,
INSTANCE = 01, RELCREATED = M
DSNU583I ) 271 18:15:27.66 DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DSN8D91A.DSN8S91E
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
092706 11405732 000036983674 0000369855C3 BF7840C43C57 BF7840C44F03 0004 0000
092706 11410159 0000369E4224 000036A5F932 BF7840C84CAA BF7840D83704 0004 0000
092706 11413861 000036CC1F1B 000036D0BC2A BF7840EB9B43 BF7840EC0983 0004 0000
092706 11424427 0000370E230E 0000370E6671 BF78412A39CD BF78412A6210 0004 0000
092706 11424794 0000370F4812 000037108E90 BF78412DBAC1 BF78412E106B 0004 0000
092706 11424893 0000371131D0 000037113538 BF78412EAAFB BF78412EAF6F 0004 0000
DSNU584I ) 271 18:15:27.66 DSNUPPBS - REPORT RECOVERY TABLESPACE DSN8D91A.DSN8S91E ARCHLOG1 BSDS VOLUMES
DSNU588I ) 271 18:15:27.66 DSNUPPBS - NO DATA TO BE REPORTED
DSNU000I 270 13:51:08.82 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = REP101
DSNU1044I 270 13:51:09.04 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I 270 13:51:09.04 DSNUGUTC - REPORT RECOVERY INDEX DSN8910.XDEPT1
DSNU581I ) 270 13:51:09.05 DSNUPREC - REPORT RECOVERY INDEX DSN8910.XDEPT1
DSNU593I ) 270 13:51:09.05 DSNUPREC - REPORT RECOVERY ENVIRONMENT RECORD:
MINIMUM RBA: 000000000000
MAXIMUM RBA: FFFFFFFFFFFF
MIGRATING RBA: 000000000000
DSNU582I ) 270 13:51:09.05 DSNUPPCP - REPORT RECOVERY INDEX DSN8910.XDEPT1 SYSCOPY ROWS
TIMESTAMP = 2006-09-27-13.50.30.627880, IC TYPE = F , SHR LVL = R, DSNUM = 0000,
START LRSN =00003726ADE3
DEV TYPE = 3390 , IC BACK = , STYPE = , FILE SEQ = 0000,
PIT LRSN = 000000000000
LOW DSNUM = 0001, HIGH DSNUM = 0001, OLDEST VERSION = 0000, LOGICAL PART = 0000,
LOGGED = Y, TTYPE =
JOBNAME = REP101 , AUTHID = SYSADM , COPYPAGESF = 5.0E+00
NPAGESF = 5.0E+00 , CPAGESF = 0.0E0
DSNAME = DSN8D91A.XDEPT1.D2006270.T205030 , MEMBER NAME = ,
INSTANCE = 01, RELCREATED = M
DSNU583I ) 270 13:51:09.05 DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR INDEX DSN8910.XDEPT1
DSNU588I ) 270 13:51:09.05 DSNUPPLR - NO DATA TO BE REPORTED
DSNU584I ) 270 13:51:09.05 DSNUPPBS - REPORT RECOVERY INDEX DSN8910.XDEPT1 ARCHLOG1 BSDS VOLUMES
DSNU588I ) 270 13:51:09.05 DSNUPPBS - NO DATA TO BE REPORTED
| Example 5: Reporting table space set information with XML columns. The
| following control statement specifies that REPORT TABLESPACESET is to show all
| database objects related to the base table space. In this example, the base table
| includes two XML columns. The report shows the database objects that were
| implicitly created to store data for the XML columns.
|
The RESTORE SYSTEM utility can be run from any member in a data sharing
group, even one that is normally quiesced when any backups are taken. Any
member in the data sharing group that is active at or beyond the log truncation
point must be restarted, and its logs are truncated to the SYSPITR LRSN point. You
can specify the SYSPITR LRSN point in the CRESTART control statement of the
DSNJU003 (Change Log Inventory) utility. Any data sharing group member that is
normally quiesced at the time the backups are taken and is not active at or beyond
the log truncation point does not need to be restarted.
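For illustration only, the following sketch shows a DSNJU003 job that records the
SYSPITR log truncation point in the BSDS. The BSDS data set names are placeholders,
log-truncation-point stands for the actual RBA or LRSN value, and library
allocations such as STEPLIB are omitted.
//CHGLOG   EXEC PGM=DSNJU003
//* The BSDS data set names and the SYSPITR value below are placeholders
//SYSUT1   DD DSN=DB2V91A.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DB2V91A.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
CRESTART CREATE,SYSPITR=log-truncation-point
/*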
| To be able to use system-level backups that have been dumped to tape, the level of
| DFSMShsm must be V1R8 or higher.
Restrictions:
RESTORE SYSTEM does not restore logs; the utility only applies the logs. If you
specified BACKUP SYSTEM FULL to create copies of both the data and the logs,
you can restore the logs by another method. For more information about BACKUP
SYSTEM FULL, see Chapter 5, “BACKUP SYSTEM,” on page 45.
Output: Output for RESTORE SYSTEM is the recovered copy of the data volume
or volumes.
Related information: For more information about the use of RESTORE SYSTEM in
system level point-in-time recovery, see Part 4 of DB2 Administration Guide.
Authorization required: To run this utility, you must use a privilege set that
includes SYSADM authority.
When you specify RESTORE SYSTEM, you can specify only the following
statements in the same step:
v DIAGNOSE
v OPTIONS PREVIEW
v OPTIONS OFF
v OPTIONS KEY
v OPTIONS EVENT WARNING
In addition, RESTORE SYSTEM must be the last statement in SYSIN.
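As a sketch of these rules, the following SYSIN stream pairs one of the allowed
OPTIONS statements with RESTORE SYSTEM, which is coded last:
//SYSIN DD *
OPTIONS PREVIEW
RESTORE SYSTEM LOGONLY
/*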
Syntax diagram
RESTORE SYSTEM
|
LOGONLY
FROMDUMP TAPEUNITS
DUMPCLASS ( dcl ) RSA ( key-label ) (num-tape-units)
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
LOGONLY
Specifies that the database volumes have already been restored, so the
RESTORE phase is skipped. Use this option when the database volumes have
already been restored outside of DB2. If the subsystem is at a tracker site, you
must specify the LOGONLY option. For more information about using a
tracker site, see Part 4 of DB2 Administration Guide.
| FROMDUMP
| Indicates that you want to restore only from dumps on tape of the database
| copy pool.
| DUMPCLASS (dcl)
| Indicates what DFSMShsm dump class to use for the restore.
| RSA (key-label)
| Specifies that the key-label in the utility control statement will be passed to
| DFSMShsm in order to override the key-label that would normally be used
| to read dump tapes. The key-label can be up to 64 characters and must
| start with an alphabetic or national character.
| The FROMDUMP and DUMPCLASS options that you specify for the RESTORE
| SYSTEM utility override the RESTORE/RECOVER FROM DUMP and
| DUMPCLASS NAME install options that you specify on installation panel
| DSNTIP6.
| TAPEUNITS
| Specifies the limit on the number of tape drives that the utility should
| dynamically allocate during the restore of the database copy pool from dumps
| on tape.
| The default is the option that you specified on the installation panel DSNTIP6.
| If no default is specified, then the RESTORE SYSTEM utility will try to use all
| of the tape drives in your system.
| (num-tape-units)
| Specifies the maximum number of tape drives to allocate. If you specify
| zero, or you do not specify a value, the utility determines the optimal
| number of tape units to use. The maximum value for RESTORE SYSTEM
| TAPEUNITS is 255.
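For illustration, a control statement that combines these keywords might look like
the following sketch; ONSITE and DB2DUMPKEY1 are a hypothetical dump class name and
key label, not defaults.
RESTORE SYSTEM FROMDUMP DUMPCLASS(ONSITE) RSA(DB2DUMPKEY1) TAPEUNITS 4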
By default, RESTORE SYSTEM recovers the data from the database copy pool
during the RESTORE phase and then applies logs to the point in time at which the
existing logs were truncated during the LOGAPPLY phase. The RESTORE utility
never restores logs from the log copy pool.
| 3. Start DB2. When the DB2 restart processing for the conditional restart with the
| SYSPITR option completes, DB2 enters system RECOVER-pending and access
| maintenance mode. During system RECOVER-pending mode, you can run only
| the RESTORE SYSTEM utility.
| 4. Ensure that the ICF catalogs for the DB2 data are not active and are not
| allocated. The ICF catalog for the data must be on a separate volume from the
| ICF catalog for the logs. The command to unallocate the catalog is
| F CATALOG,UNALLOCATE(catalog-name).
| To determine whether the system-level backup will be restored from disk or from
| tape:
| v If FROMDUMP was not specified and the system-level backup resides on disk,
| DB2 uses it for the restore.
| v If you specify YES in the RESTORE/RECOVER FROM DUMP field on
| installation panel DSNTIP6 or you specify the FROMDUMP option in the
| RESTORE utility statement, restore uses only the dumps on tape of the database
| copy pool.
| v If you specify a dump class name on the DUMP CLASS NAME field on
| installation panel DSNTIP6 or you specify the DUMPCLASS option in the
| RESTORE utility statement, DB2 restores the database copy pool from the
| DFSMShsm dump class.
| v If you do not specify a dump class name in the DUMP CLASS NAME field on
| installation panel DSNTIP6 or you do not specify the DUMPCLASS option in
| the RESTORE utility statement, RESTORE SYSTEM issues the DFSMShsm LIST
| COPYPOOL command and uses the first dump class listed in the output.
| The RESTORE SYSTEM utility invokes DFSMSdss to restore the database copy
| pool volumes from a system-level backup on tape.
| To determine whether the system-level backups of the database copy pool reside
| on the disk or tape:
| 1. Run the DFSMShsm LIST COPYPOOL command with the ALLVOLS option, as
| shown in the sketch after this list.
| 2. Run the DSNJU004 utility and review its output. For data sharing, run the
| DSNJU004 utility on each member.
| 3. Review the output from the DFSMShsm LIST COPYPOOL command with the
| ALLVOLS option.
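For example, assuming that the database copy pool follows the DSN$locn-name$DB
naming convention and that the location name is DSNDB0G (a placeholder), the
DFSMShsm command from step 1 might be issued as:
LIST COPYPOOL(DSN$DSNDB0G$DB) ALLVOLS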
| If the system-level backup chosen as the recovery base for the database copy pool
| no longer resides on DASD and the FROMDUMP option has not been specified,
| then the RESTORE SYSTEM utility will fail. You can then specify the RESTORE
| SYSTEM FROMDUMP option, or specify it on install panel DSNTIP6, to direct the
| utility to use the system-level backup that was dumped to tape.
You can restart RESTORE SYSTEM at the beginning of a phase or at the current
system checkpoint. A current system checkpoint occurs during the LOGAPPLY
phase after log records are processed. By default, RESTORE SYSTEM restarts at the
current system checkpoint.
When you restart RESTORE SYSTEM for a data sharing group, the member on
which the restart is issued must be the same member on which the original
RESTORE SYSTEM was issued.
For guidance in restarting online utilities, see “Restarting an online utility” on page
39.
Example 2: Recovering a backup system after the database volumes have already
been restored. The LOGONLY keyword in the following control statement indicates
that RESTORE SYSTEM is to apply any outstanding log changes to the database.
The utility is not to restore the volume copies. In this example, the database
volumes have already been restored outside of DB2. Note that RESTORE SYSTEM
applies log changes; it never restores the log copy pool.
//STEP1 EXEC DSNUPROC,TIME=1440,
// UTPROC=’’,
// SYSTEM=’DSN’
//SYSIN DD *
RESTORE SYSTEM LOGONLY
/*
| Example 3: Recovering a dump on tape of the database copy pool. The following
| control statement specifies that the RESTORE SYSTEM utility is to only consider
| dumps on tape of the database copy pool for restore. During the restore, the utility
| will dynamically allocate a maximum of 4 tape units.
| //SYSOPRB JOB (ACCOUNT),’NAME’,CLASS=K
| //UTIL EXEC DSNUPROC,SYSTEM=V91A,UID=’TEMB’,UTPROC=’’
| //*
| //*
| //DSNUPROC.SYSUT1 DD DSN=SYSOPR.SYSUT1,
| // DISP=(MOD,DELETE,CATLG),
| // SPACE=(16384,(20,20),,,ROUND),
| // UNIT=SYSDA
| //DSNUPROC.SYSIN DD *
| RESTORE SYSTEM FROMDUMP TAPEUNITS 4
| //
The two formats for the RUNSTATS utility are RUNSTATS TABLESPACE and
RUNSTATS INDEX. RUNSTATS TABLESPACE gathers statistics on a table space
and, optionally, on tables, indexes or columns; RUNSTATS INDEX gathers statistics
| only on indexes. RUNSTATS does not collect statistics for clone tables or index
| spaces.
When you run RUNSTATS TABLESPACE, you can use the COLGROUP option to
collect frequency and cardinality statistics on any column group. You can also
collect frequency and cardinality statistics on any single column. When you run
RUNSTATS INDEX, you can collect frequency statistics on the leading column of
an index and multi-column frequency and cardinality statistics on the leading
concatenated columns of an index.
| When you run RUNSTATS TABLESPACE, you can use the HISTOGRAM option,
| with the COLGROUP option, to indicate that histogram statistics are to be
| gathered for the specified group of columns. RUNSTATS TABLESPACE will ignore
| HISTOGRAM when processing XML table spaces and indexes. When you run
| RUNSTATS INDEX, histogram statistics can only be collected on the prefix
| columns with the same order. Key columns with a mixed order are not allowed for
| histogram statistics. RUNSTATS INDEX will ignore HISTOGRAM when processing
| XML NODEID or VALUES indexes.
Output: RUNSTATS updates the DB2 catalog with table space or index space
statistics, prints a report, or both. See “Reviewing RUNSTATS output” on page 615
for a list of all the catalog tables and columns that are updated by RUNSTATS.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
An ID with installation SYSOPR authority can also execute the RUNSTATS utility,
but only on a table space in the DSNDB06 database.
To use RUNSTATS with the REPORT YES option, you must have the SELECT
privilege on the reported tables. RUNSTATS does not report values from tables
that the user is not authorized to see.
To gather statistics on a LOB table space, you must have SYSADM or DBADM
authority for the LOB table space.
RUNSTATS TABLESPACE
LIST listdef-name
table-space-name
database-name. FORCEROLLUP NO
PART integer
FORCEROLLUP YES
(1) SAMPLE 25
TABLE (table-name) column-spec
SAMPLE integer colgroup-spec
( ALL )
INDEX correlation-stats-spec
,
( index-name correlation-stats-spec )
PART integer
(2)
SHRLEVEL REFERENCE REPORT NO UPDATE ALL HISTORY NONE
SHRLEVEL CHANGE REPORT YES UPDATE ACCESSPATH HISTORY ALL
SPACE ACCESSPATH
NONE SPACE
SORTDEVT device-type SORTNUM integer
Notes:
1 The TABLE keyword is not valid for a LOB table space.
2 You can change the default HISTORY value by modifying the STATISTICS HISTORY subsystem
parameter. By default, this value is NONE.
column-spec:
COLUMN ( ALL )
,
COLUMN ( column-name )
colgroup-spec:
,
COLGROUP ( column-name ) colgroup-stats-spec
colgroup-stats-spec:
MOST
FREQVAL COUNT integer
BOTH
LEAST
NUMQUANTILES 100
HISTOGRAM
NUMQUANTILES integer
correlation-stats-spec:
|
FREQVAL NUMCOLS 1 COUNT 10 MOST
KEYCARD MOST
FREQVAL NUMCOLS integer COUNT integer
BOTH
LEAST
NUMCOLS 1 NUMQUANTILES 100
HISTOGRAM
NUMQUANTILES 100
NUMCOLS integer
NUMQUANTILES integer
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database to which it belongs) on
which table space and table statistics are to be gathered. This keyword must
not identify a table space in DSNDB01 or DSNDB07.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. You can
specify one LIST keyword for each RUNSTATS control statement.
When you specify this keyword with RUNSTATS TABLESPACE, the
list must contain only table spaces. Do not specify LIST with keywords
from the TABLE...(table-name) specification. Instead, specify LIST with
TABLE (ALL). Likewise, do not specify LIST with keywords from the
INDEX...(index-name) specification. You cannot specify index names
with a list. Use INDEX(ALL) instead.
If you specify LIST, you cannot specify the PART option. Instead, use
the PARTLEVEL option on the LISTDEF statement. The TABLESPACE
keyword is required in order to validate the contents of the list.
RUNSTATS TABLESPACE is invoked once for each item in the list.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 185.
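As a sketch of using LIST, assume a hypothetical LISTDEF named PAYRLIST that
selects table spaces at the partition level; the database mask reuses the sample
database:
LISTDEF PAYRLIST INCLUDE TABLESPACE DSN8D91A.* PARTLEVEL
RUNSTATS TABLESPACE LIST PAYRLIST
TABLE(ALL) INDEX(ALL)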
database-name
Identifies the name of the database to which the table space belongs.
The default is DSNDB04.
table-space-name
Identifies the name of the table space on which statistics are to be
gathered.
If the table space that is specified by the TABLESPACE keyword is a LOB table
space, you can specify only the following additional keywords: SHRLEVEL
REFERENCE or CHANGE, REPORT YES or NO, and UPDATE ALL or NONE.
PART integer
Identifies a table space partition on which statistics are to be collected.
integer is the number of the partition and must be in the range from 1 to the
number of partitions that are defined for the table space. The maximum is
4096.
You cannot specify PART with LIST.
TABLE
Specifies the table on which column statistics are to be gathered. All tables
must belong to the table space that is specified in the TABLESPACE option.
You cannot specify the TABLE option for a LOB table space.
(ALL) Specifies that column statistics are to be gathered on all columns of all
tables in the table space. The default is ALL.
(table-name)
Specifies the tables on which column statistics are to be gathered. If
you omit the qualifier, RUNSTATS uses the user identifier for the
utility job as the qualifier. Enclose the table name in quotation marks if
the name contains a blank.
If you specify more than one table, you must repeat the TABLE option.
Multiple TABLE options must be specified entirely before or after any
INDEX keyword that may also be specified. For example, the INDEX
keyword may not be specified between any two TABLE keywords.
SAMPLE integer
Indicates the percentage of rows that RUNSTATS is to sample when collecting
statistics on non-indexed columns. You can specify any value from 1 through
100. The default is 25.
You cannot specify SAMPLE for LOB table spaces.
COLUMN
Specifies columns on which column statistics are to be gathered.
You can specify this option only if you specify a particular table on which
statistics are to be gathered. (Use the TABLE (table-name) option to specify a
particular table.) If you specify particular tables and do not specify the
COLUMN option, RUNSTATS uses the default, COLUMN(ALL). If you do not
specify a particular table when using the TABLE option, you cannot specify the
COLUMN option; however, in this case, COLUMN(ALL) is assumed.
(ALL)
Specifies that statistics are to be gathered on all columns in the table.
The COLUMN (ALL) option is not allowed for LOB table spaces.
(column-name, ...)
Specifies the columns on which statistics are to be gathered. You can
specify a list of column names. If you specify more than one column,
separate each name with a comma.
The more columns that you specify, the longer the job takes to complete.
COLGROUP (column-name, ...)
Indicates that the specified set of columns is to be treated as a group. This
option enables RUNSTATS to collect a cardinality value on the specified
| column group. RUNSTATS TABLESPACE will ignore COLGROUP when
| processing XML table spaces and indexes.
When you specify the COLGROUP keyword, RUNSTATS collects correlation
statistics for the specified column group. If you want RUNSTATS to also collect
distribution statistics, specify the FREQVAL option with COLGROUP.
(column-name, ...) specifies the names of the columns that are part of the
column group.
To specify more than one column group, repeat the COLGROUP option.
FREQVAL
Indicates, when specified with the COLGROUP option, that frequency statistics
are also to be gathered for the specified group of columns. (COLGROUP
indicates that cardinality statistics are to be gathered.) One group of statistics is
gathered for each column. You must specify COUNT integer with COLGROUP
| FREQVAL. RUNSTATS TABLESPACE will ignore FREQVAL
| MOST/LEAST/BOTH when processing XML table spaces and indexes.
COUNT integer
Indicates the number of frequently occurring values to be collected from
the specified column group. For example, COUNT 20 means that DB2
collects 20 frequently occurring values from the column group. You must
specify a value for integer; no default value is assumed.
Be careful when specifying a high value for COUNT. Specifying a value of
1000 or more can increase the prepare time for some SQL statements.
MOST
Indicates that the utility is to collect the most frequently occurring values
for the specified set of columns when COLGROUP is specified. The default
is MOST.
BOTH
Indicates that the utility is to collect the most and the least frequently
occurring values for the specified set of columns when COLGROUP is
specified.
LEAST
Indicates that the utility is to collect the least frequently occurring values
for the specified set of columns when COLGROUP is specified.
| HISTOGRAM
| Indicates, when specified with the COLGROUP option, that histogram statistics
| are to be gathered for the specified group of columns. RUNSTATS
| TABLESPACE will ignore HISTOGRAM when processing XML table spaces
| and indexes.
| NUMQUANTILES integer
| Indicates how many quantiles the utility is to collect. The integer value
| must be equal to or greater than one. The number of quantiles that you
| specify should never exceed the total number of distinct values in the
| column or the column group. The maximum number of quantiles allowed
| is 100.
| When the NUMQUANTILES keyword is omitted, NUMQUANTILES takes
| a default value of 100. Based on the number of records in the table, the
| number of quantiles is readjusted down to an optimal number.
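The following sketch requests histogram statistics on a column group; the
WORKDEPT and JOB columns of the sample EMP table are assumed here, and
NUMQUANTILES 50 is an arbitrary illustration value.
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
TABLE(DSN8910.EMP)
COLGROUP(WORKDEPT,JOB) HISTOGRAM NUMQUANTILES 50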
INDEX
Specifies indexes on which statistics are to be gathered. RUNSTATS gathers
column statistics for the first column of the index, and possibly additional
index columns depending on the options that you specify. All the indexes must
be associated with the same table space, which must be the table space that is
specified in the TABLESPACE option.
INDEX can be used on auxiliary tables to gather statistics on an index.
(ALL) Specifies that column statistics are to be gathered for all indexes that
are defined on tables that are contained in the table space. The default
is ALL.
(index-name, ...)
Specifies the indexes for which statistics are to be gathered. You can
specify a list of index names. If you specify more than one index,
separate each name with a comma. Enclose the index name in
quotation marks if the name contains a blank.
PART integer
Identifies an index partition on which statistics are to be collected.
integer is the number of the partition.
KEYCARD
Collects all of the distinct values in all of the 1 to n key column combinations
for the specified indexes. n is the number of columns in the index. For
example, suppose that you have an index defined on three columns: A, B, and
C. If you specify KEYCARD, RUNSTATS collects cardinality statistics for
column A, column set A and B, and column set A, B, and C.
FREQVAL
Controls, when specified with the INDEX option, the collection of
frequent-value statistics. If you specify FREQVAL with INDEX, this keyword
must be followed by the NUMCOLS and COUNT keywords.
NUMCOLS integer
Indicates the number of columns in the index for which RUNSTATS is to
collect frequently occurring values. integer can be a number between 1 and
the number of indexed columns. If you specify a number greater than the
number of indexed columns, RUNSTATS uses the number of columns in
the index.
For example, suppose that you have an index defined on three columns: A,
B, and C. If you specify NUMCOLS 1, DB2 collects frequently occurring
values for column A. If you specify NUMCOLS 2, DB2 collects frequently
occurring values for the column set A and B. If you specify NUMCOLS 3,
DB2 collects frequently occurring values for the column set A, B, and C.
The default is 1, which means that RUNSTATS is to collect frequently
occurring values on the first key column of the index.
COUNT integer
Indicates the number of frequently occurring values that are to be collected
from the specified key columns. For example, specifying 15 means that
RUNSTATS is to collect 15 frequently occurring values from the specified
key columns. The default is 10.
| HISTOGRAM
| Indicates, when specified with the INDEX option, that histogram statistics are
| to be gathered for the specified key columns. Histogram statistics can only be
| collected on the prefix columns with the same order. Key columns for
| histogram statistics with a mixed order are not allowed.
| When RUNSTATS collects histogram statistics for partitioned table spaces, it will
| aggregate them into SYSCOLDIST.
| NUMQUANTILES integer
| Indicates how many quantiles the utility is to collect. The integer value
| must be equal to or greater than one. The number of quantiles that you
| specify should never exceed the total number of distinct values in the key
| columns specified. The maximum number of quantiles allowed is 100.
| When the NUMQUANTILES keyword is omitted, NUMQUANTILES takes
| a default value of 100. Based on the number of keys in the index, the
| number of quantiles is readjusted down to an optimal number.
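For illustration, the following sketch collects histogram statistics on the first
two key columns of the SYSADM.IXNPI index that is used in the later examples; the
NUMCOLS and NUMQUANTILES values are arbitrary.
RUNSTATS INDEX (SYSADM.IXNPI)
HISTOGRAM NUMCOLS 2 NUMQUANTILES 50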
SHRLEVEL
Indicates whether other programs that access the table space while RUNSTATS
is running must use read-only access or can change the table space.
REFERENCE
Allows only read-only access by other programs. The default is
REFERENCE.
CHANGE
Allows other programs to change the table space or index. With
SHRLEVEL CHANGE, RUNSTATS might collect statistics on
uncommitted data.
REPORT
Specifies whether RUNSTATS is to generate a set of messages that report the
collected statistics.
NO
Indicates that RUNSTATS is not to generate the set of messages. The
default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT. The
messages that RUNSTATS generates are dependent on the combination of
keywords in the utility control statement. However, these messages are not
dependent on the value of the UPDATE option. REPORT YES always
generates a report of space and access path statistics.
UPDATE
Indicates which collected statistics are to be inserted into the catalog tables.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that DB2 is to update the catalog with only those statistics
that are used for access path selection.
SPACE
Indicates that DB2 is to update the catalog with only space-related
statistics.
NONE
Indicates that no catalog tables are to be updated with the collected
statistics.
Executing RUNSTATS always invalidates the dynamic statement cache;
however, when you specify UPDATE NONE REPORT NO, RUNSTATS
invalidates statements in the dynamic statement cache without
collecting statistics, updating catalog tables, or generating reports.
HISTORY
Indicates which statistics are to be recorded in the catalog history tables. The
value that you specify for HISTORY does not depend on the value that you
specify for UPDATE.
The default is the value of the STATISTICS HISTORY subsystem parameter on
the DSNTIPO installation panel. By default, this parameter value is NONE.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that DB2 is to update the catalog history tables with only
those statistics that are used for access path selection.
SPACE
Indicates that DB2 is to update the catalog history tables with only
space-related statistics.
NONE
Indicates that no catalog history tables are to be updated with the
collected statistics.
SORTDEVT
| Specifies the device type that DFSORT uses to dynamically allocate the sort
| work data sets that are required.
| device-type
| Specifies any device type that is acceptable for the DYNALLOC parameter
| of the SORT or OPTIONS option of DFSORT. For information about valid
| device types, see DFSORT Application Programming Guide.
| If you omit SORTDEVT, but a sort is required and you have not provided the DD
| statements that the SORT program requires for the temporary data sets,
| SORTDEVT defaults to SYSALLDA and the temporary data sets are dynamically
| allocated.
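As a sketch, the following statement needs a sort because it collects distribution
statistics for a column group, so it requests dynamic allocation of the sort work
data sets with SORTDEVT (and, optionally, SORTNUM); the column name is assumed
from the sample EMP table.
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
TABLE(DSN8910.EMP)
COLGROUP(WORKDEPT) FREQVAL COUNT 10
SORTDEVT SYSALLDA SORTNUM 4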
RUNSTATS INDEX
LIST listdef-name
,
( index-name correlation-stats-spec )
PART integer
( ALL ) TABLESPACE tablespace-name correlation-stats-spec
database-name.
(1)
HISTORY NONE FORCEROLLUP NO
SORTNUM integer HISTORY ALL FORCEROLLUP YES
ACCESSPATH
SPACE
Notes:
1 You can change the default HISTORY value by modifying the STATISTICS HISTORY subsystem
parameter. By default, this value is NONE.
correlation-stats-spec:
|
FREQVAL NUMCOLS 1 COUNT 10 MOST
KEYCARD MOST
FREQVAL NUMCOLS integer COUNT integer
BOTH
LEAST
NUMCOLS 1 NUMQUANTILES 100
HISTOGRAM
NUMQUANTILES 100
NUMCOLS integer
NUMQUANTILES integer
SHRLEVEL
Indicates whether other programs that access the table space while RUNSTATS
is running must use read-only access or can change the table space.
REFERENCE
Allows only read-only access by other programs. The default is
REFERENCE.
CHANGE
Allows other programs to change the table space or index. With
SHRLEVEL CHANGE, RUNSTATS might collect statistics on
uncommitted data.
REPORT
Specifies whether RUNSTATS is to generate a set of messages that report the
collected statistics.
NO
Indicates that RUNSTATS is not to generate the set of messages. The
default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT. The
messages that RUNSTATS generates are dependent on the combination of
keywords in the utility control statement. However, these messages are not
dependent on the value of the UPDATE option. REPORT YES always
generates a report of space and access path statistics.
UPDATE
Indicates which collected statistics are to be inserted into the catalog tables.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that DB2 is to update the catalog with only those statistics
that are used for access path selection.
SPACE
Indicates that DB2 is to update the catalog with only space-related
statistics.
NONE
Indicates that no catalog tables are to be updated with the collected
statistics.
Executing RUNSTATS always invalidates the dynamic statement cache;
however, when you specify UPDATE NONE REPORT NO, RUNSTATS
invalidates statements in the dynamic statement cache without
collecting statistics, updating catalog tables, or generating reports.
SORTDEVT
Specifies the device type that DFSORT uses to dynamically allocate the sort
work data sets that are required.
| device-type
| Specifies any device type that is acceptable for the DYNALLOC parameter
| of the SORT or OPTIONS option of DFSORT. For information about valid
| device types, see DFSORT Application Programming Guide.
If you omit SORTDEVT, but a sort is required and you have not provided the DD
statements that the SORT program requires for the temporary data sets,
SORTDEVT defaults to SYSALLDA and the temporary data sets are dynamically
allocated.
Notes:
1. Required when collecting distribution statistics for column groups.
2. Required when collecting statistics on at least one data-partitioned secondary index.
3. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
data set. Otherwise, DFSORT dynamically allocates the temporary data set.
| 4. It is recommended that you use dynamic allocation by specifying SORTDEVT in the
| utility statement because dynamic allocation reduces the maintenance required of the
| utility job JCL.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or index
Object that is to be scanned.
Calculating the size of the sort work data sets: Depending on the type of statistics
that RUNSTATS collects, the utility uses the ST01WKnn data sets, the STATWK01
data set, both types of data sets, or neither.
The ST01WKnn data sets are used when collecting statistics on at least one
data-partitioned secondary index. To calculate the approximate size (in bytes) of
the ST01WKnn data set, use the following formula:
The STATWK01 data set is used when collecting distribution statistics. To calculate
the approximate size (in bytes) of the STATWK01 data set, use the following
formula:
#colgroupsn
Number of column groups that are specified for the nth table
#rows Number of rows for the nth table
| DB2 utilities use DFSORT to perform sorts. Sort work data sets cannot span
| volumes. Smaller volumes require more sort work data sets to sort the same
| amount of data; therefore, large volume sizes can reduce the number of needed
| sort work data sets. It is recommended that at least 1.2 times the amount of data to
| be sorted be provided in sort work data sets on disk. For more information about
| DFSORT, see DFSORT Application Programming Guide.
You should recollect frequency statistics when either of the following situations is
true:
v The distribution of the data changes
v The values over which the data is distributed change
One common situation in which old statistics can affect query performance is when
a table has columns that contain data or ranges that are constantly changing (for
example, dates and timestamps). These types of columns can result in old values in
the HIGH2KEY and LOW2KEY columns in the catalog. You should periodically
collect column statistics on these changing columns so that the values in
HIGH2KEY and LOW2KEY accurately reflect the true range of data, and range
predicates can obtain accurate filter factors.
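For example, a periodic job that refreshes HIGH2KEY and LOW2KEY for a constantly
changing date column might use a statement like the following sketch; the HIREDATE
column of the sample EMP table is assumed here.
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
TABLE(DSN8910.EMP) COLUMN(HIREDATE)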
If you need to control the size or placement of the data sets, use the JCL
statements to allocate STATWK01. To estimate the size of this sort work data set,
use the formula for STATWK01 in “Data sets that RUNSTATS uses” on page 608.
To let the work data set be dynamically allocated, remove the STATWK01 DD
statements from the job and allocate the UTPRINT statement to SYSOUT. If you let
the SORT program dynamically allocate this data set, you must specify the
SORTDEVT option in the RUNSTATS control statement.
Figure 103. Example RUNSTATS output from a job on a catalog table space
DB2 uses the collected statistics on the catalog to determine the access path for
user queries of the catalog.
Improving performance
You can improve the performance of RUNSTATS on table spaces that are defined
with the LARGE option by specifying the SAMPLE option, which reduces the
number of rows that are scanned for statistics.
Run RUNSTATS on only the columns or column groups that might be used as
search conditions in a WHERE clause of queries. Use the COLGROUP option to
identify the column groups. Collecting additional statistics on groups of columns
that are used as predicates improves the accuracy of the filter factor estimate and
leads to improved query performance. Collecting statistics on all columns of a table
is costly and might not be necessary.
In some cases, you can avoid running RUNSTATS by specifying the STATISTICS
keyword in LOAD, REBUILD INDEX, or REORG utility statements. When you
specify STATISTICS in one of these utility statements, DB2 updates the catalog
with table space or index space statistics for the objects on which the utility is run.
However, you cannot collect column group statistics with the STATISTICS
keyword. You can collect column group statistics only by running the RUNSTATS
utility. If you restart a LOAD or REBUILD INDEX job that uses the STATISTICS
keyword, DB2 does not collect inline statistics. For these cases, you need to run the
RUNSTATS utility after the restarted utility job completes. For information about
restarting a REORG job that uses the STATISTICS keyword, see “Restarting REORG
STATISTICS” on page 513.
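For illustration, inline statistics could be collected during a REORG with a
statement like the following sketch, rather than by a separate RUNSTATS job; the
keywords are kept minimal and the sample table space is assumed.
REORG TABLESPACE DSN8D91A.DSN8S91E
STATISTICS TABLE(ALL) INDEX(ALL)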
You can let the sort work data sets be dynamically allocated through the SORT
program, or you can allocate the data sets through DD statements in the job JCL.
The DD name is ST01WKnn.
If you need to control the size or placement of the data sets, use the JCL
statements to allocate ST01WKnn. To estimate the size of this sort work data set,
use the formula for ST01WKnn in “Data sets that RUNSTATS uses” on page 608.
To let the sort work data sets be dynamically allocated, remove the ST01WKnn DD
statements from the job and allocate the UTPRINT statement to SYSOUT. If you let
the SORT program dynamically allocate these data sets, you must specify the
SORTDEVT option in the RUNSTATS control statement to specify the device type
for the temporary data sets. Optionally, you can also use the SORTNUM option to
specify the number of temporary data sets to use.
You can restart a RUNSTATS utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
Table 105 on page 614 shows which claim classes RUNSTATS claims and drains
and any restrictive state that the utility sets on the target object.
Table 106 shows which utilities can run concurrently with RUNSTATS on the same
target object. The target object can be a table space, an index space, or a partition
of a table space or index space. If compatibility depends on particular options of a
utility, that information is also shown in the table.
Table 106. Compatibility of RUNSTATS with other utilities
RUNSTATS RUNSTATS RUNSTATS RUNSTATS
TABLESPACE TABLESPACE INDEX INDEX
SHRLEVEL SHRLEVEL SHRLEVEL SHRLEVEL
Utility REFERENCE CHANGE REFERENCE CHANGE
CHECK DATA DELETE NO Yes Yes Yes Yes
CHECK DATA DELETE YES No No No No
CHECK INDEX Yes Yes Yes Yes
CHECK LOB Yes Yes Yes Yes
COPY INDEXSPACE Yes Yes Yes Yes
COPY TABLESPACE Yes Yes Yes Yes
DIAGNOSE Yes Yes Yes Yes
LOAD No No No No
LOAD SHRLEVEL CHANGE No Yes No Yes
MERGECOPY Yes Yes Yes Yes
MODIFY RECOVERY Yes Yes Yes Yes
QUIESCE Yes Yes Yes Yes
REBUILD INDEX Yes Yes No No
RECOVER ERROR RANGE No No Yes Yes
RECOVER INDEX Yes Yes No No
| RECOVER INDEX TOCOPY or No No No No
| TOLOGPOINT
RECOVER TABLESPACE (no No No Yes Yes
options)
RUNSTATS sets the following columns to -1 for table spaces that are defined as
LARGE:
v CARD in SYSTABLES
v CARD in SYSINDEXPART
v FAROFFPOS in SYSINDEXPART
v NEAROFFPOS in SYSINDEXPART
v FIRSTKEYCARD in SYSINDEXES
v FULLKEYCARD in SYSINDEXES
Index statistics and table space statistics: Table 107 on page 616 shows the catalog
tables that RUNSTATS updates depending on the value of the UPDATE option, the
value of the HISTORY option, and the source of the statistics (table space,
partition, index or LOB table space).
Notes:
1. Not applicable if the specified table space is a LOB table space.
2. Only updated for partitioned objects. When you run RUNSTATS against single partitions of an object, RUNSTATS
uses the partition-level statistics to update the aggregate statistics for the entire object. These partition-level
statistics are contained in the following catalog tables:
v SYSCOLSTATS
v SYSCOLDISTSTATS
v SYSTABSTATS
v SYSINDEXSTATS
3. Applicable only when the specified table space is a LOB table space.
4. When HISTORY NONE is specified, none of the catalog history tables are updated.
5. Only the SPACEF and STATSTIME columns are updated.
| 6. Applicable only when the target object is an index on expression.
These tables do not describe information about LOB columns because DB2 does
not use those statistics for access path selection. For information about what values
in these columns indicate for LOBs, see Appendix F of DB2 SQL Reference.
A value in the “Use” column indicates whether information about the DB2 catalog
column is General-use Programming Interface and Associated Guidance
Information (G) or Product-sensitive Programming Interface and Associated
Guidance Information (S), as defined in “Programming interface information” on
page 940.
Table 108 lists the columns in SYSTABLES that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options unless the statistics in the SYSTABSTATS table have been
manually updated to -1. In this case, the columns in SYSTABLES are not updated
after RUNSTATS PART UPDATE ALL is run.
Table 108. SYSTABLES catalog columns that DB2 uses to select access paths
SYSTABLES Column
name Column description Use
CARDF Total number of rows in the table. S
NPAGES Total number of pages on which rows of this table are S
included.
NPAGESF Total number of pages that are used by the table. S
PCTROWCOMP Percentage of rows compressed within the total S
number of active rows in the table.
Table 109 lists the columns in SYSTABSTATS that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 109. SYSTABSTATS catalog columns that DB2 uses to select access paths
SYSTABSTATS
Column name Column description Use
CARDF Total number of rows in the partition. S
NPAGES Total number of pages on which rows of this partition S
are included.
Table 110 lists the columns in SYSCOLUMNS that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 110. SYSCOLUMNS catalog columns that DB2 uses to select access paths
SYSCOLUMNS
Column name Column description Use
COLCARDF Estimated number of distinct values for the column. S
For an indicator column, this value is the number of
LOBs that are not null and whose lengths are greater
than zero. The value is -1 if statistics have not been
gathered. The value is -2 for columns of an auxiliary
table, XML column indicator, NODEID column, and
XML table.
HIGH2KEY Second highest value of the column. Blank if statistics S
have not been gathered or if the column is an indicator
column, NODEID column, or a column of an auxiliary
table or XML table. If the column has a non-character
data type, the data might not be printable. This column
can be updated.
LOW2KEY Second lowest value of the column. Blank if statistics S
have not been gathered or if the column is an indicator
column, NODEID column, or a column of an auxiliary
table or XML table. If the column has a non-character
data type, the data might not be printable. This column
can be updated.
Table 111 lists the columns in SYSCOLDIST that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 111. SYSCOLDIST catalog columns that DB2 uses to select access paths
SYSCOLDIST
Column name Column description Use
CARDF The number of distinct values for the column group. S
This number is valid only for cardinality key column
statistics. (A C in the TYPE column indicates that
cardinality statistics were gathered.)
COLGROUPCOLNO Identifies the set of columns that are associated with S
the key column statistics.
COLVALUE Actual index column value that is being counted for S
distribution index statistics.
FREQUENCYF Percentage of rows, multiplied by 100, that contain the S
values that are specified in COLVALUE.
HIGHVALUE The high bound column value.
LOWVALUE The low bound column value.
NUMCOLUMNS The number of columns that are associated with the G
key column statistics.
Table 112 lists the columns in SYSTABLESPACE that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 112. SYSTABLESPACE catalog columns that DB2 uses to select access paths
SYSTABLESPACE
Column name Column description Use
NACTIVE or Number of active pages in the table space; shows the S
NACTIVEF number of pages that are accessed if a record cursor is
used to scan the entire file. The value is -1 if statistics
have not been gathered.
Table 113 on page 622 lists the columns in SYSINDEXES that DB2 uses to select
access paths. These columns are updated by RUNSTATS with the UPDATE
ACCESSPATH or UPDATE ALL options.
Table 113. SYSINDEXES catalog columns that DB2 uses to select access paths
SYSINDEXES
Column name Column description Use
CLUSTERRATIOF A number between 0 and 1 that, when multiplied by S
100, gives the percentage of rows that are in clustering
order. For example, a value of 1 indicates that all rows
are in clustering order. A value of .87825 indicates that
87.825% of the rows are in clustering order.
CLUSTERING Indicates whether CLUSTER was specified when the G
index was created.
DATAREPEAT- Number of data pages touched following index key S
FACTORF order.
FIRSTKEYCARDF Number of distinct values of the first key column. S
FULLKEYCARDF Number of distinct values of the full key. S
NLEAF Number of leaf pages in the index. S
NLEVELS Number of levels in the index tree. S
Table 114 lists the columns in SYSKEYTARGETS that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 114. SYSKEYTARGETS catalog columns that DB2 uses to select access paths
SYSKEYTARGETS
Column name Column description Use
IXNAME Name of the index G
IXSCHEMA Qualifier of the index G
KEYSEQ Numeric position of the key-target in the index G
COLNO Numeric position of the column in the table if the G
expression is a column name; 0 otherwise. For an XML
index, the value is always 0.
ORDERING Order of the key G
A Ascending
TYPESCHEMA Schema of the data type G
TYPENAME Name of the data type G
DATATYPEID For a built-in data type, the internal ID of the built-in G
type. For a distinct data type, the internal ID of the
distinct type.
SOURCETYPEID For a built-in data type, zero. For a distinct data type, G
the internal ID of the built-in type upon which the
distinct type is sourced.
LENGTH Length attribute of the key-target or, in the case of a G
decimal key-target, its precision. The number does not
include the internal prefixes that are used to record the
actual length and null state, where applicable.
LENGTH2 Maximum length of the data retrieved from the G
column.
SCALE Scale of decimal data. Zero if not a decimal key G
NULLS Whether the key can contain null values: N, Y G
CCSID CCSID of the key. 0 for non-character type G
SUBTYPE Applies to character keys only. Indicates the subtype of G
the data.
CREATEDTS Timestamp when the key-target was created G
RELCREATED Release when the key-target was created G
IBMREQD A value of Y indicates that the row came from the basic G
machine-readable material (MRM) tape. For all other
values, see “Release dependency indicators”.
DERIVED_FROM For an index on a scalar expression, this is the text of G
the scalar expression used to generate the key-target
value. For an XML index, this is the XML pattern used
to generate the key-target value. Empty string
otherwise
STATSTIME Timestamp of RUNSTATS. The default value is G
’0001-01-01.00.00.00.000000’. This is an updatable
column.
CARDF Number of distinct values for the key-target. The value G
is -2 if the index is a node ID index or an XML index.
HIGH2KEY Second highest key value. This is an updatable column. G
LOW2KEY Second lowest key value. This is an updatable column. G
STATS_FORMAT The type of statistics gathered: G
Table 115 lists the columns in SYSKEYTGTDIST that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 115. SYSKEYTGTDIST catalog columns that DB2 uses to select access paths
SYSKEYTGTDIST
Column name Column description Use
STATSTIME If RUNSTATS updated the statistics, this is the date G
and time when the last invocation of RUNSTATS
updated the statistics.
IBMREQD A value of Y indicates that the row came from the G
basic machine-readable material (MRM) tape.
IXSCHEMA Qualifier of the index. G
IXNAME Name of the index. G
KEYSEQ Numeric position of the key-targeted in the index. G
KEYVALUE Contains the data of a frequently occurring value. If G
the value has a non-character data type, the data may
not be printable.
CARDF For TYPE='C', this is the number of distinct values for G
the key group. For TYPE='H', this is the number of
distinct values for the key group in a quantile
indicated by QUANTILENO.
FREQUENCYF For TYPE='F' or 'N', this is the percentage of entries in G
the index with the value specified in the KEYVALUE
when the number is multiplied by 100. For TYPE='H',
this is the percentage of entries in the index that fall
in the quantile indicated by QUANTILENO.
LOWVALUE For TYPE='H', this is the low bound for the quantile G
indicated by QUANTILENO.
HIGHVALUE For TYPE='H', this is the high bound for the quantile G
indicated by QUANTILENO.
A value in the “Use” column indicates whether information about the DB2 catalog
column is General-use Programming Interface and Associated Guidance
Information (G) or Product-sensitive Programming Interface and Associated
Guidance Information (S), as defined in “Programming interface information” on
page 940.
Table 116 lists the columns in SYSTABLESPACE that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options
Table 116. SYSTABLESPACE catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options.
SYSTABLESPACE
Column name Column description Use
AVGROWLEN Average length of rows for the tables in the table space. G
Table 117 lists the columns in SYSTABLES that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options.
Table 117. SYSTABLES catalog columns that are updated by RUNSTATS with the UPDATE
SPACE or UPDATE ALL options
SYSTABLES Column
name Column description Use
AVGROWLEN Average length of rows for the tables in the table space. G
Table 118 on page 625 lists the columns in SYSTABLES_HIST that are updated by
RUNSTATS with the UPDATE SPACE or UPDATE ALL options.
Table 118. SYSTABLES_HIST catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSTABLES_HIST
Column name Column description Use
AVGROWLEN Average length of rows for the tables in the table space. G
Table 119 lists the columns in SYSTABLEPART that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options.
Table 119. SYSTABLEPART catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSTABLEPART
column name Column description Use
AVGROWLEN Average length of rows for the tables in the table G
space.
CARDF Total number of rows in the table space or partition, G
or number of LOBs in the table space if the table
space is a LOB table space. The value is -1 if statistics
have not been gathered.
DSNUM Number of data sets. G
EXTENTS Number of data set extents. G
NEARINDREF Number of rows that are relocated near their original S
page.
PAGESAVE Percentage of pages that are saved in the table space S
or partition as a result of using data compression. For
example, a value of 25 indicates a savings of 25%, so
that the required pages are only 75% of what would
be required without data compression. The value is 0
if no savings from using data compression are likely,
or if statistics have not been gathered. The value can
be negative if using data compression causes an
increase in the number of pages in the data set.
Table 120 on page 627 lists the columns in SYSTABLEPART_HIST that are updated
by RUNSTATS with the UPDATE SPACE or UPDATE ALL options.
Table 120. SYSTABLEPART_HIST catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSTABLEPART_HIST
Column name Column description Use
AVGROWLEN Average length of rows for the tables in the table G
space.
Table 121 lists the columns in SYSINDEXES that are updated by RUNSTATS with
the UPDATE SPACE or UPDATE ALL options.
Table 121. SYSINDEXES catalog columns that are updated by RUNSTATS with the UPDATE
SPACE or UPDATE ALL options
SYSINDEXES
column name Column description Use
AVGKEYLEN Average length of keys within the index. The value is G
−1 if statistics have not been gathered.
Table 122 lists the columns in SYSINDEXES_HIST that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options.
Table 122. SYSINDEXES_HIST catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSINDEXES_HIST
column name Column description Use
AVGKEYLEN Average length of keys within the index. The G
value is −1 if statistics have not been gathered.
Table 123 lists the columns in SYSINDEXPART that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options.
Table 123. SYSINDEXPART catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSINDEXPART
column name Column description Use
AVGKEYLEN Average length of keys within the index. The value is G
−1 if statistics have not been gathered.
CARDF Number of rows that the index or partition refers to. S
DSNUM Number of data sets. G
EXTENTS Number of data set extents. G
FAROFFPOSF Number of times that accessing a different, “far-off” S
page would be necessary when accessing all the data
records in index order.
NEAROFFPOSF Number of times that accessing a different, “near-off” S
page would be necessary when accessing all the data
records in index order.
SQTY The secondary space allocation in 4-KB blocks for the G
(user-managed) data set, in small integer format.
SECQTYI The secondary space allocation in 4-KB blocks for the G
(user-managed) data set, in integer format.
Table 125 lists the columns in SYSLOBSTATS that are updated by RUNSTATS with
the UPDATE SPACE or UPDATE ALL options.
Table 125. SYSLOBSTATS catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSLOBSTATS
column name Column description Use
AVGSIZE The average size of a LOB in the LOB table space. G
FREESPACE The number of kilobytes of available space in the S
LOB table space, up to the highest used RBA.
ORGRATIO The percentage of organization in the LOB table S
space. A value of 100 indicates perfect organization of
the LOB table space. A value of 1 indicates that the
LOB table space is disorganized.
sample 25% of the rows. The SHRLEVEL CHANGE option indicates that DB2 is to
permit other processes to make changes while this utility is executing.
//STEP1 EXEC DSNUPROC,UID=’IUJQU225.RUNSTA’,TIME=1440,
// UTPROC=’’,
// SYSTEM=’DSN’
//UTPRINT DD SYSOUT=*
//SYSIN DD *
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
TABLE(ALL) SAMPLE 25
INDEX(ALL)
SHRLEVEL CHANGE
Example 4: Updating statistics for columns in several tables. The following control
statement specifies that RUNSTATS is to update the catalog statistics for the
following columns in table space DSN8D91P.DSN8S91C:
v All columns in the TCONA and TOPTVAL tables
v The LINENO and DSPLINE columns in the TDSPTXT table
RUNSTATS TABLESPACE(DSN8D91P.DSN8S91C)
TABLE (TCONA)
TABLE (TOPTVAL) COLUMN(ALL)
TABLE (TDSPTXT) COLUMN(LINENO,DSPLINE)
Example 5: Updating all statistics for a table space. The following control
statement specifies that RUNSTATS is to update all catalog statistics (table space,
tables, columns, and indexes) for table space DSN8D91P.DSN8S91C.
RUNSTATS TABLESPACE(DSN8D91P.DSN8S91C) TABLE INDEX
Example 6: Updating statistics that are used for access path selection and
generating a report. The following control statement specifies that RUNSTATS is to
update the catalog with only the statistics that are collected for access path
selection. The utility is to report all statistics for the table space and route the
report to SYSPRINT.
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
REPORT YES
UPDATE ACCESSPATH
Example 7: Updating all statistics and generating a report. The following control
statement specifies that RUNSTATS is to update the catalog with all statistics
(access path and space) for table space DSN8D91A.DSN8S91E. The utility is also to
report the collected statistics and route the report to SYSPRINT.
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
REPORT YES
UPDATE ALL
Example 10: Updating catalog and history tables and reporting all statistics. The
following control statement specifies that RUNSTATS is to update the catalog
tables and history catalog tables with all statistics for table space
DBOE0101.TLOE0101 (including related indexes and columns). The utility is to
report the collected statistics and route the statistics to SYSPRINT.
RUNSTATS TABLESPACE DBOE0101.TLOE0101
INDEX
TABLE
REPORT YES
UPDATE ALL
HISTORY ALL
Example 11: Updating statistics on frequently occurring values. Assume that the
SYSADM.IXNPI index is defined on four columns: NP1, NP2, NP3, and NP4. The
following control statement specifies that RUNSTATS is to update the statistics for
index SYSADM.IXNPI.
The KEYCARD option indicates that the utility is to collect cardinality statistics for
column NP1; for column set NP1 and NP2; for column set NP1, NP2, and NP3; and
for column set NP1, NP2, NP3, and NP4. The FREQVAL option and its associated
parameters indicate that RUNSTATS is also to collect the 5 most frequently
occurring values on column NP1 (the first key column of the index), and the 10
most frequently occurring values on the column set NP1 and NP2 (the first two
key columns of the index). The utility is to report the collected statistics and route
the statistics to SYSPRINT.
RUNSTATS INDEX (SYSADM.IXNPI)
KEYCARD
FREQVAL NUMCOLS 1 COUNT 5
FREQVAL NUMCOLS 2 COUNT 10
REPORT YES
Example 13: Updating distribution statistics for specific columns and retrieving
the most frequently occurring values. The following control statement specifies that
RUNSTATS is to update statistics for the columns EMPLEVEL, EMPGRADE, and
EMPSALARY in table DSN8810.DEPT. The FREQVAL and COUNT options
indicate that RUNSTATS is to collect the 10 most frequently occurring values for
each column. The values are to be stored in the SYSCOLDIST and
SYSCOLDISTSTATS catalog tables.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
TABLE(DSN8810.DEPT)
COLGROUP(EMPLEVEL,EMPGRADE,EMPSALARY) FREQVAL COUNT 10
Example 14: Updating distribution statistics for specific columns in a table and
retrieving the least frequently occurring values. The following control statement
specifies that RUNSTATS is to update statistics for the columns EMPLEVEL,
EMPGRADE, and EMPSALARY in table DSN8810.DEPT. The FREQVAL and
COUNT options indicate that RUNSTATS is to collect the 15 least frequently
occurring values for each column. The values are to be stored in the SYSCOLDIST
and SYSCOLDISTSTATS catalog tables.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
TABLE(DSN8810.DEPT)
COLGROUP(EMPLEVEL,EMPGRADE,EMPSALARY) FREQVAL COUNT 15 LEAST
Example 15: Updating distribution statistics for specific columns in a table space
and retrieving the most and least frequently occurring values. The following
control statement specifies that RUNSTATS is to update statistics for the columns
EMPLEVEL, EMPGRADE, and EMPSALARY in table DSN8810.DEPT. The
FREQVAL and COUNT options indicate that RUNSTATS is to collect the 10 most
frequently occurring values for each column and the 10 least frequently occurring
values for each column. The values are to be stored in the SYSCOLDIST and
SYSCOLDISTSTATS catalog tables.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
TABLE(DSN8810.DEPT)
COLGROUP(EMPLEVEL,EMPGRADE,EMPSALARY) FREQVAL COUNT 10 BOTH
Example 16: Updating statistics for an index and retrieving the most and least
frequently occurring values. The following control statement specifies that
RUNSTATS is to collect the 10 most frequently occurring values and the 10 least
frequently occurring values for the first key column of index ADMF001.IXMA0101.
The KEYCARD option indicates that the utility is also to collect all the distinct
values in all the key column combinations. A set of messages is sent to SYSPRINT
and all collected statistics are updated in the catalog.
RUNSTATS INDEX(ADMF001.IXMA0101)
KEYCARD
FREQVAL NUMCOLS 1 COUNT 10 BOTH
REPORT YES UPDATE ALL
Example 17: Invalidating statements in the dynamic statement cache for a table
space without generating report statistics. The following control statement
specifies that RUNSTATS is to invalidate statements in the dynamic statement
cache for table space DSN8D81A.DSN8S81E. However, RUNSTATS is not to collect
or report statistics or update the catalog.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
REPORT NO
UPDATE NONE
Output: The output from STOSPACE consists of new values in a number of catalog
tables. See “Reviewing STOSPACE output” on page 639 for a list of columns and
tables that STOSPACE updates.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STOSPACE privilege
v SYSCTRL or SYSADM authority
Syntax diagram
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
STOGROUP
Identifies the storage groups that are to be processed.
(stogroup-name, ...)
Specifies the name of a storage group. You can use a list of one to
255 storage group names. Separate items in the list by commas, and
enclose them in parentheses.
* Indicates that all storage groups are to be processed.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Storage group
Object that is to be reported.
When DB2 storage groups are used in the creation of table spaces and indexes,
DB2 defines the data sets for them. The STOSPACE utility permits a site to monitor
the disk space that is allocated for the storage group.
STOSPACE does not accumulate information for more than one storage group. If a
partitioned table space or index space has partitions in more than one storage
group, the information in the catalog about that space comes from only the group
for which STOSPACE was run.
When you run the STOSPACE utility, the SPACEF column of the catalog represents
the high-allocated RBA of the VSAM linear data set. Use the value in the SPACEF
column to project space requirements for table spaces, table space partitions, index
spaces, and index space partitions over time. Use the output from the Access
Method Services LISTCAT command to determine which table spaces and index
spaces have allocated secondary extents. When you find these, increase the
primary quantity value for the data set, and run the REORG utility.
For information about space utilization in the DSN8S91E table space in the
DSN8D91A database, first run the STOSPACE utility, and then execute the
following SQL statement:
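The statement itself is not reproduced in this text rendering. A minimal query of
this kind, assuming that the SPACE column of SYSIBM.SYSTABLESPACE is the value
of interest, might look like the following:
SELECT SPACE
  FROM SYSIBM.SYSTABLESPACE
  WHERE NAME = 'DSN8S91E'
  AND DBNAME = 'DSN8D91A'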
Alternatively, you can use TSO to look at data set and pack descriptions.
You can restart a STOSPACE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
STOSPACE can run concurrently with any utility on the same target object.
However, because STOSPACE updates the catalog, concurrent STOSPACE utility
jobs or other concurrent applications that update the catalog might cause timeouts
and deadlocks.
You can use the STOSPACE utility on storage groups that have objects within
temporary databases.
Example 2: Specifying a storage group name that contains spaces. If the name of
the storage group that you want STOSPACE to process contains spaces, enclose the
entire storage group name in single quotation marks. Parentheses are optional. The
following statements are correct ways to specify a storage group with the name
THIS IS STOGROUP.1.ONE:
STOSPACE STOGROUP(’THIS IS STOGROUP.1.ONE’)
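Because the parentheses are optional, the following equivalent form (shown here as
an illustrative variant, not reproduced from the original) is also correct:
STOSPACE STOGROUP ’THIS IS STOGROUP.1.ONE’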
Note: If the value is too large to fit in the SPACE column, the SPACEF column is updated.
Example 3: Updating catalog SPACE columns for all storage groups. The following
control statement specifies that the STOSPACE utility is to update the catalog
SPACE or SPACEF columns for all storage groups.
STOSPACE STOGROUP *
Example 4: Updating catalog SPACE columns for several storage groups. The
following control statement specifies that the STOSPACE utility is to update the
catalog SPACE or SPACEF columns for storage groups DSN8G810 and DSN8G81U.
STOSPACE STOGROUP(DSN8G810, DSN8G81U)
Templates enable you to standardize data set names across the DB2 subsystem and
to easily identify the data set type when you use variables in the data set name.
These variables are listed in “Option descriptions” on page 644.
The TEMPLATE control statement uses the z/OS DYNALLOC macro (SVC 99) to
perform data set allocation. Therefore, the facility is constrained by the limitations
of this macro and by the subset of DYNALLOC that is supported by TEMPLATE.
See z/OS MVS Programming: Assembler Services Guide for more details.
Syntax diagram
The TEMPLATE railroad diagram is not reproduced in this text rendering. It shows
the TEMPLATE statement and its fragments: name-expression (a series of
qualifier-expressions and an optional parenthetical-expression); qualifier-expression
(a character-expression or an &variable. with optional (start,length) substring
notation); common-options (MGMTCLAS name, STORCLAS name, RETPD integer,
EXPDL 'date', VOLUMES (volser,...), VOLCNT integer, UNCNT integer, GDGLIMIT
integer with a default of 99, DISP with status NEW|OLD|SHR|MOD and terminations
DELETE|KEEP|CATLG|UNCATLG, and LIMIT(n CYL|GB|MB, new_template));
disk-options (NBRSECND integer with a default of 10, DIR integer, and DSNTYPE
LIBRARY|PDS|HFS|NULL); tape-options; and the SUBSYS name, LRECL int, and
RECFM F|FB|V|VB keywords.
Notes:
1 The entire name-expression represents one character string and cannot contain any blanks.
2 If you use substring notation, the entire DSN operand must be enclosed in single quotation
marks. For example, the DSN operand 'P&PA(4,2).' uses substring notation, so it is enclosed in
single quotation marks.
3 The &PA. variable cannot be used more than once.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
TEMPLATE template-name
Defines a data set allocation template and assigns to the template a
name, template-name, for subsequent reference on a DB2 utility
control statement. The template-name can have up to eight
alphanumeric characters and must begin with an alphabetic
character.
The template-name is followed by keywords that control the
allocation of tape and disk data sets. A single TEMPLATE
statement cannot have both disk options and tape options. The
UNIT keyword specifies a generic unit name that is defined on
your system. This value is used to determine if a disk or tape data
set is being allocated. All other keywords specified on the
TEMPLATE control statement must be consistent with the specified
unit type.
DSN name-expression
Specifies the template for the z/OS data set name. You can specify
the data set name, name-expression, by using symbolic variables,
non-variable alphanumeric or national characters, or any
combination of these characters. The resulting name must adhere
to the z/OS data set naming rules, including those rules about
name length, valid characters, name structure and qualifier length.
| You must specify a DSN expression that is unique for each data set
| that the utility allocates and for each invocation of the utility.
Data set names consist of a series of qualifiers, qualifier-expression,
that are separated by a period (.) and an optional parenthetical
expression. No imbedded blanks are allowed.
If the DSN name operand contains any special characters, it must
be enclosed in single quotation marks. For example, in the
following TEMPLATE statement, the DSN operand contains the
parentheses special character, so the entire operand is enclosed in
single quotation marks:
TEMPLATE X DSN ’A.GDG.VERSION(+1)’
Parentheses around the DSN name operand are optional. They are
used in the following DSN specification:
DSN(&DB..&TS..D&DATE.)
SUBSYS name
Specifies the MVS BATCHPIPES SUBSYSTEM name. The SUBSYS
operand must be a valid BATCHPIPES SUBSYSTEM name and
must not exceed eight characters in length. When SUBSYS is
specified, LRECL and RECFM are required.
LRECL int
Specifies the record length of the MVS BATCHPIPES SUBSYSTEM
file. There is no default value and this option is required when
SUBSYS is specified.
RECFM
Specifies the record format of the MVS BATCHPIPES SUBSYSTEM
file. The valid values are F, FB, V, or VB. There is no default value
and this option is required when SUBSYS is specified.
character-expression
Specifies the data set name or part of the data set name by using
non-variable alphanumeric or national characters.
&variable. Specifies the data set name or part of the data set name by using
symbolic variables. See Table 128 on page 646, Table 129 on page
646, Table 130 on page 646, and Table 131 on page 647 for a list of
variables that can be used.
Each symbolic variable is substituted with its related value at
execution time to form a specific data set name. When used in a
DSN expression, substitution variables begin with an ampersand
sign (&) and end with a period (.), as in the following example:
DSN &DB..&TS..D&JDATE..COPY&ICTYPE.
You can also use substring notation for the data set name. This
notation can help you keep the data set name from exceeding the
44 character maximum. If you use substring notation, the entire
DSN operand must be enclosed in single quotation marks. To
specify a substring, use the form &variable(start). or
&variable(start,length).
start
Specifies the substring’s starting byte location within the
current variable base value at the time of execution. start must
be an integer from 1 to 128.
length
Specifies the length of the substring. If you specify start but do
not specify length, length, by default, is the number of
characters from the start character to the last character of the
variable value at the time of execution. For example, given a
five-digit base value, &PART(4). specifies the fourth and fifth
digits of the value. length must be an integer that does not
cause the substring to extend beyond the end of the base
value. For more examples of variable substring notation, see
“Sample TEMPLATE control statements” on page 658.
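For example (this statement is an illustrative sketch, not one of the original
samples; the template name SUBTP is hypothetical), the following template keeps
only three digits of the five-digit partition number in the last qualifier:
TEMPLATE SUBTP DSN ’&DB..&TS..P&PART(3,3).’
Because substring notation is used, the entire DSN operand is enclosed in single
quotation marks.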
Table 128 on page 646 contains a list of JOB variables and their
descriptions.
Notes:
1. When you specify the &TS., &IS., or &SN. variables in a template that is used by an
UNLOAD statement with BLOBF, CLOBF, or DBCLOBF, DB2 substitutes the name of the
table space that stores the LOB column value, not the base table space name. This
substitution enables DB2 to generate unique data set names for each LOB column with
partitioned table spaces.
2. Use the &PA. variable when processing LISTDEF lists with the PARTLEVEL keyword or
data-partitioned secondary indexes. Otherwise, DB2 could generate duplicate data set
names.
Table 131 contains a list of DATE and TIME variables and their
descriptions.
Table 131. DATE and TIME variables
Variable | Description
&DATE. or &DT. | YYYYMMDD
&TIME. or &TI. | HHMMSS
&JDATE. or &JU. | YYYYDDD
&YEAR. or &YE. | YYYY portion of &DATE.
&MONTH. or &MO. | MM portion of &DATE.
&DAY. or &DA. | DD portion of &DATE.
&JDAY. or &JD. | DDD portion of &JDATE.
&HOUR. or &HO. | HH portion of &TIME.
&MINUTE. or &MI. | MM portion of &TIME.
&SECOND. or &SC. | SS portion of &TIME.
&UNIQ. or &UQ. | Unique eight characters that DB2 derives from the system clock. This set of characters begins with an alphabetical character and is followed by seven alphabetical or numeric characters.
Note: All date and time values are set by using the STCK instruction, and they reflect the
date and time value in Greenwich Mean Time (GMT). DATE and TIME values are captured
in the UTILINIT phase of each utility and remain constant until the utility terminates.
parenthetical-expression
Specifies part of the data set name by using non-variable
alphanumeric or national characters that are enclosed in
parentheses. For example, the expressions Q1.Q2.Q3(member) and
Q1.Q2.Q3(+1) use valid parenthetical expressions.
UNIT unit-name
Specifies the device number, generic device type, or group name
for the data set. All other TEMPLATE keywords are validated
based on the specified type of unit (disk or tape). The default is
SYSALLDA.
MODELDCB dsname
Specifies the name of the data set on which the template is based.
DCB information is read from this model data set.
BUFNO integer
Specifies the number of BSAM buffers. The specified value must be
in the range from 0 to 99. The default is 30.
DATACLAS name
Specifies the SMS data class. The name value must be a valid SMS
data class and must not exceed eight characters in length.
The data set is cataloged if DATACLAS is specified. If this option
is omitted, no DATACLAS is specified to SMS.
MGMTCLAS name
Specifies the SMS management class. The name value must be a
valid SMS management class and must not exceed eight characters
in length.
The data set is cataloged if MGMTCLAS is specified. If this option
is omitted, no MGMTCLAS is specified to SMS.
STORCLAS name
Specifies the SMS storage class. The name value must be a valid
SMS storage class and must not exceed eight characters in length.
The data set is cataloged if STORCLAS is specified. If this option is
omitted, no STORCLAS is specified to SMS.
RETPD integer Specifies the retention period in days for the data set. The integer
value must be in the range from 0 to 9999.
If DATACLAS, MGMTCLAS, or STORCLAS is specified, the class
definition might control the retention. RETPD cannot be specified
with EXPDL, and it is not valid with the JES3DD option.
EXPDL 'date' Specifies the expiration date for the data set, in the form
YYYYDDD, where YYYY is the four-digit year, and DDD is the
three-digit Julian day. The 'date' value must be enclosed by single
quotation marks.
If DATACLAS, MGMTCLAS, or STORCLAS is specified, the class
definition might control the retention. EXPDL cannot be specified
with RETPD, and it is not valid with the JES3DD option.
VOLUMES (vol1,vol2,...)
Specifies a list of volume serial numbers for this allocation. If the
data set is not cataloged, the list is truncated, if necessary, when it
is stored in SYSIBM.SYSCOPY. The specified number of volumes
cannot exceed the specified or default value of VOLCNT.
The first volume must contain enough space for the primary space
allocation.
If an individual volume serial-number contains leading zeros, it
must be enclosed in single quotation marks.
VOLCNT (integer)
Specifies the maximum number of volumes that an output data set
might require. The specified value must be between 0 and 255. The
default for tape templates is 95. For disk templates, the utility does
not set a default value. Operating system defaults apply.
UNCNT integer
Specifies the number of devices that are to be allocated. The
specified value must be in the range from 0 to 59.
If UNIT specifies a specific device number, the value of UNCNT
must either be 1 or be omitted.
GDGLIMIT (integer)
Specifies the number of entries that are to be created in a GDG
base if a GDG DSN is specified and the base does not already
exist. If a GDG base does not already exist and you do not want to
define one, specify a GDGLIMIT of zero (0).
The default value is 99. The integer value must be in the range
from 0 to 255.
DISP (status,normal-termination,abnormal-termination)
Specifies the data set disposition by using three positional
parameters: status, normal-termination, and abnormal-termination.
All three parameters must be specified.
status
Standard z/OS values are allowed: NEW, OLD, SHR, MOD.
normal-termination
Standard z/OS values are allowed: DELETE, KEEP, CATLG,
UNCATLG.
abnormal-termination
Standard z/OS values are allowed: DELETE, KEEP, CATLG,
UNCATLG.
Default values for DISP vary, depending on the utility and the data
set that is being allocated. Defaults for restarted utilities also differ
from default values for new utility executions. Default values are
shown in Table 132 and Table 133 on page 650.
Table 132. Data dispositions for dynamically allocated data sets for new utility executions (continued)
Columns, in order: ddname | CHECK DATA | CHECK INDEX or CHECK LOB | COPY | COPYTOCOPY | LOAD | MERGECOPY | REBUILD INDEX | REORG INDEX | REORG TABLESPACE | UNLOAD
SYSRCPY1 | Ignored | Ignored | NEW CATLG CATLG | Ignored | NEW CATLG CATLG | NEW CATLG CATLG | Ignored | Ignored | NEW CATLG CATLG | Ignored
SYSRCPY2 | Ignored | Ignored | NEW CATLG CATLG | Ignored | NEW CATLG CATLG | NEW CATLG CATLG | Ignored | Ignored | NEW CATLG CATLG | Ignored
SYSUT1 | NEW DELETE CATLG | NEW DELETE CATLG | Ignored | Ignored | NEW DELETE CATLG | Ignored | NEW DELETE CATLG | NEW CATLG CATLG | NEW DELETE CATLG | Ignored
SORTOUT | NEW DELETE CATLG | Ignored | Ignored | Ignored | NEW DELETE CATLG | Ignored | Ignored | NEW DELETE CATLG | NEW DELETE CATLG | Ignored
SYSMAP | Ignored | Ignored | Ignored | Ignored | NEW CATLG CATLG | Ignored | Ignored | Ignored | Ignored | Ignored
SYSERR | NEW CATLG CATLG | Ignored | Ignored | Ignored | NEW CATLG CATLG | Ignored | Ignored | Ignored | Ignored | Ignored
FILTERDDS | Ignored | Ignored | NEW DELETE DELETE | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored
Table 133. Data dispositions for dynamically allocated data sets on RESTART (continued)
Columns, in order: ddname | CHECK DATA | CHECK INDEX or CHECK LOB | COPY | COPYTOCOPY | LOAD | MERGECOPY | REBUILD INDEX | REORG INDEX | REORG TABLESPACE | UNLOAD
FILTERDDS | Ignored | Ignored | NEW DELETE DELETE | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored
| Restrictions:
| v You cannot switch to a DD card.
| v The template control statement that LIMIT references must exist
| in SYSIN or SYSTEMPL and it cannot refer to itself.
| v Switching can only be performed a single time per allocation.
| Multiple switching cannot take place.
| v The utility PREVIEW function ignores the LIMIT keyword; only
| the original TEMPLATE control statement is previewed. The
| LIMIT keyword is ignored for new templates.
disk-options
SPACE (primary,secondary)
Specifies the z/OS disk space allocation parameters, each in the
range from 1 to 16777215. If you specify (primary,secondary)
values, these values are used instead of the DB2-calculated values.
tape-options
In Table 134, ″Yes″ indicates that the specified utility supports tape
stacking for the specified data set. ″No″ indicates that the specified
utility does not support tape stacking for the specified data set.
″Ignored″ indicates that the specified data set does not apply to the
specified utility.
Table 134. Supported data sets for tape stacking
Columns, in order: ddname | CHECK DATA | CHECK INDEX or CHECK LOB | COPY | COPYTOCOPY | LOAD | MERGECOPY | REBUILD INDEX | REORG INDEX | REORG TABLESPACE | UNLOAD
SYSREC | Ignored | Ignored | Ignored | Ignored | No | Ignored | Ignored | Ignored | Yes | Yes
SYSDISC | Ignored | Ignored | Ignored | Ignored | No | Ignored | Ignored | Ignored | Yes | Ignored
SYSPUNCH | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Yes | Yes
SYSCOPY | Ignored | Ignored | Yes | Yes | No | Yes | Ignored | Ignored | Yes | Ignored
SYSCOPY2 | Ignored | Ignored | Yes | Yes | No | Yes | Ignored | Ignored | Yes | Ignored
SYSRCPY1 | Ignored | Ignored | Yes | Yes | No | Yes | Ignored | Ignored | Yes | Ignored
SYSRCPY2 | Ignored | Ignored | Yes | Yes | No | Yes | Ignored | Ignored | Yes | Ignored
SYSUT1 | No | No | Ignored | Ignored | No | Ignored | No | No | No | Ignored
SORTOUT | No | Ignored | Ignored | Ignored | No | Ignored | Ignored | No | No | Ignored
SYSMAP | Ignored | Ignored | Ignored | Ignored | No | Ignored | Ignored | Ignored | Ignored | Ignored
SYSERR | No | Ignored | Ignored | Ignored | No | Ignored | Ignored | Ignored | Ignored | Ignored
FILTERDDS | Ignored | Ignored | No | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored | Ignored
JES3DD ddname
Specifies the JCL DD name that is to be used at job initialization
time for the tape unit. JES3 requires that all required tape units be
pre-allocated by DD statements. Use the JES3DD option to specify which
unit the utility is to use for this template.
TRTCH Specifies the track recording technique for magnetic tape drives
that have improved data recording capability.
NONE
Specifies that the TRTCH specification is to be eliminated from
dynamic allocation. The default is NONE.
COMP
Specifies that data is to be written in compacted format.
NOCOMP
Specifies that data is to be written in standard format.
End of tape-options
As an alternative to using JCL to specify the data sets, you can use the TEMPLATE
utility control statement to dynamically allocate utility data sets. Options of the
TEMPLATE utility allow you to specify the following information:
v The data set naming convention
v DFSMS parameters
v Disk or tape allocation parameters
You can specify a template in the SYSIN data set, immediately preceding the utility
control statement that references it, or in one or more TEMPLATE libraries.
A TEMPLATE library is a data set that contains only TEMPLATE utility control
statements. You can specify a TEMPLATE data set DD name by using the
TEMPLATEDD option of the OPTIONS utility control statement. This specification
applies to all subsequent utility control statements until the end of input or until
DB2 encounters a new OPTIONS TEMPLATEDD(ddname) specification.
Any template that is defined within SYSIN overrides another template definition of
the same name in a TEMPLATE data set.
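For illustration only (the library data set name, the TEMPLIB DD name, and the
COPYTMP template name are assumptions, not from the original text), a job might
reference a TEMPLATE library as follows, where the library is assumed to contain a
TEMPLATE statement named COPYTMP:
//TEMPLIB  DD DSN=UTIL.TEMPLATE.LIBRARY,DISP=SHR
//SYSIN    DD *
  OPTIONS TEMPLATEDD(TEMPLIB)
  COPY TABLESPACE DSN8D91A.DSN8S91E COPYDDN(COPYTMP)
/*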
TEMPLATE utility control statements enable you to standardize data set allocation
and the utility control statements that reference those data sets, which reduces the
need to customize and alter utility job streams.
The required TEMPLATE statement might look something like the following
TEMPLATE statement:
TEMPLATE tmp1 DSN(DB2.&TS..D&JDATE..COPY&ICTYPE.&LOCREM.&PRIBAC.)
VOLUMES(vol1,vol2,vol3)
LISTDEF payroll INCLUDE TABLESPACE PAYROLL.*
INCLUDE INDEXSPACE PAYROLL.*IX
EXCLUDE TABLESPACE PAYROLL.TEMP*
EXCLUDE INDEXSPACE PAYROLL.TMPIX*
COPY LIST payroll COPYDDN(tmp1,tmp1) RECOVERYDDN(tmp1,tmp1)
See “Syntax and options of the TEMPLATE control statement ” on page 641 for
details.
DB2 usually estimates the size of a data set based on the size of other existing data
sets; however, if any of the required data sets are on tape, DB2 is unable to
estimate the size. When DB2 is able to calculate size, it calculates the maximum
size. This action can result in overly large data sets. DB2 always allocates data set
size with the RLSE (release) option so that unused space is released on
deallocation. However, in some cases, the calculated size of required data sets is too
large for the DYNALLOC interface to handle. In this case, DB2 issues error
message DSNU1034I, and you must allocate the data set by using a DD statement.
If the object is part of a LISTDEF list, you might need to remove it from the list and
process it individually.
PCTPRIME
By default, 100% of the required space that DB2 estimates is allocated as
the PRIMARY quantity. If this amount of space is typically not available
on a single volume, decrease PCTPRIME.
MAXPRIME
If you want an upper limit based on size, not on percentage, use
MAXPRIME.
NBRSECND
After the restrictions on the PRIMARY quantity have been applied, a
SECONDARY quantity equal to the estimated required space is divided
into the specified number of secondary extents.
If you omit the SPACE option quantities, current data set space estimation
formulas that are shown in the ″Data sets that utility uses″ sections for each online
utility are implemented as default values for disk data sets.
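As an illustrative sketch (the template name SPCTMP and the specific values are
assumptions, not from the original text), a disk template can let DB2 estimate the
space while constraining how the estimate is applied:
TEMPLATE SPCTMP DSN(&DB..&TS..D&DATE.)
    SPACE CYL PCTPRIME 50 MAXPRIME 500 NBRSECND 20
With these values, at most half of the estimated space, and never more than 500
cylinders, is requested as the primary quantity, and a secondary quantity equal to
the estimated space is divided into 20 secondary extents.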
| Template switching
| Template switching is most commonly used to direct small data sets to disk and
| large data sets to tape, but it can also be used to switch to templates that differ in
| DSNs or in SMS classes. The decision to switch is made based on the estimated
| output data set size, which might differ from the actual final size of the output data
| set. This difference is particularly true for incremental image copies, which are
| estimated at 10% of the space that is required for a full image copy.
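As a sketch under assumptions (the template names SMALLCP and TAPECP and the
100-cylinder threshold are illustrative, not from the original text), a disk template
can name a tape template to switch to when the estimated output size exceeds the
limit:
TEMPLATE TAPECP DSN(&DB..&TS..D&DATE.)
    UNIT TAPE STACK YES
TEMPLATE SMALLCP DSN(&DB..&TS..D&DATE.)
    UNIT SYSDA LIMIT(100 CYL,TAPECP)
Note that the template that LIMIT references (TAPECP here) must exist in SYSIN or
SYSTEMPL, and switching is performed at most once per allocation.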
You can restart a TEMPLATE utility job, but it starts from the beginning again. If
you are restarting this utility as part of a larger job in which TEMPLATE
completed successfully, but a later utility failed, do not change the TEMPLATE
utility control statement, if possible. If you must change the TEMPLATE utility
control statement, use caution; any changes can cause the restart processing to fail.
For example, if you change the template name of a temporary work data set that
was opened in an earlier phase and closed but is to be used later, the job fails. For
guidance in restarting online utilities, see “Restarting an online utility” on page 39.
Example 2: Using variable substring notation to specify data set names. The
following control statement defines template CP2. Variable substring notation is
used in the DSN option to define the data set naming convention.
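The CP2 control statement itself is not reproduced in this text rendering. A
reconstruction that is consistent with the resulting data set name shown below (the
DH173001 high-level qualifier is taken from that name, so treat this statement as an
assumption rather than the original sample) might look like the following:
TEMPLATE CP2 DSN ’DH173001.&TS..Y&YEAR(3,2)..COPY&LOCREM.&PRIBAC..P&PART(3,3).’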
Assume that in the year 2003 you make a full image copy of partition 00004 of
table space DSN8S81D. Assume that you specify the template CP2 for the data set
for the local primary copy. DB2 gives the following name to the image copy data
set: DH173001.DSN8S81D.Y03.COPYLP.P004
Notice that every variable in the DSN option begins with an ampersand (&) and
ends with a period (.). These ampersands and periods are not included in the data
set name. Only periods that do not signal the end of a variable are included in the
data set name.
Example 3: Using COPY with TEMPLATE with variable substring notation. The
following TEMPLATE utility control statement defines template SYSCOPY. Variable
substring notation is used in the DSN option to define the data set naming
convention. The subsequent COPY utility control statement specifies that DB2 is to
make a local primary copy of the first partition of table space
DSN8D81A.DSN8S81E. COPY is to write this image copy to a data set that is
dynamically allocated according to the SYSCOPY template. In this case, the
resulting data set name is DSN8D81A.DSN8S81E.P001
TEMPLATE SYSCOPY DSN ’&DB..&TS..P&PA(3).’
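The COPY control statement that this example describes is not reproduced in this
text rendering. A statement of the kind described (a local primary copy of partition
1 that uses the SYSCOPY template) might look like the following; treat it as a
sketch rather than the original sample:
COPY TABLESPACE DSN8D81A.DSN8S81E
     DSNUM 1 COPYDDN(SYSCOPY)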
Notice that you can change the part variable in the DSN operand from P&PA(3). to
P&PA(3,3). The resulting data set name is the same, because the length value of 3
is implied in the first specification.
Example 4: Specifying a template for tape data sets with an expiration date. The
following control statement defines the TAPEDS template. Any data sets that are
defined with this template are to be allocated on a device of type 3590-1, as
indicated by the UNIT option, and are to expire on 1 January 2100, as indicated by
the EXPDL option. The DSN option indicates that these data set names are to have
the following three parts: database name, table space name, and date.
TEMPLATE TAPEDS DSN(&DB..&TS..D&DATE.)
UNIT 3590-1 EXPDL ’2100001’
Example 5: Specifying a disk template that gives space allocation parameters. The
following control statement defines the DISK template. Any data sets that are
defined with this template are to have 100 cylinders of primary disk space and 10
cylinders of secondary disk space, as indicated by the SPACE and CYL options.
The DSN option indicates that the data set names are to have the following three
parts: database name, table space name, and time.
TEMPLATE DISK DSN &DB..&TS..T&TIME.
SPACE(100,10) CYL
Example 6: Specifying a disk template that uses a default size with constraints.
The following control statement defines the DISK template. Because the SPACE
option does not specify quantities for primary and secondary space allocation, DB2
calculates these values with the following constraint: the maximum allowable
primary space allocation is 1000 cylinders. This constraint is indicated by the
MAXPRIME option. The DSN option indicates that the data set names are to have
the following three parts: database name, table space name, and time.
TEMPLATE DISK DSN(&DB..&TS..T&TIME.)
SPACE CYL MAXPRIME 1000
Example 7: Using TEMPLATE with LISTDEF and COPY. In the following example,
the LISTDEF utility control statement defines the CPY1 list. The TEMPLATE
control statement then defines the TMP1 template. The COPY utility control
statement then specifies that DB2 is to make local copies of the objects in the CPY1
list. DB2 is to write these copies to data sets that are dynamically allocated
according to the characteristics that are defined in the TMP1 template.
LISTDEF CPY1 INCLUDE TABLESPACES TABLESPACE DBA906*.T*A906*
INCLUDE INDEXSPACES COPY YES INDEXSPACE ADMF001.I?A906*
TEMPLATE TMP1 UNIT SYSDA
DSN (DH109006.&STEPNAME..&SN..T&TIME.)
DISP (MOD,CATLG,CATLG)
COPY LIST CPY1 COPYDDN (TMP1) PARALLEL (2) SHRLEVEL REFERENCE
//************************************************************
//* COMMENT: Define a model data set. *
//************************************************************
//STEP1 EXEC PGM=IEFBR14
//SYSCOPX DD DSN=JULTU225.MODEL,DISP=(NEW,CATLG,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20)),VOL=SER=SCR03,
// DCB=(RECFM=FB,BLKSIZE=4000,LRECL=100)
//***********************************************************
//* COMMENT: GDGLIMIT(6)
//***********************************************************
//STEP2 EXEC DSNUPROC,UID=’JULTU225.GDG’,
// UTPROC=’’,
// SYSTEM=’SSTR’
//SYSIN DD *
TEMPLATE COPYTEMP
UNIT SYSDA
DSN ’JULTU225.GDG(+1)’
MODELDCB JULTU225.MODEL
GDGLIMIT(6)
COPY TABLESPACE DBLT2501.TPLT2501
FULL YES
COPYDDN (COPYTEMP)
SHRLEVEL REFERENCE
/*
Figure 104. Example TEMPLATE and COPY statements for writing a local copy to a data set
that is dynamically allocated according to the characteristics of the template.
Example 9: Using a template to copy a GDG data set to tape. In the example in
Figure 105 on page 661, the OPTIONS utility control statement causes the
subsequent TEMPLATE statement to run in PREVIEW mode. In this mode, DB2
checks the syntax of the TEMPLATE statement. If DB2 determines that the syntax
is valid, it expands the data set names. The OPTIONS OFF statement ends
PREVIEW mode processing. The subsequent COPY utility control statement
executes normally. The COPY statement specifies that DB2 is to write a local image
copy of the table space DBLT4301.TPLT4301 to a data set that is dynamically
allocated according to the characteristics that are defined in the COPYTEMP
template. According to the COPYTEMP template, this data set is to be named
JULTU243.GDG(+1) (as indicated by the DSN option) and is to be stacked on the
tape volume 99543 (as indicated by the UNIT, STACK, and VOLUMES options).
The data set dispositions are specified by the DISP option. The GDGLIMIT option
specifies that 50 entries are to be created in a GDG base.
/*
//*************************************************
//* COMMENT: COPY GDG DATA SET TO TAPE
//*************************************************
//STEP1 EXEC DSNUPROC,UID=’JULTU243.GDG’,
// UTPROC=’’,
// SYSTEM=’SSTR’
//SYSIN DD *
OPTIONS PREVIEW
TEMPLATE COPYTEMP
UNIT TAPE
DSN ’JULTU243.GDG(+1)’
VOLUMES (99543)
GDGLIMIT(50)
DISP(NEW,CATLG,CATLG)
STACK YES
OPTIONS OFF
COPY TABLESPACE DBLT4301.TPLT4301
FULL YES
COPYDDN (COPYTEMP)
SHRLEVEL REFERENCE
/*
Figure 105. Example job that uses OPTIONS, TEMPLATE, and COPY statements to copy a
GDG data set to tape.
Example 10: Creating a template that can be used for unloading LOB objects. The
TEMPLATE control statement in Figure 106 defines a template called LOBFRV. The
subsequent UNLOAD statement specifies that each CLOB in the RESUME column
is to be unloaded to files that are dynamically allocated according to the
characteristics defined for the LOBFRV template. In this case, those files are to be
partitioned data sets, as specified by the DSNTYPE option. Each data set is to have
the name UNLODTEST.database-name.LOB-table-space-name.RESUME, as specified by
the DSN option. The name of each CLOB PDS is written to the unload data set.
By default, the unload data set is defined by the SYSREC DD statement or
template.
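The LOBFRV TEMPLATE control statement is not reproduced in this text rendering.
A reconstruction that is consistent with the description above (partitioned data sets
that are named UNLODTEST.database-name.LOB-table-space-name.RESUME) might
look like the following; treat it as an assumption rather than the original statement:
TEMPLATE LOBFRV DSN ’UNLODTEST.&DB..&TS..RESUME’
     UNIT SYSDA DSNTYPE PDS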
UNLOAD DATA
FROM TABLE DSN8910.EMP_PHOTO_RESUME
(EMPNO CHAR(6),
RESUME VARCHAR(255) CLOBF LOBFRV)
SHRLEVEL CHANGE
Figure 106. Example job that creates a template that can be used for unloading LOB objects.
The output records that the UNLOAD utility writes are compatible as input to the
LOAD utility; as a result, you can reload the original table or different tables.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v Ownership of the tables
v SELECT privilege on the tables
| v DBADM authority for the database. If the object on which the utility operates is
| in an implicitly created database, DBADM authority on DSNDB04 or the
| implicitly created database is sufficient.
v SYSADM authority
v SYSCTRL authority (catalog tables only)
If you use RACF access control with multilevel security and UNLOAD is to
process a table space that contains a table that has multilevel security with
row-level granularity, you must be identified to RACF and have an accessible valid
security label. Each row is unloaded only if your security label dominates the data
security label. If your security label does not dominate the data security label, the
row is not unloaded, but DB2 does not issue an error message. For more
information about multilevel security and security labels, see Part 3 of DB2
Administration Guide.
Syntax diagram
The UNLOAD railroad diagram is not reproduced in this text rendering. It shows
the UNLOAD statement with three alternative sources: the source-spec fragment
(TABLESPACE database-name.tablespace-name with PART integer or int1:int2,
FROMCOPY data-set-name, FROMVOLUME CATALOG or vol-ser, FROMSEQNO n,
and FROMCOPYDDN ddname), the LIST listdef-name keyword, and the
from-table-spec fragment. The unload-spec fragment includes NOSUBS, NOPAD,
CCSID(integer,...), FLOAT S390 or FLOAT IEEE, DELIMITED with COLDEL (default
','), CHARDEL (default '"'), and DECPT (default '.'), and DECFLOAT_ROUNDMODE
with ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN,
ROUND_HALF_EVEN, ROUND_HALF_UP, or ROUND_UP.
Note: The FROMSEQNO option is required if you are unloading an image copy
from a tape data set that is not cataloged.
FROM-TABLE-spec:
For the syntax diagram and the option descriptions of the FROM TABLE
specification, see “FROM-TABLE-spec” on page 674.
For the syntax diagram and the option descriptions of the FROM TABLE
specification, see “FROM-TABLE-spec ” on page 674.
Option descriptions
“Control statement coding rules” on page 18 provides general information about
specifying options for DB2 utilities.
DATA Identifies the data that is to be selected for unloading with table-name in
the from-table-spec. The DATA keyword is mutually exclusive with
TABLESPACE, PART, and LIST keywords.
When you specify the DATA keyword, or you omit either the
TABLESPACE or the LIST keyword, you must also specify at least one
FROM TABLE clause.
TABLESPACE
Specifies the table space (and, optionally, the database to which it belongs)
from which the data is to be unloaded.
database-name
The name of the database to which the table space belongs. The name
cannot be DSNDB01 or DSNDB07. The default is DSNDB04.
tablespace-name
The name of the table space from which the data is to be unloaded.
The specified table space must not be a LOB or XML table space.
PART
Identifies a partition or a range of partitions from which the data is to
be unloaded. This keyword applies only if the specified table space is
partitioned. You cannot specify PART with LIST. The maximum is 4096.
integer
Designates a single partition. integer must identify an existing
partition number within the table space.
int1:int2
Designates a range of partitions from int1 to int2. int1 must be a
positive integer that is less than the highest partition number
within the table space. int2 must be an integer that is greater than
int1 and less than or equal to the highest partition number.
If the specified image copy data set is a full image copy, either
compressed or uncompressed records can be unloaded.
Compressed records can be unloaded only when the same data set
contains the dictionary pages for decompression. If an image copy
data set contains a compressed row and a dictionary is not
available, DB2 issues an error message. See
“MAXERR” on page 672 for more information about specifying
error-tolerance conditions.
The UNLOAD utility associates a single table space with one output data
set, except when partition-parallelism is activated. When you use the LIST
option with a LISTDEF that represents multiple table spaces, you must also
define a data set TEMPLATE that corresponds to all of the table spaces and
specify the template-name in the UNLDDN option.
If you want to generate the LOAD statements, you must define another
TEMPLATE for the PUNCHDDN data set that is similar to UNLDDN. DB2
then generates a LOAD statement for each table space. This utility will
only process clone data if the CLONE keyword is specified. The use of
CLONED YES on the LISTDEF statement is not sufficient.
PUNCHDDN
Specifies the DD name for a data set or a template name that defines one
or more data set names that are to receive the LOAD utility control
statements that the UNLOAD utility generates.
ddname
Specifies the DD name. The default is SYSPUNCH.
template-name
Identifies the name of a data set template that is defined by a
TEMPLATE utility control statement.
If the specified name is defined both as a DD name (in the JCL) and as a
template name (in a TEMPLATE statement), it is treated as the DD name.
When you run the UNLOAD utility for multiple table spaces and you
want to generate corresponding LOAD statements, you must have multiple
output data sets that correspond to the table spaces so that DB2 retains all
of the generated LOAD statements. In this case, you must specify an
appropriate template name to PUNCHDDN. If you omit the PUNCHDDN
specification, the LOAD statements are not generated.
UNLDDN
Specifies the DD name for a data set or a template name that defines one
or more data set names into which the unloaded rows are to be written.
The default is SYSREC.
If the specified name is defined both as a DD name (in the JCL) and as a
template name (in a TEMPLATE statement), it is treated as the DD name.
When you run the UNLOAD utility for a partitioned table space, the
selected partitions are unloaded in parallel if the following conditions are
true:
1. You specify a template name for UNLDDN.
2. The template data set name contains the partition as a variable (&PART.
or &PA.) without substring notation. This template name is expanded
into multiple data sets that correspond to the selected partitions.
3. The TEMPLATE control statement does not contain all of the following
options:
v STACK(YES)
v UNIT(TAPE)
v An UNCNT value that is less than or equal to one.
If conditions 1 and 2 are true, but condition 3 is false, partition parallelism
is not activated and all output data sets are stacked on one tape.
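As an illustrative sketch (the template name UNLDTMP and the use of the
DSN8D91A.DSN8S91E sample table space are assumptions, not from the original
text), a template whose DSN contains the &PA. variable without substring notation
lets the selected partitions be unloaded in parallel:
TEMPLATE UNLDTMP DSN(&DB..&TS..P&PA..UNLD)
     UNIT SYSDA
UNLOAD TABLESPACE DSN8D91A.DSN8S91E PART 1:4
     UNLDDN UNLDTMP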
Similarly, when you run the UNLOAD utility for multiple table spaces, the
output records are placed in data sets that correspond to the respective
table spaces. Therefore the output data sets must be physically distinctive,
and you must specify an appropriate template name to UNLDDN. If you
omit the UNLDDN specification, the SYSREC DD name is not used, and
an error occurs.
If you specify both FORMAT DELIMITED and UNICODE, all output data
is in CCSID 1208, UTF-8; any other specified CCSID is ignored.
The following specifications are also valid:
CCSID(integer1)
Indicates that only an SBCS CCSID is specified.
CCSID(integer1,integer2)
Indicates that an SBCS CCSID and a mixed CCSID are specified.
integer
Specifies either a valid CCSID or 0.
If you specify a value of 0 for one of the arguments or omit a value, the
encoding scheme that is specified by EBCDIC, ASCII, or UNICODE is
assumed for the corresponding data type (SBCS, MIXED, or DBCS). If you
do not specify EBCDIC, ASCII, or UNICODE:
v If the source data is of character type, the original encoding scheme is
preserved.
v For character strings that are converted from numeric, date, time, or
timestamp data, the default encoding scheme of the table is used. For
more information, see the CCSID option of the CREATE TABLE
statement in Chapter 5 of DB2 SQL Reference.
If multiple table spaces are being processed, the number of records in error
is counted for each table space. If the LIST option is used, you can add an
OPTIONS utility control statement (EVENT option with ITEMERROR)
before the UNLOAD statement to specify that the table space in error is to
be skipped and the subsequent table spaces are to be processed.
SHRLEVEL
Specifies whether other processes can access or update the table space or
partitions while the data is being unloaded.
UNLOAD ignores the SHRLEVEL specification when the source object is
an image copy data set.
The default is SHRLEVEL CHANGE ISOLATION CS.
CHANGE
Specifies that rows can be read, inserted, updated, and deleted from
the table space or partition while the data is being unloaded.
ISOLATION
Specifies the isolation level with SHRLEVEL CHANGE.
CS
Indicates that the UNLOAD utility is to read rows in cursor
stability mode. With CS, the UNLOAD utility assumes
CURRENTDATA(NO).
UR
Indicates that uncommitted rows, if they exist, are to be
unloaded. The unload operation is performed with minimal
interference from the other DB2 operations that are applied to
the objects from which the data is being unloaded.
| SKIP LOCKED DATA
| Specifies that the UNLOAD utility is to skip rows on which
| incompatible locks are held by other transactions. This option applies
| to row-level or page-level locks.
REFERENCE
Specifies that during the unload operation, rows of the tables can be
read, but cannot be inserted, updated, or deleted by other DB2
threads.
When you specify SHRLEVEL REFERENCE, the UNLOAD utility
drains writers on the table space from which the data is to be
unloaded. When data is unloaded from multiple partitions, the drain
lock is obtained for all of the selected partitions in the UTILINIT
phase.
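For illustration (the table space name is the DB2 sample DSN8D91A.DSN8S91E, used
here as an assumption), an unload that tolerates concurrent updates and skips
incompatibly locked rows might be coded as follows:
UNLOAD TABLESPACE DSN8D91A.DSN8S91E
     SHRLEVEL CHANGE ISOLATION CS SKIP LOCKED DATA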
| DECFLOAT_ROUNDMODE
| Specifies the rounding mode to be used when DECFLOATs are
| manipulated. The following rounding modes are supported:
| ROUND_CEILING
| Round toward +infinity. The discarded digits are removed if they are
| all zero or if the sign is negative. Otherwise, the result coefficient
| should be incremented by 1 (rounded up).
| ROUND_DOWN
| Round toward 0 (truncation). The discarded digits are ignored.
| ROUND_FLOOR
| Round toward -infinity. The discarded digits are removed if they are
| all zero or if the sign is positive. Otherwise, the sign is negative and
| the result coefficient should be incremented by 1 (rounded up).
| ROUND_HALF_DOWN
| Round to the nearest number. If equidistant, round down. If the
| discarded digits are greater than 0.5, the result coefficient should be
| incremented by 1 (rounded up). The discarded digits are ignored if
| they are 0.5 or less.
| ROUND_HALF_EVEN
| Round to the nearest number. If equidistant, round so that the final
| digit is even. If the discarded digits are greater than 0.5, the result
| coefficient should be incremented by 1 (rounded up). The discarded
| digits are ignored if they are less than 0.5. If the discarded digits are
| exactly 0.5 and the rightmost digit is even, the result coefficient is not
| altered. If the discarded digits are exactly 0.5 and the rightmost digit
| is odd, the result coefficient should be incremented by 1 (rounded up).
| ROUND_HALF_UP
| Round to the nearest number. If equidistant, round up. If the
| discarded digits are greater than or equal to 0.5, the result coefficient
| should be incremented by 1 (rounded up). Otherwise, the discarded
| digits are ignored.
FROM-TABLE-spec
More than one table or partition for each table space can be unloaded with a single
invocation of the UNLOAD utility. One FROM TABLE statement for each table that
is to be unloaded is required to identify:
v A table name from which the rows are to be unloaded
v A field to identify the table that is associated with the rows that are to be
unloaded from the table by using the HEADER option
v Sampling options for the table rows
v A list of field specifications for the table that is to be used to select columns that
are to be unloaded
v Selection conditions, specified in the WHEN clause, that are to be used to
qualify rows that are to be unloaded from the table
All tables that are specified by FROM TABLE statements must belong to the same
table space. If rows from specific tables are to be unloaded, a FROM TABLE clause
must be specified for each source table. If you do not specify a FROM TABLE
clause for a table space, all the rows of the table space are unloaded.
If you omit a list of field specifications, all columns of the source table are
unloaded in the defined column order for the table. The default output field types
that correspond to the data types of the columns are used.
In a FROM TABLE clause, you can use parentheses in only two situations: to
enclose the entire field selection list, and in a WHEN selection clause. This usage
avoids potential conflict between the keywords and field-names that are used in
the field selection list. A valid sample of a FROM TABLE clause specification
follows:
UNLOAD ...
FROM TABLE tablename SAMPLE x (c1,c2) WHEN (c3>0)
You cannot specify FROM TABLE if the LIST option is already specified.
FROM-TABLE-spec:
The FROM TABLE railroad diagram is not reproduced in this text rendering. It
shows FROM TABLE table-name with HEADER OBID (the default), HEADER NONE,
or HEADER CONST 'string' or X'hex-string'; the SAMPLE decimal and LIMIT integer
options; an optional parenthesized list of field-specifications; and the WHEN
(selection-condition) clause. A field-specification names a field-name, an optional
POSITION(*) or POSITION(start) clause, and an output type: CHAR, VARCHAR
(each optionally BLOBF, CLOBF, or DBCLOBF template-name), GRAPHIC, GRAPHIC
EXTERNAL, VARGRAPHIC, SMALLINT, INTEGER, INTEGER EXTERNAL, BIGINT,
BINARY, VARBINARY (BINARY VARYING), DECIMAL (PACKED, ZONED, or
EXTERNAL), FLOAT, FLOAT EXTERNAL, DOUBLE, REAL, DATE EXTERNAL, TIME
EXTERNAL, TIMESTAMP EXTERNAL, CONSTANT 'string' or X'hex-string', ROWID,
BLOB, CLOB, DBCLOB, DECFLOAT (with precision 16 or 34, optionally EXTERNAL),
or XML, with optional (length), (length,scale), TRUNCATE, and strip-spec clauses. A
strip-spec is STRIP with BOTH (the default), TRAILING, or LEADING, an optional
'strip-char' or X'strip-char', and an optional TRUNCATE. A selection-condition
consists of predicates (basic, BETWEEN, IN, LIKE, and NULL predicates) that are
combined with AND, OR, NOT, and parentheses; a basic predicate compares a
column-name with a constant or a labeled-duration-expression by using =, <>, >, <,
>=, or <=.
Note: If you specify VARGRAPHIC, BINARY, or VARBINARY, you cannot specify
'strip-char'; you can specify only X'strip-char'.
table-name
Identifies a DB2 table from which the rows are to be unloaded and to
which the options in the FROM TABLE clause are to be applied.
| If the table name is not qualified by a schema name, the authorization ID
| of the invoker of the utility job step is used as the schema qualifier of the
| table name. Enclose the table name in quotation marks if the name
| contains a blank.
HEADER
Specifies a constant header field, at the beginning of the output records,
that can be used to associate an output record with the table from which it
was unloaded.
If you specify a header field, it is used as the field selection criterion of the
WHEN clause (a part of the INTO-TABLE specification) in the LOAD
statement that is generated.
OBID
Specifies that the object identifier (OBID) for the table (a 2-byte binary
value) is to be placed in the first 2 bytes of the output records that are
unloaded from the table.
If you omit the HEADER option, HEADER OBID is the default, except
for delimited files.
With HEADER OBID, the first 2 bytes of the output record cannot be
used by the unloaded data. For example, consider the following
UNLOAD statement:
UNLOAD ...
FROM TABLE table-name HEADER OBID ...
The sampling is applied for each individual table. If the rows from
multiple tables are unloaded with sampling enabled, the referential
integrity between the tables might be lost.
LIMIT integer
Specifies the maximum number of rows that are to be unloaded from a
table. If the number of unloaded rows reaches the specified limit, message
DSNU1201 is issued for the table, and no more rows are unloaded from
the table. The process continues to unload qualified rows from the other
tables.
When partition parallelism is activated, the LIMIT option is applied to each
partition instead of to the entire table.
integer
Indicates the maximum number of rows that are to be unloaded from a
table. If the specified number is less than or equal to zero, no row is
unloaded from the table.
Like the SAMPLE option, if multiple tables are unloaded with the LIMIT
option, the referential integrity between the tables might be lost.
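For illustration (the table name DSN8910.EMP is the DB2 sample table, used here as
an assumption, not from the original text), SAMPLE and LIMIT can be combined in
a FROM TABLE clause:
UNLOAD TABLESPACE DSN8D91A.DSN8S91E
     FROM TABLE DSN8910.EMP SAMPLE 10 LIMIT 1000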
field-name
Identifies a column name that must exist in the source table.
POSITION(start)
Specifies the field position in the output record. You can specify
the position parameter as follows:
* An asterisk, indicating that the field starts at the first byte after the
last position of the previous field.
start A positive integer that indicates the start column of the data field.
If the source table column can be null, the utility places a NULL indicator
byte at the beginning of the data field in the output record. For BLOBF,
CLOBF, or DBCLOBF columns, null values are indicated by a byte at the
beginning of the file name. The start parameter (or *) points to the position
of the NULL indicator byte. In the generated LOAD statement, start is
shifted by 1 byte to the right (as start+1) so that, in the LOAD statement,
the start parameter of the POSITION option points to the next byte past
the NULL indicator byte.
For a varying-length field, a length field precedes the actual data field
(after the NULL indicator byte, if applicable). For BLOBF, CLOBF, or
DBCLOBF columns, the length of the file name is indicated by two bytes at
the beginning of the file name. If the value cannot be null, the start
parameter (or *) points to the first byte of the length field. The size of the
length field is either 4 bytes (BLOB, CLOB, or DBCLOB) or 2 bytes
(VARCHAR or VARGRAPHIC).
When you explicitly specify the output field positions by using start
parameters (or using the * format) of the POSITION option, you must
consider the following items as a part of the output field:
v For a field whose value can be null, a space for the NULL indicator byte
v For varying-length data, a space for the length field (either 2 bytes or 4
bytes)
“Determining the layout of output fields” on page 706 illustrates the field
layout in conjunction with the POSITION option, NULL indicator byte, the
length field for a varying-length field, the length parameter, and the actual
data length.
The POSITION option is useful when the output fields must be placed at
desired positions in the output records. The use of the POSITION
parameters, however, can restrict the size of the output data fields. Use
care when explicitly specifying start parameters for nullable fields and
varying-length fields.
If you omit the POSITION option for the first field, the field starts from
position 1 if HEADER NONE is specified. Otherwise, the field starts from
the next byte position past the record header field. If POSITION is omitted
for a subsequent field, the field is placed next to the last position of the
previous field without any gap.
TRUNCATE
Indicates that a character string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the
field in the output records. See “Specifying TRUNCATE and STRIP
options for output data” on page 711 for the truncation rules that are
used in the UNLOAD utility. Without TRUNCATE, an error occurs
when the output field size is too small for the data.
VARCHAR
Specifies that the output field type is character of varying length. A 2-byte
binary field indicating the length of data in bytes is prepended to the data
field. If the table column can be null, a NULL indicator byte is placed
before this length field for a non-delimited output file.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the
output data is encoded in the CCSID corresponding to the specified option,
depending on the subtype of the source data (SBCS or MIXED). If the
subtype is BIT, no conversion is applied.
(length)
Specifies the maximum length of the actual data field in bytes. If you
also specify NOPAD, it indicates the maximum allowable space for the
data in the output records; otherwise, the space of the specified length
is reserved for the data.
If the length parameter is omitted, the default is the smaller of 255 and
the maximum length that is defined on the source table column.
| BLOBF
| Specifies that the output field is to contain the name of the file to
| which the BLOB or XML is to be unloaded without CCSID conversion.
| CLOBF
| Specifies that the output field is to contain the name of the file to
| which the CLOB or XML is to be unloaded with any required CCSID
| conversion.
| DBCLOBF
| Specifies that the output field is to contain the name of the file to
| which the DBCLOB or XML is to be unloaded with any required
| CCSID conversion.
STRIP
| Specifies that UNLOAD is to remove binary zeroes (the default) or the
| specified string from the beginning, the end, or both ends of the data.
| UNLOAD adjusts the VARCHAR length field (for the output field) to
| the length of the stripped data.
The STRIP option is applicable if the subtype of the source data is BIT.
In this case, no CCSID conversion is performed on the specified strip
character (even if it is given in the form 'strip-char').
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning and end of the data.
The default is BOTH.
TRAILING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the end of the data.
LEADING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning of the data.
'strip-char'
Specifies a single-byte character that is to be stripped. Specify this
character value in EBCDIC. Depending on the output encoding
scheme, UNLOAD applies SBCS CCSID conversion to the strip-char
value before it is used in the strip operation. If you want to specify
a strip-char value in an encoding scheme other than EBCDIC, use
the hexadecimal form. UNLOAD does not perform CCSID
conversion if the hexadecimal form is used.
X'strip-char'
Specifies a single-byte character that is to be stripped. It can be
specified in the hexadecimal form, X'hex-string', where hex-string is
two hexadecimal characters that represent a single SBCS character.
If the strip-char operand is omitted, the default is the blank
character, which is coded as follows:
v X'40' for the EBCDIC-encoded output case
v X'20' for the ASCII-encoded output case
v X'20' for the Unicode-encoded output case
The strip operation is applied after the character code conversion,
if the output character encoding scheme is different from the one
that is defined on the source data. Therefore, if a strip character is
specified in the hexadecimal format, you must specify the character
in the encoding scheme that is used for output.
TRUNCATE
Indicates that a character string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the
field in the output records. Truncation occurs at a character boundary.
See “Specifying TRUNCATE and STRIP options for output data” on
page 711 for the truncation rules that are used in the UNLOAD utility.
Without TRUNCATE, an error occurs when the output field size is too
small for the data.
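A hypothetical illustration (the table and column names are not from any sample
database): the NAME column is written into at most 10 bytes, with leading and
trailing blanks removed and any remaining excess truncated:
   FROM TABLE USER01.MYTAB
    (NAME POSITION(*) VARCHAR(10) STRIP BOTH TRUNCATE)
If NOPAD is also specified at the UNLOAD statement level, the 10 bytes is a
maximum rather than a reserved fixed space.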
GRAPHIC
Specifies that the output field is of the fixed-length graphic type. If the
table column can be null, a NULL indicator byte is placed before the actual
data field for any non-delimited output file.
If the output is in EBCDIC, the shift-in and shift-out characters are not
included at the beginning and at the end of the data.
(length)
Specifies the number of DBCS characters (the size of the output data in
bytes is twice the given length). If the given length is larger than the
source data length, the output field is padded with the default pad
character.
TRUNCATE
Indicates that a graphic character string (encoded for output) is to be
truncated from the right, if the data does not fit in the available space
for the field in the output records. Truncation occurs at a character
(DBCS) boundary. Without TRUNCATE, an error occurs when the
output field size is too small for the data.
GRAPHIC EXTERNAL
Specifies that the data is to be written in the output records as a
fixed-length field of the graphic type with the external format; that is, the
shift-out (SO) character is placed at the starting position, and the shift-in
(SI) character is placed at the ending position. The byte count of the output
field is always an even number.
GRAPHIC EXTERNAL is supported only in the EBCDIC output mode (by
default or when the EBCDIC keyword is specified).
If the start parameter of the POSITION option is used to specify the output
column position, it points to the (inserted) shift-out character at the
beginning of the field. The shift-in character is placed at the next byte
position past the last double-byte character of the data.
(length)
Specifies a number of DBCS characters, excluding the shift characters
(as in the graphic type column definition that is used in a CREATE
TABLE statement) and excluding the NULL indicator byte if the source
column can be null. If the length parameter is omitted, the default output field
size is the length that is defined on the corresponding table column,
plus two bytes (shift-out and shift-in characters).
If the specified length is larger than the size of the data, the field is
padded on the right with the default DBCS padding character.
TRUNCATE
Indicates that a graphic character string is to be truncated from the
right by the DBCS characters, if the data does not fit in the available
space for the field in the output records. Without TRUNCATE, an error
occurs when the output field size is too small for the data. An error
can also occur with the TRUNCATE option if the available space is less
than 4 bytes (4 bytes is the minimum size for a GRAPHIC EXTERNAL
field; shift-out character, one DBCS, and shift-in character); or fewer
than 5 bytes if the field can be null (the 4 bytes plus the NULL
indicator byte).
VARGRAPHIC
Specifies that the output field is to be of the varying-length graphic type. A
2-byte binary length field is prepended to the actual data field. If the table
column can be null, a NULL indicator byte is placed before this length
field for any non-delimited output file.
(length)
Specifies the maximum length of the actual data field in the number of
DBCS characters. If you also specify NOPAD, it indicates the maximum
allowable space for the data in the output records; otherwise, the space
of the specified length is reserved for the data.
If the length parameter is omitted, the default is the smaller of 127 and
the maximum defined length of the source table column.
STRIP
| Indicates that UNLOAD is to remove DBCS blanks (the default) or the
| specified string from the unloaded data. UNLOAD adjusts the
| VARGRAPHIC length field (for the output field) to the length of the
| stripped data (the number of DBCS characters).
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning and end of the data.
The default is BOTH.
TRAILING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the end of the data.
LEADING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning of the data.
X'strip-char'
Specifies a DBCS character that is to be stripped in the
hexadecimal format, X'hhhh', where hhhh is four hexadecimal
characters that represent a DBCS character. If this operand is
omitted, the default is a DBCS blank in the output encoding
scheme (for example, X'4040' for the EBCDIC-encoded output or
X'8140' for CCSID 301).
The strip operation is applied after the character code conversion,
if the output character encoding scheme is different from the one
that is defined on the source data. Therefore, if you specify a strip
character, it must be in the encoding scheme that is used for the
output.
TRUNCATE
Indicates that a graphic character string (encoded for output) is to be
truncated from the right, if the data does not fit in the available space
for the field in the output records. Truncation occurs at a DBCS
character boundary. Without TRUNCATE, an error occurs when the
output field size is too small for the data.
SMALLINT
Specifies that the output field is a 2-byte binary integer (a negative number
is in two’s complement notation). To use the external format, specify
INTEGER EXTERNAL.
| If the source data type is INTEGER, DECIMAL, FLOAT, BIGINT, or
| DECFLOAT (either 4-byte or 8-byte format), an error occurs when the data
| is greater than 32 767 or less than -32 768.
A SMALLINT output field requires 2 bytes, and the length option is not
available.
INTEGER
Specifies that the output field is a 4-byte binary integer (a negative number
is in two’s complement notation).
| If the original data type is DECIMAL, FLOAT, BIGINT, or DECFLOAT
| (either 4-byte or 8-byte format), an error occurs when the original data is
| greater than 2 147 483 647 or less than -2 147 483 648.
An INTEGER output field requires 4 bytes, and the length option is not
available.
INTEGER EXTERNAL
Specifies that the output field is to contain a character string that
represents an integer number.
(length)
Indicates the size of the output data in bytes, including a space for the
sign character. When the length is given and the character notation
does not fit in the space, an error occurs. The default is 11 characters
(including a space for the sign).
If the value is negative, a minus sign precedes the numeric digits. If the
output field size is larger than the length of the data, the output data is left
justified and blanks are padded on the right.
| LEADING
| Indicates that UNLOAD is to remove occurrences of binary zeroes
| or the specified strip character from the beginning of the data.
| X'strip-char'
| Specifies a single-byte character that is to be stripped. It can be
| specified only in the hexadecimal form, X'hex-string', where
| hex-string is two hexadecimal characters that represent a single
| SBCS character.
| TRUNCATE
| Indicates that a binary string (encoded for output) is to be truncated
| from the right, if the data does not fit in the available space for the
| field in the output records. Without TRUNCATE, an error occurs when
| the output field size is too small for the data.
DECIMAL
Specifies that the output data is a number that is represented by the
indicated decimal format (either PACKED, ZONED, or EXTERNAL). If you
specify the keyword DECIMAL by itself, packed-decimal format is
assumed.
PACKED
Specifies that the output data is a number that is represented by the
packed-decimal format. You can use DEC or DEC PACKED as an
abbreviated form of the keyword.
The packed-decimal representation of a number is of the form ddd...ds,
where d is a decimal digit that is represented by 4 bits, and s is a 4-bit
sign character (hexadecimal A, C, E, or F for a positive number, and
hexadecimal B or D for a negative number).
length
Specifies the number of digits (not including the sign digit) that are
to be placed in the output field. The length must be between 1 and
31. If the length is odd, the size of the output data field is
(length+1)/2 bytes; if even, (length/2)+1 bytes.
If the source data type is DECIMAL and the length parameter is
omitted, the default length is determined by the column attribute
defined on the table. Otherwise, the default length is 31 digits (16
bytes).
scale
Specifies the number of digits to the right of the decimal point.
(Note that, in this case, a decimal point is not included in the
output field.) The number must be an integer that is greater than
or equal to zero and less than or equal to the length.
The default depends on the column attribute that is defined on the
table. If the source data type is DECIMAL, the defined scale value
is the default value; otherwise, the default is 0.
If you specify the output field size as less than the length of the data,
an error occurs. If the specified field size is greater than the length of
data, X'0' is padded on the left.
ZONED
Specifies that the output data is a number that is represented by the
zoned-decimal format. You can use DEC ZONED as an abbreviated
form of the keyword.
If you specify the output field size as less than the length of the data,
an error occurs. If the specified field size is greater than the length of
data, X'F0' is padded on the left.
EXTERNAL
Specifies that the output data is a character string that represents a
number in the form of ±dd...d.ddd...d, where d is a numeric character
0-9. (The plus sign for a positive value is omitted.)
length
Specifies the overall length of the output data (the number of
characters including a sign, and a decimal point if scale is
specified).
If the source data type is DECIMAL and the length parameter is
omitted, the default length is determined by the column attribute
that is defined on the table. Otherwise, the default length is 33 (31
numeric digits, plus a sign and a decimal point). The minimum
value of length is 3 to accommodate the sign, one digit, and the
decimal point.
scale
Specifies the number of digits to the right of the decimal point. The
number must be an integer that is greater than or equal to zero
and less than or equal to length - 2 (to allow for the sign character
and the decimal point).
If the source data type is DECIMAL and the length parameter is
omitted, the default scale is determined by the column attribute
that is defined on the table. Otherwise, the default is 0.
An error occurs if the character representation of a value does not
fit in the given or default field size (precision). If the source data
type is floating point and a data item is too small for the precision
that is defined by scale, the value of zero (not an error) is returned.
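For example (the column name is hypothetical), a DECIMAL(9,2) source column
could be unloaded as a character representation with an explicit length and scale:
   (SALARY POSITION(*) DECIMAL EXTERNAL(12,2))
The length of 12 accommodates the sign, the decimal point, two decimal digits,
and up to eight integer digits, which is enough for any DECIMAL(9,2) value.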
FLOAT(length)
Specifies that the output data is a binary floating-point number (32-bit or
single-precision FLOAT if the length is between one and 21 inclusive; 64-bit
or double-precision FLOAT if the length is between 22 and 53 inclusive). If
the length parameter is omitted, the 64-bit format is assumed (output field
size is 8 bytes). Note that the length parameter for the FLOAT type does
not represent the field size in bytes.
The format of the binary floating-point output is controlled by the global
FLOAT option. The default is S/390 format (Hexadecimal Floating Point or
HFP). If you specify FLOAT(IEEE), all the binary floating-point output is in
IEEE format (Binary Floating Point or BFP). When you specify
FLOAT(IEEE) and the source data type DOUBLE is unloaded as REAL, an
error occurs if the source data cannot be expressed by the IEEE (BFP) 32-bit
notation.
EXTERNAL(length)
Specifies that the output data is a number that is represented by a
character string in floating-point notation, ±d.ddd...dddE±nn, where d is
a numeric character (0-9) for the significant digits, and nn, which
follows the character E and its sign, consists of two numeric characters
for the exponent.
(length)
Specifies the total field length in bytes, including the first sign
character, the decimal point, the E character, the second sign
character, and the two-digit exponent. If the number of characters
in the result is less than the specified or the default length, the
result is padded to the right with blanks. The length, if specified,
must be greater than or equal to 8.
The default output field size is 14 if the source data type is the
32-bit FLOAT; otherwise, the default is 24.
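A brief sketch (the table space, table, and column names are hypothetical) that
combines the statement-level FLOAT(IEEE) option with the two field forms:
   UNLOAD TABLESPACE MYDB.MYTS
     FLOAT(IEEE)
   FROM TABLE USER01.MYTAB
    (MEASURE1 POSITION(*) FLOAT,              -- 8-byte binary form, written as IEEE (BFP)
     MEASURE2 POSITION(*) FLOAT EXTERNAL(24)) -- character form, ±d.ddd...dddE±nn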
TIME EXTERNAL
Specifies that the output field is for a character string representation of a
time.
(length)
Specifies the size of the data field in bytes in the output record. A
TIME EXTERNAL field requires a space of at least eight characters. If
the space is not available, a conversion error occurs. If the specified
length is larger than the size of the data, blanks are padded on the
right.
TIMESTAMP EXTERNAL
Specifies that the output field is for a character string representation of a
timestamp.
(length)
Specifies the size of the data field in bytes in the output record. A
TIMESTAMP EXTERNAL field requires a space of at least 19
characters. If the space is not available, an error occurs. The length
parameter, if specified, determines the output format of the
TIMESTAMP. If the specified length is larger than the size of the data,
the field is padded on the right with the default padding character.
CONSTANT
Specifies that the output records are to have an extra field containing a
constant value. The field name that is associated with the CONSTANT
keyword must not coincide with a table column name (the field name is
for clarification purposes only). A CONSTANT field always has a fixed
length that is equal to the length of the given string.
'string'
Specifies the character string that is to be inserted in the output records
at the specified or default position. A string is the required operand of
the CONSTANT option. If the given string is in the form 'string', it is
assumed to be an EBCDIC SBCS string. However, the output string for
a CONSTANT field is in the specified or default encoding scheme.
(That is, if the encoding scheme used for output is not EBCDIC, the
SBCS CCSID conversion is applied to the given string before it is
placed in output records.)
X'hex-string'
Specifies the character string in hexadecimal form, X'hex-string', that is
to be inserted in the output records at the specified or default position.
If you want to specify a CONSTANT string value in an encoding
scheme other than SBCS EBCDIC, use the hexadecimal form. No
CCSID conversion is performed if the hexadecimal form is used.
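For illustration (the field name REC_TAG and the table name are hypothetical), a
CONSTANT field tags every output record with a fixed string:
   FROM TABLE USER01.MYTAB
    (REC_TAG CONSTANT 'V9',
     EMPNO POSITION(*) CHAR(6))
The hexadecimal form, CONSTANT X'E5F9', supplies the same EBCDIC bytes but is
placed in the output without any CCSID conversion.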
ROWID
Specifies that the output field is for row ID data. ROWID fields have
varying length, and a 2-byte binary length field is prepended to the actual
data field.
For the ROWID type, no data conversion or truncation is applied. If the
output field size is too small to unload ROWID data, an error occurs.
If the source is an image copy and a ROWID column is selected, and if the
page set header page is missing in the specified data set, the UNLOAD
utility terminates with the error message DSNU1228I. This situation can
occur when the source is an image copy data set of DSNUM that is greater
than one for a nonpartitioned table space that is defined on multiple data
sets.
BLOB Indicates that the column is to be unloaded as a binary large object
(BLOB). No data conversion is applied to the field.
When you specify the BLOB field type, a 4-byte binary length field is
placed in the output record prior to the actual data field. If the source table
column can be null, a NULL indicator byte is placed before the length
field.
(length)
Specifies the maximum length of the actual data field in bytes. If you
specify NOPAD, it indicates the maximum allowable space for the data
in the output records; otherwise, the space of the specified length is
reserved for the data.
The default is the maximum length that is defined on the source table
column.
TRUNCATE
Indicates that a BLOB string is to be truncated from the right, if the
data does not fit in the available space for the field in the output
record. For BLOB data, truncation occurs at a byte boundary. Without
TRUNCATE, an error occurs when the output field size is too small for
the data.
CLOB Indicates that the column is to be unloaded as a character large object
(CLOB).
When you specify the CLOB field type, a 4-byte binary length field is
placed in the output record prior to the actual data field. If the source table
column can be null, a NULL indicator byte is placed before the length
field.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the
output data is encoded in the CCSID corresponding to the specified option,
depending on the subtype of the source data (SBCS or MIXED). No
conversion is applied if the subtype is BIT.
(length)
Specifies the maximum length of the actual data field in bytes. If you
specify NOPAD, it indicates the maximum allowable space for the data
in the output records; otherwise, the space of the specified length is
reserved for the data.
The default is the maximum length that is defined on the source table
column.
TRUNCATE
Indicates that a CLOB string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the
field in the output record. Truncation occurs at a character boundary.
Without TRUNCATE, an error occurs when the output field size is too
small for the data.
| XML Specifies that an XML column is being unloaded directly to the output
| record.
WHEN
Indicates which records in the table space are to be unloaded. If no WHEN
clause is specified for a table in the table space, all of the records are
unloaded.
The option following WHEN describes the conditions for unloading
records from a table.
Data in the table can be in EBCDIC, ASCII, or Unicode. If the target table
is in Unicode and the character constants are specified in the utility control
statement as EBCDIC, the UNLOAD utility converts these constants to
Unicode. To use a constant when the target table is ASCII, specify the
hexadecimal form of the constant (instead of the character string form) in
the condition for the WHEN clause.
selection condition
Specifies a condition that is true, false, or unknown about a given row.
When the condition is true, the row qualifies for UNLOAD. When the
condition is false or unknown, the row does not qualify.
The result of a selection condition is derived by application of the specified
logical operators (AND and OR) to the result of each specified predicate. If
logical operators are not specified, the result of the selection condition is
the result of the specified predicate.
Selection conditions within parentheses are evaluated first. If the order of
evaluation is not specified by parentheses, AND is applied before OR.
If the control statement is in the same encoding scheme as the input data,
you can code character constants in the control statement. Otherwise, if the
control statement is not in the same encoding scheme as the input data,
you must code the condition with hexadecimal constants. For example, if
the table space is in EBCDIC and the control statement is in UTF-8, use
(1:1) = X'31' in the condition rather than (1:1) = '1'.
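As a hypothetical sketch (the table and column names are illustrative), the
following WHEN clause compares an EBCDIC column against a hexadecimal constant,
which works regardless of the encoding of the control statement:
   FROM TABLE USER01.MYTAB
    WHEN (DEPT = X'C4F1F1')   -- 'D11' in EBCDIC
If the control statement and the table are in the same encoding scheme, the
character form WHEN (DEPT = 'D11') is equivalent.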
Restriction: UNLOAD cannot filter rows that contain encrypted data.
predicate
Specifies a condition that is true, false, or unknown about a row.
| A DECFLOAT column or DECFLOAT constant cannot be specified in the
| predicate.
basic predicate
Specifies the comparison of a column with a constant. If the value of
the column is null, the result of the predicate is unknown. Otherwise,
the result of the predicate is true or false.
column = constant     The column is equal to the constant or labeled duration expression.
column <> constant    The column is not equal to the constant or labeled duration expression.
column > constant     The column is greater than the constant or labeled duration expression.
column < constant     The column is less than the constant or labeled duration expression.
For example, the following predicate is true for any row when salary is
greater than or equal to 10000 and less than or equal to 20000:
SALARY BETWEEN 10000 AND 20000
IN predicate
Specifies that a value is to be compared with a set of values. In the IN
predicate, the second operand is a set of one or more values that are
specified by constants. Each of the predicate’s two forms (IN and NOT
IN) has an equivalent search condition, as shown in Table 136.
Table 136. IN predicates and their equivalent search conditions
Predicate                                     Equivalent search condition
value1 IN (value2, value3, ..., valuen)       (value1 = value2 OR value1 = value3 OR ... OR value1 = valuen)
value1 NOT IN (value2, value3, ..., valuen)   (value1 ¬= value2 AND value1 ¬= value3 AND ... AND value1 ¬= valuen)
Note: The values can be constants or labeled duration expressions.
For example, the following predicate is true for any row whose
employee is in department D11, B01, or C01:
WORKDEPT IN ('D11', 'B01', 'C01')
LIKE predicate
Specifies the qualification of strings that have a certain pattern.
Let x denote the column that is to be tested and y the pattern in the
string constant. The following rules apply to predicates of the form "x
LIKE y...". If NOT is specified, the result is reversed.
v When x and y are both neither empty nor null, the result of the
predicate is true if x matches the pattern in y and false if x does not
match the pattern in y.
v When x or y is null, the result of the predicate is unknown.
v When y is empty and x is not empty, the result of the predicate is
false.
v When x is empty and y is not empty, the result of the predicate is
false unless y consists only of one or more percent signs.
v When x and y are both empty, the result of the predicate is true.
The pattern string and the string that is to be tested must be of the
same type. That is, both x and y must be character strings, or both x
and y must be graphic strings. When x and y are graphic strings, a
character is a DBCS character. When x and y are character strings and
x is not mixed data, a character is an SBCS character and y is
interpreted as SBCS data regardless of its subtype. The rules for
mixed-data patterns are described under “Strings and patterns” on
page 697.
NULL predicate
Specifies a test for null values.
If the value of the column is null, the result is true. If the value is not
null, the result is false. If NOT is specified, the result is reversed. (That
is, if the value is null, the result is false, and if the value is not null, the
result is true.)
labeled duration expression
Specifies an expression that begins with special register CURRENT
DATE or special register CURRENT TIMESTAMP (the forms
CURRENT_DATE and CURRENT_TIMESTAMP are also acceptable).
This special register can be followed by arithmetic operations of
addition or subtraction. These operations are expressed by using
numbers that are followed by one of the seven duration keywords:
YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS, or MICROSECONDS.
To subtract one year, one month, and one day from a date, specify the
following code:
CURRENT DATE - 1 DAY - 1 MONTH - 1 YEAR
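For example (the column name is hypothetical), a labeled duration expression can
be used in a WHEN clause to unload only rows that were updated in the last seven
days; this is a sketch, not taken from the sample tables:
   FROM TABLE USER01.MYTAB
    WHEN (LAST_UPDATED > CURRENT TIMESTAMP - 7 DAYS)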
Notes:
1. Required if you request that UNLOAD generate LOAD statements by specifying
PUNCHDDN in the utility control statement.
The following object is named in the utility control statement and does not require
a DD statement in the JCL:
Table space
Table space that is to be unloaded. (If you want to unload only one
partition of a table space, you must specify the PART option in the control
statement.)
Unloading partitions
If the source table space is partitioned, use one of the following mutually exclusive
methods to select the partitions to unload:
v Use the LIST keyword with a LISTDEF that contains PARTLEVEL specifications.
Partitions can be either included or excluded by the use of the INCLUDE and
the EXCLUDE features of LISTDEF.
v Specify the PART keyword to select a single partition or a range of partitions.
With either method, the unloaded data can be stored in a single data set for all
selected partitions or in one data set for each selected partition. If you want to
unload to a single output data set, specify a DD name to UNLDDN. If you want to
unload into multiple output data sets, specify a template name that is associated
with the partitions. You can process multiple partitions in parallel if the
TEMPLATE definition contains the partition as a variable, for example &PA.
You cannot specify multiple output data sets with the FROMCOPY or the
FROMCOPYDDN option.
| An XML column is handled like a varying-length character column, with a 2-byte
| length field preceding the XML value. For delimited output format, no length
| bytes are present.
| v The XML column can be unloaded to a separate file regardless of whether the
| XML column length is less than 32 KB.
| Specify XML as the output field type. For a non-delimited output format, a
| 2-byte length field precedes the XML value. For delimited output, no length
| field is present. XML is the only acceptable field type when you unload an XML
| column directly to the output record. No data type conversion applies, and you
| cannot specify FROMCOPY.
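A hypothetical field specification (the column names are illustrative) that
unloads an XML column directly into the output record:
   FROM TABLE USER01.MYTAB
    (DOC_ID POSITION(*) CHAR(10),
     XMLDOC POSITION(*) XML)
For non-delimited output, a 2-byte length field precedes the unloaded XML value,
as described above.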
Within a FROM TABLE clause, you can specify one or more of the following
criteria:
v Row and column selection criteria by using the field specification list
v Row selection conditions by using the WHEN specification clause
v Row sampling specifications
Important: When an incremental image copy is taken of a table space, rows might
be updated or moved if the SHRLEVEL CHANGE option is specified. As a result,
data that is unloaded from such a copy might contain duplicates of these rows.
You can specify a format conversion option for each field in the field specification
list.
If you select a LOB column in a list of field specifications or select a LOB column
by default (by omitting a list of field specifications), LOB data is materialized in
the output. However, you cannot select LOB columns from image copy data sets.
Unload rows from a single image copy data set by specifying the FROMCOPY
option in the UNLOAD control statement. Specify the FROMCOPYDDN option to
unload data from one or more image copy data sets that are associated with the
specified DD name. Use an image copy that contains the page set header page
when you are unloading a ROWID column; otherwise the unload fails.
The source image copy data set must have been created by one of the following
utilities:
v COPY
v COPYTOCOPY
v LOAD inline image copy
v MERGECOPY
v REORG TABLESPACE inline image copy
v DSN1COPY
UNLOAD accepts full image copies, incremental image copies, and a copy of
pieces as valid input sources.
The UNLOAD utility supports image copy data sets for a single table space. The
table space name must be specified in the TABLESPACE option. The specified table
space must exist when you run the UNLOAD utility. (That is, the table space
cannot have been dropped since the image copy was taken.)
Use the FROMCOPYDDN option to concatenate the copy of table space partitions
under a DD name to form a single input data set image. When you use the
FROMCOPYDDN option, concatenate the data sets in the order of the data set
number; the first data set must be concatenated first. If the data sets are
concatenated in the wrong order or if different generations of image copies are
concatenated, the results might be unpredictable. For example, if the most recent
image copy data sets and older image copies are intermixed, the results might be
unpredictable.
You can use the FROMCOPYDDN option to concatenate a full image copy and
incremental image copies for a table space, a partition, or a piece, but duplicate
rows are also unloaded in this situation. Instead, consider using MERGECOPY to
generate an updated full image copy as the input to the UNLOAD utility.
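A sketch of the FROMCOPYDDN approach; the DD name INCOPY, the table space name,
and the image copy data set names are all hypothetical, and the copies are
concatenated in data set number order under a single DD name:
   //INCOPY   DD DSN=JUKWU111.FCOPY1.P00001,DISP=SHR
   //         DD DSN=JUKWU111.FCOPY1.P00002,DISP=SHR
   //SYSIN    DD *
     UNLOAD TABLESPACE DBKW1501.TSKW1501
       FROMCOPYDDN INCOPY
       PUNCHDDN SYSPUNCH UNLDDN SYSREC
   /*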
You can select specific rows and columns to unload just as you would for a table
space. However, you can unload rows that contain LOB columns only when the
LOB columns are not included in a field specification list. If you use an image
copy that does not contain the page set header page when unloading a ROWID
column, the unload fails.
If you use the FROMCOPY or the FROMCOPYDDN option, you can specify only
one output data set.
If duplicate pages exist, the UNLOAD utility issues a warning message, and all the qualified rows in
duplicate pages are unloaded into the output data set.
If you specify a dropped table on the FROM TABLE option, the UNLOAD utility
terminates with return code 4. If you do not specify a FROM TABLE option and if
an image copy contains rows from dropped tables, UNLOAD ignores these rows.
When you specify either a full or incremental copy of partitions of a segmented
table space that consists of multiple data sets in the FROMCOPY option, be careful
when applying a mass delete to a table in the table space before you create the
copy. If a mass delete of a table occurs, the utility unloads deleted rows if the
space map pages that indicate the mass delete are not included in the data set that
corresponds to the specified copy. Where possible, use the FROMCOPYDDN
option to concatenate the copy of table space partitions.
If an image copy contains a table to which ALTER ADD COLUMN was applied
after the image copy was taken, the UNLOAD utility sets the system or
user-specified default value for the added column when the data is unloaded from
such an image copy.
When you unload a floating-point type column, you can specify the binary form of
the output to either the S/390 format (hexadecimal floating point, or HFP), or the
IEEE format (binary floating point, or BFP).
You can also convert a varying-length column to a fixed-length output field, with
or without padding characters. In either case, unless you explicitly specify a
fixed-length data type for the field, the data itself is treated as varying-length
data, and a length field precedes the data.
For certain data types, you can unload data into fields with a smaller length by
using the TRUNCATE or STRIP options. In this situation, if a character code
conversion is applied, the length of the data in bytes might change due to the code
conversion. The truncation operation is applied after the code conversion.
You can perform character code conversion on any character-type field, including
numeric columns that are converted to the external format and CLOB fields. Be aware
that when you apply a character code conversion for mixed-data fields, the length
of the result string in bytes can be shorter or longer than the length of the source
string. Character type data is always converted if you specify any of the character
code conversion options (EBCDIC, ASCII, UNICODE, or CCSID).
DATE, TIME, or TIMESTAMP column types are always converted into the external
formats based on the DATE, TIME, and TIMESTAMP formats of your installation.
Data type conversions that are performed by the UNLOAD utility observe the
general DB2 rules and conventions on the data type attributes and the
compatibility among the data types, as described in Chapter 2 of DB2 SQL
Reference.
If you specify a data type in the UNLOAD control statement, the field type
information is included in the generated LOAD utility statement. For specific data
type compatibility information, refer to Table 138, Table 139, and Table 140 on page
705. These tables show the compatibility of the data type of the source column
(input data type) with the data type of the output field (output data type). A Y
indicates that the input data type can be converted to the output data type.
Notes:
1. Subject to the CCSID conversion, if specified (EXTERNAL case). For more information about CCSID, see “CCSID” on page 669.
2. Potential overflow (conversion error).
| 3. When converting from DECFLOAT(34) to DECFLOAT(16), you might encounter overflow, underflow, subnormal number, or
| inexact. However, there will be no conversion error.
Notes:
1. Subject to the CCSID conversion, if specified.
2. Results in an error if the field length is too small for the data unless you specify the TRUNCATE option. Note that a LOB has a
4-byte length field; any other varying-length type has a 2-byte length field.
3. Only in the EBCDIC output mode.
4. Not applicable to BIT subtype data.
Notes:
1. Subject to the CCSID conversion, if specified.
2. Zeros in the time portion.
3. DATE or TIME portion of the timestamp.
Use the POSITION option to specify field position in the output records. You can
also specify the size of the output data field by using the length parameter for a
particular data type. The length parameter must indicate the size of the actual data
field. The start parameter of the POSITION option indicates the starting position of
a field, including the NULL indicator byte (if the field can be null) and the length
field (if the field is varying length).
Using the POSITION parameter, the length parameter, or both can restrict the size
of the data field in the output records. Use care when specifying the POSITION
and length parameters, especially for nullable fields and varying length fields. If a
conflict exists between the length parameter and the size of the field in the output
record that is specified by the POSITION parameters, DB2 issues an error message,
and the UNLOAD utility terminates. If an error occurs, the count of the number of
records in error is incremented. See the description of the MAXERR option under
“MAXERR” on page 672 for more information.
If you specify a length parameter for a varying-length field and you also specify
the NOPAD option, length indicates the maximum length of data that is to be
unloaded. Without the NOPAD option, UNLOAD reserves a space of the given
length instead of the maximum data size.
If you explicitly specify start parameters for certain fields, they must be listed in
ascending order in the field selection list. Unless you specify HEADER NONE for
the table, a fixed-length record header is placed at the beginning of each record for
the table, and the start parameter must not overlap the record header area.
The TRUNCATE option is available for certain output field types. See
“FROM-TABLE-spec ” on page 674 and “Specifying TRUNCATE and STRIP
options for output data” on page 711 for more information. For the output field
types where the TRUNCATE option is not applicable, enough space must be
provided in the output record for each field. The output field layouts are
summarized in “Determining the layout of output fields.”
For information about errors that can occur at the record level due to the field
specifications, see “Interpreting field specification errors” on page 712.
Figure 108 shows the layout of a fixed-length field that can be null. This diagram
shows that a null indicator byte is stored before the data field, which begins at the
specified position or at the next byte position past the end of the previous data
field.
Figure 108. Layout of a fixed-length field that can be null
If you are running UNLOAD with the NOPAD option and need to determine the
layout of a varying-length field that cannot be null, see the layout diagram in
Figure 109. The length field begins at the specified position or at the next byte
position past the end of the previous data field.
Figure 109. Layout of a varying-length field (NOT NULL) with the NOPAD option
For UNLOAD without the NOPAD option, the layout of a varying-length field that
cannot be null is depicted in Figure 110.
Figure 110. Layout of a varying-length field (NOT NULL) without the NOPAD option
For UNLOAD with the NOPAD option, the layout of a varying-length field that
can be null is depicted in Figure 111 on page 708. The length field begins at the
specified position or at the next byte position past the end of the previous data
field.
Figure 111. Layout of a nullable varying-length field with the NOPAD option
For UNLOAD without the NOPAD option, the layout of a varying-length field that
can be null is depicted in Figure 112. The length field begins at the specified
position or at the next byte position past the end of the previous data field.
Figure 112. Layout of a nullable varying-length field without the NOPAD option
You are responsible for ensuring that the chosen delimiters are not part of the data
in the file. If the delimiters are part of the file’s data, unexpected errors can occur.
Table 141 lists by encoding scheme the default hex values for the
delimiter characters.
Table 141. Default delimiter values for different encoding schemes
Character                    EBCDIC SBCS   EBCDIC DBCS/MBCS   ASCII/Unicode SBCS   ASCII/Unicode MBCS
Character string delimiter   X'7F'         X'7F'              X'22'                X'22'
Decimal point character      X'4B'         X'4B'              X'2E'                X'2E'
Column delimiter             X'6B'         X'6B'              X'2C'                X'2C'
In most EBCDIC code pages, the hex values in Table 141 represent a
double quotation mark (") for the character string delimiter, a period (.) for the
decimal point character, and a comma (,) for the column delimiter.
Table 142 lists by encoding scheme the maximum allowable hex values
for any delimiter character.
Table 142. Maximum delimiter values for different encoding schemes
Encoding scheme Maximum allowable value
EBCDIC SBCS None
EBCDIC DBCS/MBCS X'3F'
ASCII/Unicode SBCS None
ASCII/Unicode MBCS X'7F'
Table 143 identifies the acceptable data type forms for the delimited file format that
the LOAD and UNLOAD utilities use.
Table 143. Acceptable data type forms for delimited files
CHAR, VARCHAR
   Acceptable form for loading: a delimited or non-delimited character string.
   Form created by unloading: character data that is enclosed by character
   delimiters. For VARCHAR, length bytes do not precede the data in the string.
GRAPHIC (any type)
   Acceptable form for loading: a delimited or non-delimited character stream.
   Form created by unloading: data that is unloaded as a delimited character
   string. For VARGRAPHIC, length bytes do not precede the data in the string.
INTEGER (any type)
   Acceptable form for loading: a stream of characters that represents a number
   in EXTERNAL format.
   Form created by unloading: numeric data in external format.
DECIMAL (any type)
   Acceptable form for loading: a character stream that represents a number in
   EXTERNAL format.
   Form created by unloading: a string of characters that represents a number.
FLOAT
   Acceptable form for loading: a representation of a number in the range
   -7.2E+75 to 7.2E+75 in EXTERNAL format.
   Form created by unloading: a string of characters that represents a number in
   floating-point notation.
BLOB, CLOB
   Acceptable form for loading: a delimited or non-delimited character string.
   Form created by unloading: character data that is enclosed by character
   delimiters. Length bytes do not precede the data in the string.
DBCLOB
   Acceptable form for loading: a delimited or non-delimited character string.
   Form created by unloading: character data that is enclosed by character
   delimiters. Length bytes do not precede the data in the string.
DATE
   Acceptable form for loading: a delimited or non-delimited character string
   that contains a date value in EXTERNAL format.
   Form created by unloading: a string of characters that represents a date.
TIME
   Acceptable form for loading: a delimited or non-delimited character string
   that contains a time value in EXTERNAL format.
   Form created by unloading: a string of characters that represents a time.
TIMESTAMP
   Acceptable form for loading: a delimited or non-delimited character string
   that contains a timestamp value in EXTERNAL format.
   Form created by unloading: a string of characters that represents a timestamp.
| XML
|  Acceptable form for loading: a delimited or non-delimited character string.
|  Form created by unloading: a string of characters that represents an XML
|  document.
For bit strings, truncation occurs at a byte boundary. For character type data,
truncation occurs at a character boundary (a multi-byte character is not split). If
mixed-character data is truncated in an output field of fixed size, the
truncated string can be shorter than the specified field size. In this case, blanks in
the output CCSID are padded to the right. If the output data is in EBCDIC for a
mixed-character type field, truncation preserves the SO (shift-out) and the SI
(shift-in) characters around a DBCS substring.
The TRUNCATE option of the UNLOAD utility truncates string data, and it has a
different purpose than the SQL TRUNCATE scalar function.
The generated LOAD statement includes WHEN and INTO TABLE specifications
that identify the table where the rows are to be reloaded, unless the HEADER
NONE option was specified in the UNLOAD control statement. You need to edit
the generated LOAD statement if you intend to load the UNLOAD output data
into different tables than the original ones.
If multiple table spaces are to be unloaded and you want UNLOAD to generate
LOAD statements, you must specify a physically distinct data set for each table
space to PUNCHDDN by using a template that contains the table space as a
variable (&TS.).
If PUNCHDDN is not specified and the SYSPUNCH DD name does not exist, the
LOAD statement is not generated.
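A sketch (the list name, template names, and data set name patterns are
illustrative, and &US. is assumed here to resolve to the user ID) in which each
table space that is unloaded gets its own PUNCHDDN data set through a template
that uses &TS. as a variable:
   TEMPLATE PUNCHDS DSN &US..PUNCH.&TS.
                    UNIT SYSDA DISP (NEW,CATLG,CATLG)
   TEMPLATE UNLDDS  DSN &US..SMPLUNLD.&TS.
                    UNIT SYSDA DISP (NEW,CATLG,CATLG)
   LISTDEF UNLLIST INCLUDE TABLESPACE TDB1.*
   UNLOAD LIST UNLLIST PUNCHDDN PUNCHDS UNLDDN UNLDDS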
If the image copy data set is an incremental copy or a copy of pieces that does not
contain a dictionary, the FROMCOPYDDN option can be used for a DD name to
concatenate the data set with the corresponding full image copy that contains the
dictionary. If SYSTEMPAGES YES is used, a dictionary will always be available in
the incremental copies or pieces. For more information, see “FROMCOPYDDN” on
page 667.
When the source is one or more image copy data sets (when FROMCOPY or
FROMCOPYDDN is specified), UNLOAD always starts processing from the
beginning.
Claims and drains: Table 145 shows which claim classes UNLOAD drains and the
restrictive states that the utility sets.
Table 145. Claim classes of UNLOAD operations
Target                                                                        UNLOAD    UNLOAD PART
Table space or physical partition of a table space with SHRLEVEL REFERENCE    DW/UTRO   DW/UTRO
Table space or physical partition of a table space with SHRLEVEL CHANGE       CR/UTRW   CR/UTRW
Image copy*                                                                   CR/UTRW   CR/UTRW
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v UTRO: Utility restrictive state, read-only access allowed
v CR: Claim read, concurrent access for SQL writers and readers
v UTRW: Utility restrictive state; read-write access allowed
Note: * If the target object is an image copy, the UNLOAD utility applies CR/UTRW to the
corresponding table space or physical partitions to prevent the table space from being
dropped while data is being unloaded from the image copy, even though the UNLOAD
utility does not access the data in the table space.
Compatibility: The compatibility of the UNLOAD utility and the other utilities on
the same target objects is shown in Table 146. If the SHRLEVEL REFERENCE
option is specified, only SQL read operations are allowed on the same target
objects; otherwise SQL INSERT, DELETE, and UPDATE are also allowed. If the
target object is an image copy, INSERT, DELETE, and UPDATE are always allowed
on the corresponding table space. In any case, DROP or ALTER cannot be applied
to the target object while the UNLOAD utility is running.
Table 146. Compatibility of UNLOAD with other utilities
Action                  UNLOAD SHRLEVEL REFERENCE   UNLOAD SHRLEVEL CHANGE   UNLOAD FROM IMAGE COPY
CHECK DATA DELETE NO    Yes                         Yes                      Yes
CHECK DATA DELETE YES   No                          No                       Yes
CHECK INDEX             Yes                         Yes                      Yes
CHECK LOB               Yes                         Yes                      Yes
COPY INDEXSPACE         Yes                         Yes                      Yes
COPY TABLESPACE         Yes                         Yes                      Yes*
DIAGNOSE                Yes                         Yes                      Yes
LOAD SHRLEVEL CHANGE    No                          Yes                      Yes
LOAD SHRLEVEL NONE      No                          No                       Yes
MERGECOPY               Yes                         Yes                      No
MODIFY RECOVERY         Yes                         Yes                      No
MODIFY STATISTICS       Yes                         Yes                      Yes
QUIESCE                 Yes                         Yes                      Yes
REBUILD INDEX           Yes                         Yes                      Yes
The output from this example might look similar to the following output:
000060@@STERN# 32250.00
000150@@ADAMSON# 25280.00
000200@@BROWN# 27740.00
000220@@LUTZ# 29840.00
200220@@JOHN# 29840.00
In this output:
v '@@' before the last name represents the 2-byte binary field that contains the
length of the VARCHAR field LASTNAME (for example, X'0005' for STERN).
v '#' represents the NULL indicator byte for the nullable SALARY field.
v Because the SALARY column is declared as DECIMAL (9,2) on the table, the
default output length of the SALARY field is 11 (9 digits + sign + decimal point),
not including the NULL indicator byte.
v LASTNAME is unloaded as a variable-length field because the NOPAD option is
specified.
Example 3: Unloading data from an image copy. The FROMCOPY option in the
following control statement specifies that data is to be unloaded from a single
image copy data set, JUKWU111.FCOPY1.STEP1.FCOPY1.
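The control statement itself is not reproduced here; a sketch of what it might
look like follows, with the table space name assumed for illustration:
   UNLOAD TABLESPACE DBKW1501.TSKW1501
     FROMCOPY JUKWU111.FCOPY1.STEP1.FCOPY1
     PUNCHDDN SYSPUNCH UNLDDN SYSREC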
Example 5: Unloading data from two tables in a segmented table space. The
following control statement specifies that data from table ADMF001.TBKW1504
and table ADMF001.TBKW1505 is to be unloaded from the segmented table space
DBKW1502.TSKW1502. The PUNCHDDN option indicates that UNLOAD is to
generate LOAD utility control statements and write them to the SYSPUNCH data
set, which is the default. The UNLDDN option specifies that the data is to be
unloaded to the data set that is defined by the SYSREC DD statement, which is
also the default.
UNLOAD TABLESPACE DBKW1502.TSKW1502
PUNCHDDN SYSPUNCH UNLDDN SYSREC
FROM TABLE ADMF001.TBKW1504
FROM TABLE ADMF001.TBKW1505
Assume that table space TDB1.TSP1, which contains table TCRT.TTBL, has three
partitions. Because the table space is partitioned and each partition is associated
with an output data set that is defined by the UNLDDS template, the UNLOAD
job runs in parallel in a multi-processor environment. The number of parallel tasks
is determined by the number of available processors.
Figure 114. Example of unloading data in parallel from a partitioned table space
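The statements in Figure 114 are not reproduced here; under the assumptions in the
surrounding text, a sketch might look like the following (the template DSN
pattern, with &US. standing for the user ID, is illustrative):
   TEMPLATE UNLDDS DSN &US..SMPLUNLD.&TS..P&PA.
                   UNIT SYSDA DISP (NEW,CATLG,CATLG)
   UNLOAD TABLESPACE TDB1.TSP1 PART 1:3
     UNLDDN UNLDDS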
Assume that the user ID is USERID. This UNLOAD job creates the following three
data sets to store the unloaded data:
v USERID.SMPLUNLD.TSP1.P00001 ... contains rows from partition 1.
v USERID.SMPLUNLD.TSP1.P00002 ... contains rows from partition 2.
v USERID.SMPLUNLD.TSP1.P00003 ... contains rows from partition 3.
The data is to be unloaded to data sets that are defined by the UNLDDS template.
For more information about TEMPLATE control statements, see “Syntax and
options of the TEMPLATE control statement ” on page 641 in the TEMPLATE
chapter.
Figure 115. Example of using a LISTDEF utility statement to specify partitions to unload
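The statements in Figure 115 are not reproduced here; a sketch that is consistent
with the results that follow (the LISTDEF name and the DSN pattern, with &US.
standing for the user ID, are illustrative) could be:
   LISTDEF UNLLIST INCLUDE TABLESPACE TDB1.TSP1 PARTLEVEL(1)
                   INCLUDE TABLESPACE TDB1.TSP1 PARTLEVEL(3)
   TEMPLATE UNLDDS DSN &US..SMPLUNLD.&TS..P&PA.
                   UNIT SYSDA DISP (NEW,CATLG,CATLG)
   UNLOAD LIST UNLLIST UNLDDN UNLDDS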
Assume that the user ID is USERID. This UNLOAD job creates the following two
data sets to store the unloaded data:
v USERID.SMPLUNLD.TSP1.P00001 ... contains rows from partition 1.
v USERID.SMPLUNLD.TSP1.P00003 ... contains rows from partition 3.
The UNLDDN option specifies that the data is to be unloaded to data sets that are
defined by the UNLDDS template. The PUNCHDDN option specifies that
UNLOAD is to generate LOAD utility control statements and write them to the
data sets that are defined by the PUNCHDS template. For more information about
TEMPLATE control statements, see “Syntax and options of the TEMPLATE control
statement ” on page 641 in the TEMPLATE chapter.
Assume that the user ID is USERID. This UNLOAD job creates the following two
data sets to store the unloaded data:
v USERID.SMPLUNLD.TSP1 ... contains rows from table space TDB1.TSP1.
v USERID.SMPLUNLD.TSP2 ... contains rows from table space TDB1.TSP2.
The column delimiter is specified by the COLDEL option as a semicolon (;), the
character string delimiter is specified by the CHARDEL option as a pound sign (#),
and the decimal point character is specified by the DECPT option as an
exclamation point (!).
The EBCDIC option indicates that all output character data is to be in EBCDIC.
//*
//STEP3 EXEC DSNUPROC,UID='JUQBU105.UNLD1',
// UTPROC='',
// SYSTEM='SSTR'
//UTPRINT DD SYSOUT=*
//SYSREC DD DSN=JUQBU105.UNLD1.STEP3.TBQB0501,DISP=(MOD,DELETE,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSPUNCH DD DSN=JUQBU105.UNLD1.STEP3.SYSPUNCH,
// DISP=(MOD,CATLG,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSIN DD *
UNLOAD TABLESPACE DBQB0501.TSQB0501
DELIMITED CHARDEL '#' COLDEL ';' DECPT '!'
PUNCHDDN SYSPUNCH
UNLDDN SYSREC EBCDIC
FROM TABLE ADMF001.TBQB0501
(RECID POSITION(*) CHAR,
CHAR7SBCS POSITION(*) CHAR,
CHAR7SBIT POSITION(*) CHAR(7),
VCHAR20 POSITION(*) VARCHAR,
VCHAR20SBCS POSITION(*) VARCHAR,
VCHAR20BIT POSITION(*) VARCHAR)
/*
Example 10: Converting character data. For this example, assume that table
DSN8810.DEMO_UNICODE contains character data in Unicode. The UNLOAD
control statement in Figure 118 specifies that the utility is to unload the data in this
table as EBCDIC data.
UNLOAD
EBCDIC
TABLESPACE DSN8D81E.DSN8S81U
FROM TABLE DSN8810.DEMO_UNICODE
Example 11: Unloading LOB data to a file. The UNLOAD control statement in
Figure 119 specifies that the utility is to unload data from table
DSN8910.EMP_PHOTO_RESUME into the data set that is identified by the
SYSREC DD statement. Data in the EMPNO field is six bytes of character data, as
indicated by the CHAR(6) option, and is unloaded directly into the SYSREC data
set. Data in the RESUME column is CLOB data as indicated by the CLOBF option.
This CLOB data is to be unloaded to the files identified by the LOBFRV template,
which is defined in the preceding TEMPLATE statement. If these files do not
already exist, DB2 creates them. The names of these files are stored in the SYSREC
data set. The length of the file name to be stored in this data set can be up to 255
bytes as specified by the VARCHAR option.
UNLOAD DATA
FROM TABLE DSN8910.EMP_PHOTO_RESUME
(EMPNO CHAR(6),
RESUME VARCHAR(255) CLOBF LOBFRV)
SHRLEVEL CHANGE
| Example 12: Unloading data from clone tables. The UNLOAD control statement
| specifies that the utility is to unload data from only clone tables in the specified
| table spaces. The PUNCHDDN option specifies that the SYSPUNCH data set is to
| receive the LOAD utility control statements that the UNLOAD utility generates.
| UNLOAD TABLESPACE DBKQRE01.TPKQRE01
| FROM TABLE ADMF001.TBKQRE01_CLONE
| PUNCHDDN SYSPUNCH UNLDDN SYSREC
| CLONE
Utility control statements and parameters define the function that a utility job
performs. Some stand-alone utilities read the control statements from an input
stream, and others obtain the function definitions from JCL EXEC PARM
parameters.
The following utilities read control statements from the input stream file of the
specified DD name:
Utility                            DD name
DSNJU003 (change log inventory)    SYSIN
DSNJU004 (print log map)           SYSIN (optional)
DSN1LOGP                           SYSIN
DSN1SDMP                           SDMPIN
Utility control statements are read from the DD name input stream. The statements
in that stream must conform to the following rules:
v The logical record length (LRECL) must be 80 characters. Columns 73 through
80 are ignored.
v The records are concatenated into a single stream before they are parsed. No
concatenation character is necessary.
v The SYSIN stream can contain multiple utility control statements.
Ensure that the parameters that you specify obey the following OS/390 JCL EXEC
PARM parameter specification rules:
Environment
Execute the DSNJCNVB utility as a batch job only when DB2 is not running.
Authorization required
The authorization ID of the DSNJCNVB job must have the requisite RACF
authorization.
Prerequisite actions
If you have migrated to a new version of DB2, you need to create a larger BSDS
before converting it. See the DB2 Installation Guide for instructions on how to create
a larger BSDS. For a new installation, you do not need to create a larger BSDS.
DB2 provides a larger BSDS definition in installation job DSNTIJIN; however, if
you want to convert the BSDS, you must still run DSNJCNVB.
Control statement
See “Sample DSNJCNVB control statement ” on page 726 for an example of using
DSNJCNVB to convert the BSDS.
SYSUT2 Specifies the BSDS copy 2 data set that DSNJCNVB is to use as
input. This statement is optional.
Specify this statement if you are using dual BSDSs and you want
to convert both with a single execution of DSNJCNVB. You can run
DSNJCNVB separately for each copy if desired.
SYSPRINT Specifies a data set or print spool class for print output. This
statement is required. The logical record length (LRECL) is 125.
Running DSNJCNVB
Use the following EXEC statement to execute this utility:
// EXEC PGM=DSNJCNVB
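A minimal job step sketch with illustrative data set names (your BSDS and load
library names will differ); SYSUT1 identifies the copy 1 BSDS and SYSUT2 the
optional copy 2 BSDS:
//CNVB     EXEC PGM=DSNJCNVB
//STEPLIB  DD DSN=DSN910.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC910.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DSNC910.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*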
DSNJCNVB output
The following example shows sample DSNJCNVB output:
CONVERSION OF BSDS DATA SET - COPY 1, DSN=DSNC810.BSDS01
SYSTEM TIMESTAMP - DATE=2003.199 LTIME= 9:40:58.74
UTILITY TIMESTAMP - DATE=2003.216 LTIME=14:26:02.21
PREVIOUS HIKEY - 04000053
NEW HIKEY - 040002F0
RECORDS ADDED - 669
DSNJ260I DSNJCNVB BSDS CONVERSION FOR DDNAME=SYSUT1 COMPLETED SUCCESSFULLY
DSNJ200I DSNJCNVB CONVERT BSDS UTILITY PROCESSING COMPLETED SUCCESSFULLY
Environment
Run DSNJLOGF as a z/OS job.
Control statement
See “Sample DSNJLOGF control statement” for an example of using DSNJLOGF to
preformat the active log data sets.
//JOBLIB DD DSN=DSN910.SDSNLOAD,DISP=SHR
//STEP1 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC910.LOGCOPY1.DS01,DISP=SHR
//STEP2 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC910.LOGCOPY1.DS02,DISP=SHR
//STEP3 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC910.LOGCOPY2.DS01,DISP=SHR
//STEP4 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC910.LOGCOPY2.DS02,DISP=SHR
DSNJLOGF output
The following sample shows the DSNJLOGF output for the first data set in the
sample control statement in Figure 120.
DSNJ991I DSNJLOGF START OF LOG DATASET PREFORMAT FOR JOB LOGFRMT STEP1
DSNJ992I DSNJLOGF LOG DATA SET NAME = DSNC910.LOGCOPY1.DS01
DSNJ996I DSNJLOGF LOG PREFORMAT COMPLETED SUCCESSFULLY, 00015000
RECORDS FORMATTED
NEWLOG statement
,COPY1
,COPY2 ,STARTRBA=startrba,ENDRBA=endrba
,CATALOG=NO
,COPY1VOL=vol-id ,STARTRBA=startrba,ENDRBA=endrba,UNIT=unit-id
,COPY2VOL=vol-id ,CATALOG=YES
STRTLRSN=startlrsn,ENDLRSN=endlrsn
DELETE statement
DELETE DSNAME=data-set-name
,COPY1VOL=vol-id
,COPY2VOL=vol-id
CCSIDS
CRESTART statement
create-spec:
|
,STARTRBA=startrba ,ENDRBA=endrba ,CHKPTRBA=chkptrba
,ENDLRSN=endlrsn
,SYSPITR=log-truncation-point
,ENDTIME=log-truncation-timestamp
,SYSPITRT=log-truncation-timestamp
,FORWARD=YES ,BACKOUT=YES
,FORWARD=NO ,BACKOUT=NO
,CSRONLY
NEWCAT statement
NEWCAT VSAMCAT=catalog-name
| DDF statement
|
| DDF ip-spec
lu-spec
no-spec
|
|
| ip-spec:
| ,
LOCATION=locname
PORT=port
RESPORT=resport
SECPORT=secport
,
ALIAS= alias-name
: alias-port
: alias-secport
: alias-port-:alias-secport
IPNAME=ipname
,
IPV4=IPV4-address
,GRPIPV4=group-ipv4-addr
IPV6=IPV6-address
,GRPIPV6=group-ipv6-addr
| lu-spec:
| ,
LOCATION=locname
LUNAME=luname
PASSWORD=password
GENERIC=gluname
PORT=port
RESPORT=resport
,
ALIAS= alias-name
:alias-port
| no-spec:
| NOPASSWD
NGENERIC
NOALIAS
NOIPV4 , NGRPIPV4
NOIPV6 , NGRPIPV6
NGRPIPV4
NGRPIPV6
NOIPNAME
NOLUNAME
CHECKPT statement
HIGHRBA statement
Option descriptions
“Creating utility control statements” on page 723 provides general information
about specifying options for DB2 utilities.
NEWLOG Declares one of the following data sets:
v A VSAM data set that is available for use as an active log data
set.
Use only the keywords DSNAME=, COPY1, and COPY2.
v An active log data set that is replacing one that encountered an
I/O error.
Use only the keywords DSNAME=, COPY1, COPY2,
STARTRBA=, and ENDRBA=.
v An archive log data set volume.
Use only the keywords DSNAME= ,COPY1VOL=, COPY2VOL=,
STARTRBA=, ENDRBA=, UNIT=, CATALOG=, STRTLRSN=, and
ENDLRSN=.
If you create an archive log data set and add it to the BSDS with
this utility, it is possible to specify a name that DB2 might also
generate later.
DB2 generates archive log data set names of the form
DSNCAT.ARCHLOGx.Annnnnnn where:
– DSNCAT and ARCHLOG are parts of the data set prefix that
you specified on installation panels DSNTIPA2 and DSNTIPH.
– x is 1 for the first copy of the logs, and 2 is for the second
copy.
– Annnnnnn represents the series of low-level qualifiers that
DB2 generates for archive log data set names, beginning with
A0000001, and incrementing to A0000002, A0000003, and so
forth.
For data sharing, the naming convention is
DSNCAT.ARCHLOG1 or DSNCAT.DSN1.ARCLG1.
If you do specify a name by using the same naming convention
as DB2, you receive a dynamic allocation error when DB2
generates that name. The error message, DSNJ103I, is issued
once. DB2 then increments the low-level qualifier to generate the
next data set name in the series and offloads to it the next time
DB2 archives. (The active log that previously was not offloaded
is offloaded to this data set.)
The newly defined active logs cannot specify a start and end
LRSN. When DB2 starts, it reads the new active log data sets
with an RBA range to determine the LRSN range, and updates
the start and end LRSN in the BSDS for the new log data sets.
The start and end LRSN for new active logs that contain active
log data are read at DB2 start-up time from the new active log
data sets that are specified in the change log inventory
NEWLOG statements. For new archive logs that are defined
with change log inventory, the user must specify the start and
end RBAs. For data sharing, the user must also specify the start
and end LRSNs. DB2 startup does not attempt to find these
values from the new archive log data sets.
DSNAME=data-set-name
Specifies a log data set.
data-set-name can be up to 44 characters long.
COPY1 Makes the data set an active log copy-1 data set.
COPY2 Makes the data set an active log copy-2 data set.
STARTRBA=startrba
startrba is a hexadecimal number of up to 12 characters. If you use
fewer than 12 characters, leading zeros are added. startrba must
end with '000'; otherwise, DB2 returns a DSNJ438I error message.
You can obtain the RBA from messages or by printing the log map.
On the NEWLOG statement, startrba gives the log RBA of the
beginning of the replacement active log data set or the archive log
data set volume that is specified by DSNAME.
On the CRESTART statement, startrba is the earliest RBA of the
log that is to be used during restart. If you omit STARTRBA, DB2
determines the beginning of the log range.
On the CHECKPT statement, startrba indicates the start checkpoint
log record.
STARTRBA is required when STARTIME is specified.
On the HIGHRBA statement, startrba denotes the log RBA of the
highest-written log record in the active log data sets.
ENDRBA=endrba
endrba is a hexadecimal number of up to 12 characters. If you use
fewer than 12 characters, leading zeros are added. endrba must end
with 'FFF'; otherwise, DB2 returns a DSNJ438I error message.
On the NEWLOG statement, endrba gives the log RBA (relative
byte address within the log) of the end of the replacement active
log data set or the archive log data set volume that is specified by
DSNAME.
On the CRESTART statement, endrba is the last RBA of the log
that is to be used during restart, and it is also the starting RBA of
the next active log that is written after restart. Any log information
in the bootstrap data set, the active logs, and the archive logs with
an RBA that is greater than endrba is discarded. If you omit
ENDRBA, DB2 determines the end of the log range.
The value of ENDRBA must be a multiple of 4096. (The
hexadecimal value must end in 000.) Also, the value must be
greater than or equal to the value of STARTRBA. If STARTRBA and
ENDRBA are equal, the next restart is a cold start; that is, no log
records are processed during restart. The specified RBA becomes
the beginning RBA of the new log.
On the CHECKPT statement, endrba indicates the end checkpoint
log record that corresponds to the start checkpoint log record.
COPY1VOL=vol-id
vol-id is the volume serial of the copy-1 archive log data set that is
specified after DSNAME.
COPY2VOL=vol-id
vol-id is the volume serial of the copy-2 archive log data set that is
specified after DSNAME.
UNIT=unit-id unit-id is the device type of the archive log data set that is named
after DSNAME.
CATALOG Indicates whether the archive log data set is to be cataloged.
NO Indicates that the archive log data set is not to be cataloged.
In this format:
yyyy Indicates the year (1989-2099).
ddd Indicates the day of the year (0-365; 366 in leap years).
hh Indicates the hour (0-23).
mm Indicates the minutes (0-59).
ss Indicates the seconds (0-59).
t Indicates tenths of a second.
there exists a log record with an LRSN that is greater than or equal
to the specified LRSN value. Use the same LRSN value for all
members of the data sharing group that require log truncation.
You cannot specify any other option with CREATE, SYSPITR. You
can run this option of the utility only after new-function mode is
enabled.
| ENDTIME=log-truncation-timestamp
| Specifies an end time value that is to be used as the log truncation
| point. A valid truncation point is any GMT timestamp for which
| there exists a log record with a timestamp that is greater than or
| equal to the specified timestamp value. Any log information in the
| bootstrap data set, the active logs, and the archive logs with a
| timestamp greater than the ENDTIME is discarded. If you do not
| specify ENDTIME, DB2 determines the end of the log range.
| You cannot specify any other option with CREATE, ENDTIME. You
| can run this option of the utility only after new-function mode is
| enabled.
| SYSPITRT=log-truncation-timestamp
| Specifies the timestamp value that represents the point-in-time log
| truncation point for system recovery. Before you run the RESTORE
| SYSTEM utility to recover system data, you must use the SYSPITR
| or SYSPITRT option of DSNJU003. The options enable you to create
| a conditional restart control record to truncate the logs for system
| point-in-time recovery.
| Log-truncation-timestamp specifies a timestamp value that is to be
| used as the log truncation point. A valid log truncation point is
| any GMT timestamp for which there exists a log record with a
| timestamp that is greater than or equal to the specified timestamp
| value. Any log information in the bootstrap data set, the active
| logs, and the archive logs with a timestamp greater than SYSPITRT
| is discarded. If you omit SYSPITRT, DB2 determines the end of the
| log range. Use the same timestamp value for all members of the
| data sharing group that require log truncation.
| You cannot specify any other option with CREATE, SYSPITRT. You
| can run this option of the utility only after new-function mode is
| enabled.
CANCEL On the CRESTART statement, deactivates the currently active
conditional restart control record. The record remains in the BSDS
as historical information.
No other keyword can be used with CANCEL on the CRESTART
statement.
On the CHECKPT statement, deletes the checkpoint queue entry
that contains a starting RBA that matches the parameter that is
specified by the STARTRBA keyword.
CHKPTRBA=chkptrba
Identifies the log RBA of the start of the checkpoint record that is
to be used during restart.
If you use STARTRBA or ENDRBA, and you do not use
CHKPTRBA, the DSNJU003 utility selects the RBA of an
appropriate checkpoint record. If you do use CHKPTRBA, you
override the value that is selected by the utility.
chkptrba must be in the range that is determined by startrba and
endrba or their default values.
If possible, do not use CHKPTRBA; let the utility determine the
RBA of the checkpoint record.
CHKPTRBA=0 overrides any selection by the utility; at restart, DB2
attempts to use the most recent checkpoint record.
FORWARD= Indicates whether to use the forward-log-recovery phase of DB2
restart, which reads the log in a forward direction to recover any
units of recovery that were in one of the following two states when
DB2 was last stopped:
v Indoubt (the units of recovery had finished the first phase of
commit, but had not started the second phase)
v In-commit (had started but had not finished the second phase of
commit)
For a complete description of the forward-log-recovery phase, see
Part 4 of DB2 Administration Guide.
YES Allows forward-log recovery.
If you specify a cold start (by using the same value for
STARTRBA and ENDRBA), no recovery processing is
performed.
NO Terminates forward-log recovery before log records are
processed. When you specify FORWARD=NO, DB2 does
not go back in the log to the beginning of any indoubt or
in-commit units of recovery to complete forward recovery
for these units. Choose this option if a very old indoubt
unit of recovery exists to avoid a lengthy restart. The
in-commit and indoubt units of recovery are marked as
bypassed and complete in the log. However, any database
writes that are pending at the end of the log, including
updates from other units of recovery, are still written out
during the forward phase of restart. Any updates that must
be rolled back, such as for an inflight or in-abort unit of
recovery, are done during the backout phase of restart.
BACKOUT= Indicates whether to use the backward-log-recovery phase of DB2
restart, which rolls back any units of recovery that were in one of
the following two states when DB2 was last stopped:
v Inflight (did not complete the first phase of commit)
v In-abort (had started but not finished an abort)
YES Allows backward-log recovery.
If you specify a cold start (by using the same value for
STARTRBA and ENDRBA), no recovery processing is
performed.
Environment
Execute the change log inventory utility only as a batch job when DB2 is not
running. Changing a BSDS for a data-sharing member by using DSNJU003 might
cause a log read request from another data-sharing member to fail. The failure
occurs only if the second member tries to access the changed BSDS before the first
member is started.
Authorization required
The authorization ID of the DSNJU003 job must have the requisite RACF
authorization.
Control statement
See “Syntax and options of the DSNJU003 control statement” on page 729 for
DSNJU003 syntax and option descriptions.
Optional statements
The change log inventory utility provides the following statements:
v NEWLOG
v DELETE
v SYSTEMDB
v CRESTART
v NEWCAT
v DDF
v CHECKPT
v HIGHRBA
You can use each statement as many times as necessary. In each statement, separate the
operation name from the first parameter by one or more blanks. You can specify
parameters in any order; separate them with commas and no blanks. Do not split a
parameter specification across two SYSIN records.
Running DSNJU003
Execute the utility with the following statement, which can be included only in a
batch job:
//EXEC PGM=DSNJU003
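A complete DSNJU003 job might look like the following sketch. The load library, BSDS, and log data set names are taken from other examples in this chapter and are illustrative; replace the SYSIN statement with the control statements that you need.
//CHGLOG   EXEC PGM=DSNJU003
//STEPLIB  DD DSN=DSN910.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC910.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DSNC910.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
NEWLOG DSNAME=DSNC910.LOGCOPY2.DS05,COPY2
/*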
To copy the contents of an old active log data set to the new one, you can also give
the RBA range and the starting and ending timestamp on the NEWLOG statement.
To archive to disk when the size of your active logs has increased, you might find
it necessary to increase the size of your archive data set primary and secondary
space quantities in DSNZPARM.
Deleting: To delete information about an active log data set from the BSDS, you
might specify the following statements:
DELETE DSNAME=DSNC910.LOGCOPY1.DS01
DELETE DSNAME=DSNC910.LOGCOPY2.DS01
Recording: To record information about an existing active log data set in the BSDS,
you might specify the following statement:
NEWLOG DSNAME=DSNC910.LOGCOPY2.DS05,COPY2,STARTIME=19910212205198,
ENDTIME=19910412205200,STARTRBA=43F8000,ENDRBA=65F3FFF
You can insert a record of that information into the BSDS for any of these reasons:
v The data set has been deleted and is needed again.
v You are copying the contents of one active log data set to another data set (copy
1 to copy 2).
v You are recovering the BSDS from a backup copy.
Enlarging: When DB2 is inactive (down), use one of the following procedures.
If you can use the Access Method Services REPRO command, follow these steps:
1. Stop DB2. This step is required because DB2 allocates all active log data sets
when it is active.
2. Use the Access Method Services ALTER command with the NEWNAME option
to rename your active log data sets.
3. Use the Access Method Services DEFINE command to define larger active log
data sets. Refer to installation job DSNTIJIN to see the definitions that create
the original active log data sets. See DB2 Installation Guide.
By reusing the old data set names, you don’t need to run the change log
inventory utility to establish new names in the BSDSs. The old data set names
and the correct RBA ranges are already in the BSDSs.
4. Use the Access Method Services REPRO command to copy the old (renamed)
data sets into their respective new data sets.
5. Start DB2.
If you cannot use the Access Method Services REPRO command, follow this
procedure:
1. Ensure that all active log data sets except the current active log data sets have
been archived. Active log data sets that have been archived are marked
REUSABLE in print log map utility (DSNJU004) output.
2. Stop DB2.
3. Rename or delete the reusable active logs. Allocate new, larger active log data
sets with the same names as the old active log data sets.
4. Run the DSNJLOGF utility to preformat the new log data sets.
5. Run the change log inventory utility (DSNJU003) with the DELETE statement
to delete all active logs except the current active logs from the BSDS.
6. Run the change log inventory utility with the NEWLOG statement to add
back to the BSDS the active logs that you just deleted. So that the logs are
added as empty, do not specify an RBA range. (A sketch of the control
statements for steps 5 and 6 follows this procedure.)
7. Start DB2.
8. Issue the ARCHIVE LOG command to cause DB2 to truncate the current active
logs and switch to one of the new sets of active logs.
9. Repeat steps 2 through 7 to enlarge the active logs that were just archived.
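For one pair of active log data sets, the control statements for steps 5 and 6 might look like the following sketch; the data set names are the same sample names that are used earlier in this section. Because no RBA range is specified on the NEWLOG statements, the data sets are added as empty logs.
//SYSIN    DD *
DELETE DSNAME=DSNC910.LOGCOPY1.DS01
DELETE DSNAME=DSNC910.LOGCOPY2.DS01
NEWLOG DSNAME=DSNC910.LOGCOPY1.DS01,COPY1
NEWLOG DSNAME=DSNC910.LOGCOPY2.DS01,COPY2
/*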
Although log data sets do not all need to be the same size, using the same size is
more consistent and efficient from an operational standpoint. If the log data sets
are not the same size, tracking your system’s logs can be more difficult. If dual
data sets have different sizes, space is wasted because each pair fills only to the
size of the smaller data set, leaving the remaining space on the larger one unused.
If you are archiving to disk and the size of your active logs has increased, you
might need to increase the size of your archive log data sets. However, because of
DFSMS disk management limits, you must specify less than 64 000 tracks for the
primary space quantity. See the PRIMARY QUANTITY and SECONDARY QTY
fields on installation panel DSNTIPA to modify the primary and secondary
allocation space quantities. See DB2 Installation Guide for more information.
Deleting: To delete an entire archive log data set from one or more volumes, you
might specify the following statement:
DELETE DSNAME=DSNC910.ARCHLOG1.D89021.T2205197.A0000015,COPY1VOL=DSNV04
To specify a cold start, make the values of STARTRBA and ENDRBA equal with a
statement similar to the following statement:
CRESTART CREATE,STARTRBA=4A000,ENDRBA=4A000
In most cases when doing a cold start, you should make sure that the STARTRBA
and ENDRBA are set to an RBA value that is greater than the highest used RBA.
| To truncate the DB2 logs via conditional restart by specifying a timestamp rather
| than an RBA value, use a statement similar to the following statement:
| CRESTART CREATE,ENDTIME=20051402030068
| An existing conditional restart control record governs any START DB2 operation
until one of these events occurs:
v A restart operation completes.
v A CRESTART CANCEL statement is issued.
v A new conditional restart control record is created.
Use the print log map utility before and after running the change log inventory
utility to ensure correct execution and to document changes.
When using dual active logs, choose a naming convention that distinguishes the
primary and secondary active log data sets. The naming convention should also
identify the log data sets within the series of primary or secondary active log data
sets. For example, the default naming convention that is established at DB2
installation time is:
prefix.LOGCOPYn.DSmm
In this convention, n=1 for all primary log data sets, n=2 for all secondary log data
sets, and mm is the data set number within each series.
If a naming convention such as the default convention is used, pairs of data sets
with equal mm values are usually used together. For example,
DSNC120.LOGCOPY1.DS02 and DSNC120.LOGCOPY2.DS02 are used together.
However, after you run the change log inventory utility with the DELETE and
NEWLOG statements, the primary and secondary series can become
unsynchronized, even if the NEWLOG data set name that you specify is the same
as the old data set name. To avoid this situation, always do maintenance on both
data sets of a pair in the same change log inventory execution:
v Delete both data sets together.
v Define both data sets together with NEWLOG statements.
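For example, to delete and redefine the DS02 pair in a single change log inventory execution, the SYSIN stream might contain statements such as the following; the names follow the default naming convention that is shown above.
DELETE DSNAME=prefix.LOGCOPY1.DS02
DELETE DSNAME=prefix.LOGCOPY2.DS02
NEWLOG DSNAME=prefix.LOGCOPY1.DS02,COPY1
NEWLOG DSNAME=prefix.LOGCOPY2.DS02,COPY2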
To ensure consistent results, execute the change log inventory utility on the same
z/OS system on which the DB2 online subsystem executes.
If misused, the change log inventory utility can compromise the viability and
integrity of the DB2 subsystem. Only highly skilled people, such as the DB2 system
administrator, should use this utility, and then only after careful consideration.
Before initiating a conditional restart or cold restart, you should consider making
backup copies of all disk volumes that contain any DB2 data sets. This enables a
possible fallback. The backup data sets must be generated when DB2 is not active.
At startup, the DB2 system checks that the name that is recorded with NEWCAT in
the BSDS is the high-level qualifier of the DB2 system table spaces that are defined
in the load module for subsystem parameters.
NEWCAT is normally used only at installation time. See “Renaming DB2 system
data sets” on page 749 for an additional function of NEWCAT.
When you change the high-level qualifier by using the NEWCAT statement, you
might specify the following statements:
//S2 EXEC PGM=DSNJU003
//SYSUT1 DD DSN=DSNC120.BSDS01,DISP=OLD
//SYSUT2 DD DSN=DSNC120.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
NEWCAT VSAMCAT=DBP1
After you run the change log inventory utility with the NEWCAT statement, the
utility generates output similar to the following output:
NEWCAT VSAMCAT=DBP1
DSNJ210I OLD VSAM CATALOG NAME=DSNC120, NEW CATALOG NAME=DBP1
DSNJ225I NEWCAT OPERATION COMPLETED SUCCESSFULLY
DSNJ200I DSNJU003 CHANGE LOG INVENTORY UTILITY
PROCESSING COMPLETED SUCCESSFULLY
To modify the high-level qualifier for archive log data sets, you need to reassemble
DSNZPARM.
| Example 2: Deleting a data set. The following control statement specifies that
DSNJU003 is to delete data set DSNREPAL.A0001187 from the BSDS. The volume
serial number for the data set is DSNV04, as indicated by the COPY1VOL option.
DELETE DSNAME=DSNREPAL.A0001187,COPY1VOL=DSNV04
Example 7: Adding multiple aliases and alias ports to the BSDS. The following
control statement specifies five alias names for the communication record in the
BSDS (MYALIAS1, MYALIAS2, MYALIAS3, MYALIAS4, and MYALIAS5). Only
MYALIAS2 and MYALIAS5 support subsets of a data sharing group. Any alias
names that were specified in a previous DSNJU003 utility job are removed.
DDF ALIAS=MYALIAS1,MYALIAS2:8002,MYALIAS3,MYALIAS4,MYALIAS5:10001
Example 8: Specifying a point in time for system recovery. The following control
statement specifies that DSNJU003 is to create a new conditional restart control
record. The SYSPITR option specifies an end RBA value as the point in time for
system recovery for a non-data sharing system. For a data sharing system, use an
end LRSN value instead of an end RBA value. This point in time is used by the
RESTORE SYSTEM utility.
//JOBLIB DD DSN=USER.TESTLIB,DISP=SHR
// DD DSN=DSN910.SDSNLOAD,DISP=SHR
//STEP01 EXEC PGM=DSNJU003
//SYSUT1 DD DSN=DSNC910.BSDS01,DISP=OLD
//SYSUT2 DD DSN=DSNC910.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
CRESTART CREATE,SYSPITR=04891665D000
/*
In a data sharing environment, the DSNJU004 utility can list information from any
or all BSDSs of a data sharing group.
MEMBER *
MEMBER DDNAME
,
( member-name )
Option descriptions
The following keywords can be used in an optional control statement on the SYSIN
data set:
MEMBER
Specifies which member’s BSDS information to print.
* Prints the information from the BSDS of each member in
the data sharing group.
DDNAME Prints information from only those BSDSs that are pointed
to by the MxxBSDS DD statements.
(member-name) Prints information for only the named group members.
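For example, the following sketch prints BSDS information for two members by pointing to their BSDSs with MxxBSDS DD statements. The DD names and data set names are illustrative, as is the assumption that SYSUT1 identifies the BSDS that the job reads first; adjust them for your installation.
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=DSN910.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC910.BSDS01,DISP=SHR
//M01BSDS  DD DSN=DSNC910.BSDS01,DISP=SHR
//M02BSDS  DD DSN=DSNC918.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
MEMBER DDNAME
/*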
Environment
The DSNJU004 program runs as a batch job.
This utility can be executed either when DB2 is running or when it is not
running. However, to ensure consistent results from the utility job, the utility and
the DB2 online subsystem must both be executing under the control of the same
operating system.
Authorization required
The user ID of the DSNJU004 job must have requisite RACF authorization.
Control statement
See “DSNJU004 (print log map) syntax diagram” on page 753 for DSNJU004 syntax
and option descriptions. See “Sample DSNJU004 control statement” on page 755
for an example of a control statement.
Recommendations
v For dual BSDSs, execute the print log map utility twice, once for each BSDS, to
compare their contents.
v To ensure consistent results for this utility, execute the utility job on the same
z/OS system on which the DB2 online subsystem executes.
v Execute the print log map utility regularly, possibly daily, to keep a record of
recovery log data set usage.
v Use the print log map utility to document changes that are made by the change
log inventory utility.
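For example, the following two-step sketch runs the utility once against each copy of a dual BSDS so that the two reports can be compared; the data set names are illustrative.
//PRTBSDS1 EXEC PGM=DSNJU004
//STEPLIB  DD DSN=DSN910.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC910.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*
//PRTBSDS2 EXEC PGM=DSNJU004
//STEPLIB  DD DSN=DSN910.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC910.BSDS02,DISP=SHR
//SYSPRINT DD SYSOUT=*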
created, and the data set’s name (DSN), unit and volume of storage, and status.
| You might see consecutive active or archive log data sets with an end LRSN
| value that is the same as the beginning LRSN value of the next data set.
v Conditional restart control records. For a description of these records and the
format of this part of the output from the print log map utility, see “Reading
conditional restart control records” on page 764.
v The contents of the checkpoint description queue. For a description of this
output, see Figure 125 on page 764.
v Archive log command history. For a description of this output, see Figure 124 on
page 763.
v The distributed data facility (DDF) communication record. This record contains
the DB2-defined location name, any alias names for the location name, and the
VTAM-defined LU name. DB2 uses this information to establish the distributed
database environment.
v The tokens for all BACKUP SYSTEM utility records. The token identifies each
backup version that has been created.
v The RBA or LRSN when the subsystem was converted to enabling-new-function
mode.
The sample print log map utility output in Figure 121 is for a non-data-sharing
subsystem.
| ****************************************************************************************
| * *
| * LOG MAP OF THE BSDS DATA SET BELONGING TO MEMBER ’NO NAME ’ OF GROUP ’NO NAME ’. *
| * *
| ****************************************************************************************
| DSNJCNVB CONVERSION PROGRAM HAS RUN DDNAME=SYSUT1
| LOG MAP OF BSDS DATA SET COPY 1, DSN=DSNC910.BSDS01
| LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
| DATA SHARING MODE IS OFF
| SYSTEM TIMESTAMP - DATE=2007.011 LTIME=18:58:19.11
| UTILITY TIMESTAMP - DATE=2007.008 LTIME= 0:19:33.53
| VSAM CATALOG NAME=DSNC910
| HIGHEST RBA WRITTEN 0000384B7A15 2007.011 06:27:04.4
| HIGHEST RBA OFFLOADED 000000000000
| RBA WHEN CONVERTED TO V4 000000000000
| THIS BSDS HAS MEMBER RECORDS FOR THE FOLLOWING MEMBERS:
| HOST MEMBER NAME: ........
| MEMBER ID: 0
| GROUP NAME: ........
| BSDS COPY 1 DATA SET NAME: ............................................
| BSDS COPY 2 DATA SET NAME: ............................................
| ENFM START RBA/LRSN: 000000000000
| **** DISTRIBUTED DATA FACILITY ****
| COMMUNICATION RECORD
| 04:44:04 JANUARY 12, 2007
| LOCATION=STLEC1 IPNAME=(NULL) PORT=446 SPORT=NULL RPORT=5001
| ALIAS=(NULL)
| IPV4=NULL IPV6=NULL
| GRPIPV4=NULL GRPIPV6=NULL
| LUNAME=SYEC1DB2 PASSWORD=DB2PW1 GENERICLU=(NULL)
|
Figure 121. Sample print log map utility output for a non-data-sharing subsystem
The sample print log map utility output in Figure 122 on page 759 is for a member
of a data sharing group.
| *****************************************************************************************
| * *
| * LOG MAP OF THE BSDS DATA SET BELONGING TO MEMBER ’V91A ’ OF GROUP ’DSNCAT ’. *
| * *
| *****************************************************************************************
| DSNJCNVB CONVERSION PROGRAM HAS RUN DDNAME=SYSUT1
| LOG MAP OF BSDS DATA SET COPY 1, DSN=DSNC910.BSDS01
| LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
| DATA SHARING MODE IS ON
| SYSTEM TIMESTAMP - DATE=2006.299 LTIME= 8:58:20.49
| UTILITY TIMESTAMP - DATE=2007.012 LTIME=10:00:20.82
| VSAM CATALOG NAME=DSNC910
| HIGHEST RBA WRITTEN 00002B7458B4 2006.299 15:58:04.9
| HIGHEST RBA OFFLOADED 000000000000
| RBA WHEN CONVERTED TO V4 00000CF0F0A6
| MAX RBA FOR TORBA 00000CF0F0A6
| MIN RBA FOR TORBA 000000000000
| STCK TO LRSN DELTA 000000000000
| THIS BSDS HAS MEMBER RECORDS FOR THE FOLLOWING MEMBERS:
| HOST MEMBER NAME: V91A
| MEMBER ID: 1
| GROUP NAME: DSNCAT
| BSDS COPY 1 DATA SET NAME: DSNC910.BSDS01
| BSDS COPY 2 DATA SET NAME: DSNC910.BSDS02
| ENFM START RBA/LRSN: 000000000000
| MEMBER NAME: V91B
| MEMBER ID: 2
| GROUP NAME: DSNCAT
| BSDS COPY 1 DATA SET NAME: DSNC918.BSDS01
| BSDS COPY 2 DATA SET NAME: DSNC918.BSDS02
| **** DISTRIBUTED DATA FACILITY ****
| COMMUNICATION RECORD
| 18:05:25 JANUARY 12, 2007
| LOCATION=STLEC1 IPNAME=(NULL) PORT=446 SPORT=NULL RPORT=5001
| ALIAS=(NULL)
| IPV4=NULL IPV6=NULL
| GRPIPV4=NULL GRPIPV6=NULL
| LUNAME=SYEC1DB2 PASSWORD=DB2PW1 GENERICLU=SYEC1GLU
|
| ACTIVE LOG COPY 1 DATA SETS
| START RBA/LRSN/TIME END RBA/LRSN/TIME DATE LTIME DATA SET INFORMATION
| -------------------- -------------------- -------- ----- --------------------
| 000029DB8000 00002A78FFFF 2005.263 17:36 DSN=DSNC910.LOGCOPY1.DS01
| BF215D6C20F3 BF215E3D8600 PASSWORD=(NULL) STATUS=REUSABLE
| 2006.201 16:05:55.0 2006.201 16:09:34.6
| 00002A790000 00002B167FFF 2005.263 17:36 DSN=DSNC910.LOGCOPY1.DS02
| BF215E3D8600 BF2168CB4F59 PASSWORD=(NULL) STATUS=REUSABLE
| 2006.201 16:09:34.6 2006.201 16:56:47.6
| 00002B168000 00002BB3FFFF 2005.263 17:36 DSN=DSNC910.LOGCOPY1.DS03
| BF2168CB4F59 ............ PASSWORD=(NULL) STATUS=REUSABLE
| 2006.201 16:56:47.6 ........ ..........
| ARCHIVE LOG COPY 1 DATA SETS
| NO ARCHIVE DATA SETS DEFINED FOR THIS COPY
|
Figure 122. Sample print log map utility output for a member of a data sharing group
Timestamps in the output column LTIME are in local time. All other timestamps
are in Greenwich Mean Time (GMT).
Figure 121 on page 756 and Figure 122 on page 759 show example output from the
print log map utility. The following timestamps are included in the header section
of the reports:
System timestamp Reflects the date and time that the BSDS was last
updated. The BSDS can be updated by several
events:
v DB2 startup.
v During log write activities, whenever the write
threshold is reached.
Depending on the number of output buffers that
you have specified and the system activity rate,
the BSDS might be updated several times a
second, or it might not be updated for several
seconds, minutes, or even hours.
v Due to an error, DB2 might drop into
single-BSDS mode from its normal dual BSDS
mode. This action might occur when a request to
get, insert, point to, update, or delete a BSDS
record is unsuccessful. When this error occurs,
DB2 updates the timestamp in the remaining
BSDS to force a timestamp mismatch with the
disabled BSDS.
Utility timestamp The date and time that the contents of the BSDS
were altered by the change log inventory utility
(DSNJU003).
The following timestamps are included in the active and archive log data sets
portion of the reports:
Active log date The date on which the active log data set was
originally allocated on the DB2 subsystem.
Active log time The time at which the active log data set was
originally allocated on the DB2 subsystem.
Archive log date The date of creation (not allocation) of the archive
log data set.
Archive log time The time of creation (not allocation) of the archive
log data set.
The following timestamps are included in the conditional restart control record
portion of the report that is shown in Figure 126 on page 765:
Conditional restart control record
The current time and date. This data is reported for
information only and is not kept in the BSDS.
CRCR created The time and date of creation of the CRCR by the
CRESTART option in the change log inventory
utility.
Begin restart The time and date that the conditional restart was
attempted.
End restart The time and date that the conditional restart
ended.
STARTRBA (timestamp) The time at which the control interval was written.
ENDRBA (timestamp) The time at which the last control interval was
written.
Time of checkpoint The time and date that are associated with the
checkpoint record that was used during the
conditional restart process.
The following timestamps are included in the checkpoint queue and the DDF
communication record sections of the report that is shown in Figure 125 on page
764:
Checkpoint queue The current time and date. This data is reported for
information only and is not kept in the BSDS.
Time of checkpoint The time and date that the checkpoint was taken.
DDF communication record (heading)
The current time and date. This data is reported for
information only, and is not kept in the BSDS.
The status value for each active log data set is displayed in the print log map
utility output. The sample print log map output in Figure 123 shows how the
status is displayed.
The values in the TIME column of the ARCHIVE LOG COMMAND HISTORY
section of the report in Figure 124 on page 763 represent the time that the
ARCHIVE LOG command was issued. This time value is saved in the BSDS and is
converted to printable format at the time that the print log map utility is run.
Therefore, this value, when printed, can differ from other time values that were
recorded concurrently. Some time values are converted to printable format when
they are recorded and are then saved in the BSDS; those values remain the same
each time the report is printed.
CHECKPOINT QUEUE
15:54:57 FEBRUARY 04, 2003
TIME OF CHECKPOINT 15:54:37 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 0000400000EC
END CHECKPOINT RBA 00004000229A
TIME OF CHECKPOINT 15:53:44 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 00000B39E1EC
END CHECKPOINT RBA 00000B3A80A6
SHUTDOWN CHECKPOINT
TIME OF CHECKPOINT 15:49:40 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 00000B2E33E5
END CHECKPOINT RBA 00000B2E9C88
...
TIME OF CHECKPOINT 21:06:01 FEBRUARY 03, 2003
BEGIN CHECKPOINT RBA 00000A7AA19C
END CHECKPOINT RBA 00000A82C998
Use DSN1CHKR on a regular basis to promptly detect any damage to the catalog
and directory.
,
PARM= DUMP
FORMAT
,
HASH( hexadecimal-constant )
,
RID( integer,hexadecimal-constant )
,
HASH( hexadecimal-constant,integer )
,
PAGE( integer,hexadecimal-constant )
Option descriptions
The following parameters are optional. Specify parameters on the EXEC statement
in any order after the required JCL parameter PARM=. If you specify more than
one parameter, separate them with commas but no blanks. If you do not specify
any parameters, DSN1CHKR scans all table space pages for broken links and for
records that are not part of any link or chain, and prints the appropriate diagnostic
messages.
DUMP Specifies that printed table space pages, if any, are to be in dump
format. If you specify DUMP, you cannot specify the FORMAT
parameter.
FORMAT Specifies that printed table space pages, if any, are to be formatted
on output. If you specify FORMAT, you cannot specify the DUMP
parameter.
HASH(hexadecimal-constant, ...)
Specifies a hash value for a hexadecimal database identifier (DBID)
in table space DBD01. DSN1CHKR returns hash values for each
DBID in page form and in anchor point offset form.
hexadecimal-constant is the hash value for a DBID. The maximum
number of DBIDs is 10.
MAP= Identifies a record whose pointer is to be followed. DSN1CHKR
prints each record as it follows the pointer. Use this parameter only
after you have determined which chain is broken. You can
determine if a chain is broken by running DSN1CHKR without any
parameters, or with only FORMAT or DUMP.
The options for this parameter help DSN1CHKR locate the record
whose pointer it follows. Each option must point to the beginning
of the 6-byte prefix area of a valid record or to the beginning of the
hash anchor. If the value that you specify does not point to one of
these, DSN1CHKR issues an error message and continues with the
next pair of values.
ANCHOR(id,integer)
Specifies the anchor point that DSN1CHKR is to map.
id identifies the starting page and anchor point in the form
ppppppaa, where pppppp is the page number, and aa is the
anchor point number.
integer determines which pointer to follow while mapping. 0
specifies the forward pointer; 4 specifies the backward pointer.
The maximum number of pairs is five.
RID(integer, hexadecimal-constant, ...)
Identifies the record or hash anchor from which DSN1CHKR is
to start mapping.
integer is the page and record, in the form pppppprr, where
pppppp is the page number, and rr is the record number. These
values are in hexadecimal format.
hexadecimal-constant specifies the hexadecimal displacement
from the beginning of the record to the pointer in the record
from which mapping starts.
The maximum number of pairs is five.
HASH(hexadecimal-constant, integer, ...)
Specifies the value that DSN1CHKR is to hash and map for
table space DBD01.
hexadecimal constant is the database identifier in table space
DBD01.
integer determines which pointer to follow while mapping. 0
specifies the forward pointer; 4 specifies the backward pointer.
DSN1CHKR is a diagnosis tool; it executes outside the control of DB2. You should
have detailed knowledge of DB2 data structures to make proper use of this service
aid.
Environment
Run the DSN1CHKR program as a z/OS job.
Do not run DSN1CHKR on a table space while it is active under DB2. While
DSN1CHKR runs, do not run other database operations for the database and table
space that are to be checked. Use the STOP DATABASE command for the database
and table space that are to be checked.
Authorization required
This utility does not require authorization. However, if RACF protects any of the
data sets, the authorization ID must also have the necessary RACF authority.
Control statement
Create the utility control statement for the DSN1CHKR job. See “Syntax and
options of the DSN1CHKR control statement” on page 767 for DSN1CHKR syntax
and option descriptions.
Required data sets: DSN1CHKR uses two data definition (DD) statements. Specify
the data set for the utility’s output with the SYSPRINT DD statement. Specify the
first data set piece of the table space that is to be checked with the SYSUT1 DD
statement.
SYSPRINT Defines the data set that contains output messages from the
DSN1CHKR program and all hexadecimal dump output.
SYSUT1 Defines the input data set. This data set can be a DB2 data set or a
copy that is created by the DSN1COPY utility. Specify disposition
of this data set as DISP=OLD to ensure that it is not in use by DB2.
Set the data set’s disposition as DISP=SHR only when the STOP
DATABASE command has stopped the table space you want to
check.
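Putting the two DD statements together, a minimal DSN1CHKR step might look like the following sketch. The load library and table space data set names are the sample names that are used later in this section, and DISP=OLD assumes that the table space is not allocated to DB2.
//CHECK    EXEC PGM=DSN1CHKR,PARM=(FORMAT)
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNCAT.DSNDBC.DSNDB06.SYSDBASE.I0001.A001,DISP=OLD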
Restrictions
This section contains restrictions that you should be aware of before running
DSN1CHKR.
DSN1CHKR does not use full image copies that are created with the COPY utility.
If you create a full image copy with SHRLEVEL REFERENCE, you can copy it into
a VSAM data set with DSN1COPY and check it with DSN1CHKR.
DSN1CHKR cannot use full image copies that are created with DFSMSdss
concurrent copy. The DFSMSdss data set does not copy to a VSAM data set
because of incompatible formats.
Recommendation: First copy the stopped table space to a temporary data set by
using DSN1COPY. Use the DB2 naming convention for the copied data set. Run
DSN1CHKR on the copy, which frees the actual table space for restart to DB2.
When you run DSN1COPY, use the CHECK option to examine the table space for
page integrity errors. Although DSN1CHKR does check for these errors, running
DSN1COPY with CHECK prevents an unnecessary invocation of DSN1CHKR.
DSN1CHKR prints the chains, beginning with the pointers on the RID option in
the MAP (maintenance analysis procedure) parameter. In this example, the first
pointer is on page 000002, at an offset of 6 bytes from record 1. The second pointer
is on page 00000B, at an offset of 6 bytes from record 1.
//YOUR JOBCARD
//*
//JOBCAT DD DSNAME=DSNCAT1.USER.CATALOG,DISP=SHR
//STEP1 EXEC PGM=IDCAMS
//********************************************************************
//* ALLOCATE A TEMPORARY DATA SET FOR SYSDBASE *
//********************************************************************
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSIN DD *
DELETE -
(TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001) -
CATALOG(DSNCAT)
DEFINE CLUSTER -
( NAME(TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001) -
NONINDEXED -
REUSE -
CONTROLINTERVALSIZE(4096) -
VOLUMES(XTRA02) -
RECORDS(783 783) -
RECORDSIZE(4089 4089) -
SHAREOPTIONS(3 3) ) -
DATA -
( NAME(TESTCAT.DSNDBD.TEMPDB.TMPDBASE.I0001.A001)) -
CATALOG(DSNCAT)
/*
//STEP2 EXEC PGM=IKJEFT01,DYNAMNBR=20
//********************************************************************
//* STOP DSNDB06.SYSDBASE *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM(DSN)
-STOP DB(DSNDB06) SPACENAM(SYSDBASE)
END
/*
//STEP3 EXEC PGM=DSN1COPY,PARM=(CHECK)
//********************************************************************
//* CHECK SYSDBASE AND RUN DSN1COPY *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DSNDB06.SYSDBASE.I0001.A001,DISP=SHR
//SYSUT2 DD DSN=TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001,DISP=SHR
/*
Figure 129. Sample JCL for running DSN1CHKR on a temporary data set
Example 2: Running DSN1CHKR on a table space. In the sample JCL in Figure 130,
STEP1 stops database DSNDB06 with the STOP DATABASE command. STEP2 runs
DSN1CHKR on the target table space; the output from this utility job is identical to
the output in Example 1. STEP3 restarts the database with the START DATABASE
command.
//YOUR JOBCARD
//*
//STEP1 EXEC PGM=IKJEFT01,DYNAMNBR=20
//********************************************************************
//* EXAMPLE 2 *
//* *
//* STOP DSNDB06.SYSDBASE *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM(DSN)
-STOP DB(DSNDB06) SPACENAM(SYSDBASE)
END
/*
Figure 130. Sample JCL for running DSN1CHKR on a stopped table space
DSN1CHKR output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
You can run this utility on the following types of data sets that contain
uncompressed data:
v DB2 full image copy data sets
v VSAM data sets that contain DB2 table spaces
v Sequential data sets that contain DB2 table spaces (for example, DSN1COPY
output)
DSN1COMP does not estimate savings for data sets that contain LOB table spaces.
DSN1COMP
32K DSSIZE ( integer G ) NUMPARTS(integer)
PAGESIZE ( 4K ) LARGE
8K
16K
32K
FREEPAGE(integer) PCTFREE(integer) FULLCOPY REORG ROWLIMIT(integer)
MAXROWS(integer)
|
|
| DSN1COMP
LEAFLIM(integer)
|
|
Option descriptions
To run DSN1COMP, specify one or more of the following parameters on the EXEC
statement. If you specify more than one parameter, separate the parameters with
commas. You can specify parameters in any order.
32K Specifies that the input data set, SYSUT1, has a 32-KB page size. If
you specify this option and the SYSUT1 data set does not have a
32-KB page size, DSN1COMP might produce unpredictable results.
The recommended option for performance is PAGESIZE(32K).
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If you
specify an incorrect page size, DSN1COMP might produce
unpredictable results.
If you omit PAGESIZE, DSN1COMP tries to determine the page
size from the input data set. DB2 issues an error message if
DSN1COMP cannot determine the input page size. This might
happen if the header page is not in the input data set, or if the
page size field in the header page contains an invalid page size.
DSSIZE(integer G)
Specifies the data set size, in gigabytes, for the input data set. If
you omit DSSIZE, DB2 assumes that the input data set size is 2
GB.
integer must match the DSSIZE value that was specified when the
table space was defined.
If you omit DSSIZE and the data set is not the assumed default
size, the results from DSN1COMP are unpredictable.
LARGE Specifies that the input data set is a table space that was defined
with the LARGE option. If you specify LARGE, DB2 assumes that
the data set has a 4-GB boundary.
The recommended method of specifying a table space defined with
LARGE is DSSIZE(4G).
If you omit the LARGE or DSSIZE(4G) option when it is needed,
or if you specify LARGE for a table space that was not defined
with the LARGE option, the results from DSN1COMP are
unpredictable.
NUMPARTS(integer)
Specifies the number of partitions that are associated with the
input data set. Valid specifications range from 1 to 4096. If you
omit NUMPARTS or specify it as 0, DSN1COMP assumes that your
input file is not partitioned. If you specify a number greater than
64, DSN1COMP assumes that the data set is for a partitioned table
space that was defined with the LARGE option, even if the LARGE
keyword is not specified.
DSN1COMP cannot always validate the NUMPARTS parameter. If
you specify it incorrectly, DSN1COMP might produce
unpredictable results.
DSN1COMP terminates and issues message DSN1946I when it
encounters an image copy that contains multiple partitions; a
compression report is issued for the first partition.
FREEPAGE(integer)
Specifies how often to leave a page of free space when calculating
the percentage of saved pages. You must specify an integer in the
range 0 to 255. If you specify 0, no pages are included as free space
when DSN1COMP reports the percentage of pages saved.
Otherwise, one free page is included after every n pages, where n
is the specified integer. The default is 0.
Specify the same value that you specify for the FREEPAGE option
of the SQL statement CREATE TABLESPACE or ALTER
TABLESPACE.
PCTFREE(integer)
Indicates what percentage of each page to leave as free space when
calculating the percentage of pages saved. You must specify an
integer in the range 0 to 99. When calculating the savings,
DSN1COMP allows for at least n percent of free space for each
page, where n is the specified integer. The default is 5.
Specify the same value that you specify for the PCTFREE option of
the SQL statement CREATE TABLESPACE or ALTER
TABLESPACE.
FULLCOPY Specifies that a DB2 full image copy (not a DFSMSdss concurrent
copy) of your data is to be used as input. Omitting this parameter
when the input is a full image copy can cause error messages or
unpredictable results. If this data is partitioned, also specify the
NUMPARTS parameter to identify the number of partitions.
REORG Provides an estimate of compression savings that are comparable
to the savings that the REORG utility would achieve. If this
keyword is not specified, the results are similar to the compression
savings that the LOAD utility would achieve.
ROWLIMIT(integer)
Specifies the maximum number of rows to evaluate in order to
provide the compression estimate. This option prevents
DSN1COMP from examining every row in the input data set. Valid
specifications range from 1 to 99000000.
Use this option to limit the elapsed time and processor time that
DSN1COMP requires. An analysis of the first 5 to 10 MB of a table
space provides a fairly representative sample of the table space for
estimating compression savings. Therefore, specify a ROWLIMIT
value that restricts DSN1COMP to the first 5 to 10 MB of the table
space. For example, if the row length of the table space is 200
bytes, specifying ROWLIMIT(50000) causes DSN1COMP to analyze
approximately 10 MB of the table space.
MAXROWS(integer)
Specifies the maximum number of rows that DSN1COMP is to
consider when calculating the percentage of pages saved. You must
specify an integer in the range 1 to 255. The default is 255.
Specify the same value that you specify for the MAXROWS option
of the SQL statement CREATE TABLESPACE or ALTER
TABLESPACE.
| LEAFLIM(integer)
| Specifies how many index leaf pages should be processed before
| giving a compression estimate.
+-------------------------------------------+
| DBNAME | TSNAME | PARTITION | IPREFIX |
+-------------------------------------------+
1_| DBMC0731 | TPMC0731 | 1 | J |
2_| DBMC0731 | TPMC0731 | 2 | J |
3_| DBMC0731 | TPMC0731 | 3 | J |
4_| DBMC0731 | TPMC0731 | 4 | J |
5_| DBMC0731 | TPMC0731 | 5 | J |
+-------------------------------------------+
Figure 131. Result from query on the SYSTABLEPART catalog table to determine the value
in the IPREFIX column
The preceding output provides the current instance qualifier (J), which can be used
to code the data set name in the DSN1COMP JCL as follows.
//STEP1 EXEC PGM=DSN1COMP
//SYSUT1 DD DSN=vcatname.DSNDBC.DBMC0731.TPMC0731.J0001.A001,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
Environment
Run DSN1COMP as a z/OS job.
You can run DSN1COMP even when the DB2 subsystem is not operational. Before
you use DSN1COMP when the DB2 subsystem is operational, issue the DB2 STOP
DATABASE command. Issuing the STOP DATABASE command ensures that DB2
has not allocated the DB2 data sets.
Authorization required
DSN1COMP does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1COMP job. See “Syntax and
options of the DSN1COMP control statement” on page 775 for DSN1COMP syntax
and option descriptions.
Required data sets: DSN1COMP uses the following data definition (DD)
statements:
SYSPRINT Defines the data set that contains output messages from
DSN1COMP and all hexadecimal dump output.
SYSUT1 Defines the input data set, which can be a sequential data set or a
VSAM data set.
Specify the disposition for this data set as OLD (DISP=OLD) to
ensure that it is not in use by DB2. Specify the disposition for this
data set as SHR (DISP=SHR) only in circumstances where the DB2
STOP DATABASE command does not work.
The requested operation takes place only for the specified data set.
In the following situations, you must specify the correct data set.
v The input data set belongs to a linear table space.
v The index space is larger than 2 GB.
v The table space or index space is a partitioned space.
Recommendation
Before using DSN1COMP, be sure that you know the page size and data set size
(DSSIZE) for the table space. Use the following query on the DB2 catalog to get the
information you need:
SELECT T.CREATOR,
T.NAME,
S.PGSIZE,
CASE S.DSSIZE
WHEN 0 THEN
CASE S.TYPE
WHEN ’ ’ THEN 2097152
WHEN ’I’ THEN 2097152
WHEN ’L’ THEN 4194304
WHEN ’K’ THEN 4194304
ELSE NULL
END
ELSE S.DSSIZE
END
FROM SYSIBM.SYSTABLES T,
SYSIBM.SYSTABLESPACE S
WHERE T.DBNAME=S.DBNAME
AND T.TSNAME=S.NAME;
DSN1COMP does not try to convert data to the latest version before it compresses
rows and derives a savings estimate.
Without the REORG option, DSN1COMP uses the first n rows to fill the
compression dictionary. DSN1COMP processes the remaining rows to provide the
compression estimate. If the number of rows that are used to build the dictionary
is a significant percentage of the data set rows, little savings result. With the
REORG option, DSN1COMP processes all the rows, including those that are used
to build the dictionary, which results in greater compression.
| The DSN1COMP utility determines possible saving estimates at the data set level
| for a unique partition only. Therefore, if DSN1COMP is run against an image copy
| data set that contains several partitions or against a single partition of
| partition-by-growth table spaces (PBGs), the results will be different from what the
| REORG utility would produce.
Example 2: Providing intended free space when estimating space savings. In the
sample statements in Figure 133, STEP1 specifies that DSN1COMP is to report the
estimated space savings that are to be achieved by compressing the data in the
data set that is identified by the SYSUT1 DD statement,
DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A00. When calculating these
estimates, DSN1COMP considers the values passed by the PCTFREE and
FREEPAGE options. The PCTFREE value indicates that 20% of each page is to be
left as free space. The FREEPAGE value indicates that every fifth page is to be left
as free space. This value must be the same value that you specified for the
FREEPAGE option of the SQL statement CREATE TABLESPACE or ALTER
TABLESPACE.
STEP2 specifies that DSN1COMP is to report the estimated space savings that are
to be achieved by compressing the data in the data set that is identified by the
SYSUT1 DD statement, DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A0001. When
providing the compression estimate, DSN1COMP is to evaluate no more than
20 000 rows, as indicated by the ROWLIMIT option. Specifying the maximum
number of rows to evaluate limits the elapsed time and processor time that
DSN1COMP requires.
Figure 133. Example DSN1COMP statements with PCTFREE, FREEPAGE, and ROWLIMIT
options
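The JCL for Figure 133 is not reproduced here. Based on the preceding description, the two steps might look like the following sketch; the data set name is taken from the STEP2 description, and the remaining JCL details are illustrative.
//STEP1    EXEC PGM=DSN1COMP,PARM='PCTFREE(20),FREEPAGE(5)'
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A0001,DISP=OLD
//STEP2    EXEC PGM=DSN1COMP,PARM='ROWLIMIT(20000)'
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A0001,DISP=OLD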
Example 3: Estimating space savings that are comparable to what the REORG
utility would achieve. The statement in Figure 134 on page 783 specifies that
DSN1COMP is to report the estimated space savings that are to be achieved by
compressing the data in the data set that is identified by the SYSUT1 DD
statement, DSNCAT.DSNDBD.DBJT0201.TPJTO201.I0001.A254. This input data set
is a table space that was defined with the LARGE option and has 254 partitions, as
indicated by the DSN1COMP options LARGE and NUMPARTS.
When calculating these estimates, DSN1COMP considers the values passed by the
PCTFREE and FREEPAGE options. The PCTFREE value indicates that 30% of each
page is to be left as free space. The FREEPAGE value indicates that every thirtieth
page is to be left as free space. This value must be the same value that you
specified for the FREEPAGE option of the SQL statement CREATE TABLESPACE
or ALTER TABLESPACE. DSN1COMP is to evaluate no more than 20 000 rows, as
indicated by the ROWLIMIT option.
Figure 134. Example DSN1COMP statement with the LARGE, PCTFREE, FREEPAGE,
NUMPARTS, REORG, and ROWLIMIT options.
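The statement in Figure 134 is not reproduced here. Based on the preceding description, it might look like the following sketch; the data set name is taken from the text, and the remaining JCL details are illustrative.
//STEP1    EXEC PGM=DSN1COMP,
//   PARM=(LARGE,'NUMPARTS(254)','PCTFREE(30)','FREEPAGE(30)',REORG,
//   'ROWLIMIT(20000)')
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DSNCAT.DSNDBD.DBJT0201.TPJTO201.I0001.A254,DISP=OLD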
DSN1COMP output
This section contains examples of output that is generated by the DSN1COMP
utility.
Message DSN1941
If you receive this message, use a data set with more rows as input, or specify a
larger ROWLIMIT.
----------------------------------------------
8 K Page Buffer Size yields a
51 % Reduction in Index Leaf Page Space
The Resulting Index would have approximately
49 % of the original index’s Leaf Page Space
No Bufferpool Space would be unused
----------------------------------------------
----------------------------------------------
16 K Page Buffer Size yields a
74 % Reduction in Index Leaf Page Space
The Resulting Index would have approximately
26 % of the original index’s Leaf Page Space
3 % of Bufferpool Space would be unused to
ensure keys fit into compressed buffers
----------------------------------------------
Note: A DB2 VSAM data set is a single piece of a nonpartitioned table space or
index, or a single partition of a partitioned table space or index. The input
must be a single z/OS sequential or VSAM data set. Concatenation of input
data sets is not supported.
Using DSN1COPY, you can also print hexadecimal dumps of DB2 data sets and
databases, check the validity of data or index pages (including dictionary pages for
compressed data), translate database object identifiers (OBIDs) to enable moving
data sets between different systems, and reset to 0 the log RBA that is recorded in
each index page or data page.
You can use the DSN1COPY utility on LOB table spaces by specifying the LOB
keyword and omitting the SEGMENT and INLCOPY keywords.
DSN1COPY
CHECK 32K FULLCOPY LARGE
PAGESIZE( 4K ) INCRCOPY LOB
8K SEGMENT
16K INLCOPY
32K
DSSIZE ( integer G ) PIECESIZ(integer K ) NUMPARTS(integer)
M
G
(1)
EBCDIC
PRINT
(hexadecimal-constant,hexadecimal-constant) ASCII
UNICODE
VALUE( string ) OBIDXLAT RESET
hexadecimal-constant
Notes:
1 EBCDIC is not necessarily the default if the first page of the input data set is a header page. If
the first page is a header page, DSN1COPY uses the format information in the header page as the
default format.
Option descriptions
To run DSN1COPY, specify one or more of the following parameters on the EXEC
statement. If you specify more than one parameter, separate each parameter by a
comma. You can specify parameters in any order.
CHECK Checks each page from the SYSUT1 data set for validity. The
validity checking operates on one page at a time and does not
include any cross-page checking. If an error is found, a message is
issued describing the type of error, and a dump of the page is sent
| to the SYSPRINT data set. If an unexpected page number is
| encountered, validity checking continues to the end, and a report
| of all unexpected page numbers is printed. If you do not
receive any messages, no errors were found. If more than one error
exists in a given page, the check identifies only the first of the
errors. However, the entire page is dumped. DSN1COPY does not
check system pages for validity.
32K Specifies that the SYSUT1 data set has a 32-KB page size. If you
specify this option and the SYSUT1 data set does not have a 32-KB
page size, DSN1COPY might produce unpredictable results.
The recommended option for performance is PAGESIZE(32K).
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If you
specify an incorrect page size, DSN1COPY might produce
unpredictable results.
If you do not specify the page size, DSN1COPY tries to determine
the page size from the input data set if the first page of the input
data set is a header page. DB2 issues an error message if
DSN1COPY cannot determine the input page size. This might
happen if the header page is not in the input data set, or if the
page size field in the header page contains an invalid page size.
| FULLCOPY Specifies that a DB2 full image copy (not a DFSMSdss concurrent
| copy) of your data is to be used as input. If this data is partitioned,
| specify NUMPARTS to identify the total number of partitions. If
| you specify FULLCOPY without NUMPARTS, DSN1COPY tries to
| determine the page size from the input data set if the first page of
| the input data set is a header page; otherwise, DSN1COPY
| assumes that your input file is not partitioned.
| Specify FULLCOPY when using a full image copy as input.
| Omitting the parameter can cause error messages or unpredictable
| results.
| The FULLCOPY parameter requires SYSUT2 (output data set) to be
| either a DB2 VSAM data set or a DUMMY data set.
INCRCOPY Specifies that an incremental image copy of the data is to be used
as input. DSN1COPY with the INCRCOPY parameter updates
existing data sets; do not redefine the existing data sets.
INCRCOPY requires that the output data set (SYSUT2) be a DB2
VSAM data set.
Before you apply an incremental image copy to your data set, you
must first apply a full image copy to the data set by using the
FULLCOPY parameter. Make sure that you apply the full image
copy in a separate execution step because you receive an error
message if you specify both the FULLCOPY and the INCRCOPY
parameters in the same step. Then, apply each incremental image
copy in a separate step, starting with the oldest incremental image
copy.
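For illustration only, a job along the following lines applies a full image copy and then the oldest incremental image copy to the same target data set in separate steps; the image copy data set names and the database, table space, and target data set names are placeholders:
//* STEP 1: RESTORE THE FULL IMAGE COPY TO THE TARGET DATA SET
//APPLYFC EXEC PGM=DSN1COPY,PARM='FULLCOPY'
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=HLQ.DB1.TS1.FULLCOPY,DISP=OLD
//SYSUT2   DD DSN=DSNCAT.DSNDBD.DB1.TS1.I0001.A001,DISP=OLD
//* STEP 2: APPLY THE OLDEST INCREMENTAL IMAGE COPY TO THE SAME TARGET
//APPLYIC EXEC PGM=DSN1COPY,PARM='INCRCOPY'
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=HLQ.DB1.TS1.INCRCOPY1,DISP=OLD
//SYSUT2   DD DSN=DSNCAT.DSNDBD.DB1.TS1.I0001.A001,DISP=OLD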
Specifying neither FULLCOPY nor INCRCOPY implies that the
input is not an image copy data set. Therefore, only a single output
data set is used.
SEGMENT Specifies that you want to use a segmented table space as input to
DSN1COPY. Pages with all zeros in the table space are copied, but
no error messages are issued. You cannot specify FULLCOPY or
INCRCOPY if you specify SEGMENT.
If you are using DSN1COPY with the OBIDXLAT option to copy a
DB2 data set to another DB2 data set, the source and target table
spaces must have the same SEGSIZE attribute.
You cannot specify the SEGMENT option with the LOB parameter.
| INLCOPY Specifies that the input data is an inline copy data set. The
| INLCOPY parameter requires SYSUT2 (output data set) to be either
| a VSAM data set or a DUMMY data set.
| You cannot specify the INLCOPY option with the LOB parameter.
DSSIZE(integer G)
Specifies the data set size, in gigabytes, for the input data set. If
you omit the DSSIZE keyword or the LARGE keyword,
DSN1COPY assumes the appropriate default input data set size
that is listed in Table 148.
Table 148. Default input data set sizes
Object                                                   Default input data set size (in GB)
Non-LOB linear table space or index                      2
LOB                                                      4
Partitioned table space or index with NUMPARTS = 1-16    4
Partitioned table space or index with NUMPARTS = 17-32   2
Partitioned table space or index with NUMPARTS = 33-64   1
Partitioned table space or index with NUMPARTS > 64      4
integer must match the DSSIZE value that was specified when the
table space was defined.
If you omit DSSIZE and the data set is not the assumed default
size, the results from DSN1COPY are unpredictable.
If you specify DSSIZE, you cannot specify LARGE.
LARGE Specifies that the input data set is a table space that was defined
with the LARGE option, or an index on such a table space. If you
specify the LARGE keyword, DB2 assumes that the data set has a
4-GB boundary. The recommended method of specifying a table
space that was defined with the LARGE option is DSSIZE(4G).
If you omit the LARGE or DSSIZE(4G) option when it is needed,
or if you specify LARGE for a table space that was not defined
with the LARGE option, the results from DSN1COPY are
unpredictable.
If you specify LARGE, you cannot specify LOB or DSSIZE.
LOB Specifies that the SYSUT1 data set is a LOB table space. Empty pages
in the table space are copied, but no error messages are issued. You
cannot specify the SEGMENT and INLCOPY options with the LOB
parameter.
DB2 attempts to determine if the input data set is a LOB data set.
If you specify the LOB option but the data set is not a LOB data
set, or if you omit the LOB option for a data set that is a LOB data
set, DB2 issues an error message and DSN1COPY terminates.
If you specify LOB, you cannot specify LARGE.
NUMPARTS(integer)
Specifies the total number of partitions that are associated with the
data set that you are using as input or whose page range you are
printing. When you use DSN1COPY to copy a data-partitioned
secondary index, specify the number of partitions in the index.
integer can range from 1 to 4096.
DSN1COPY uses this value to calculate the size of its output data
sets and to help locate the first page in a range that is to be
printed. If you omit NUMPARTS or specify it as 0, DSN1COPY
obtains the NUMPARTS value from the header page if possible;
otherwise, DSN1COPY assumes that your input is not
partitioned. If you specify a number greater than 64, DSN1COPY
assumes that the data set is for a partitioned table space that was
defined with the LARGE option, even if the LARGE keyword is
not specified for DSN1COPY.
If you specify the number of partitions incorrectly, DSN1COPY can
copy the data to the wrong data sets, return an error message
indicating that an unexpected page number was encountered, or
fail to allocate the data sets correctly. In the last case, a VSAM PUT
error might be detected, resulting in a request parameter list (RPL)
error code of 24.
PRINT(hexadecimal-constant,hexadecimal-constant)
Causes the SYSUT1 data set to be printed in hexadecimal format
on the SYSPRINT data set. You can specify the PRINT parameter
with or without the page range specifications (hexadecimal-
constant,hexadecimal-constant). If you do not specify a range, all
pages of the SYSUT1 data set are printed. If you want to limit the range of
pages that are printed, indicate the beginning and ending page. If
you want to print a single page, supply only that page number. In
either case, your range specifications must be from one to eight
hexadecimal characters in length.
The following example shows how you code the PRINT parameter
if you want to begin printing at page X'2F0' and stop at page
X'35C':
PRINT(2F0,35C)
Because the CHECK and RESET options and the copy function run
independently of the PRINT range, these options apply to the
entire input file, regardless of whether a range of pages is being
printed.
You can indicate the format of the row data in the PRINT output
by specifying EBCDIC, ASCII, or UNICODE. For an example of the
output that is affected by these options, see the DSN1PRNT
FORMAT output in Figure 145 on page 832.
EBCDIC
Indicates that the row data in the PRINT output is to be
displayed in EBCDIC. The default is EBCDIC if the first page
of the input data set is not a header page.
If the first page is a header page, DSN1COPY uses the format
information in the header page as the default format. However,
if you specify EBCDIC, ASCII, or UNICODE, that format
overrides the format information in the header page. The
unformatted header page dump is always displayed in
EBCDIC, because most of the fields are in EBCDIC.
ASCII
Indicates that the row data in the PRINT output is to be
displayed in ASCII. Specify ASCII when printing table spaces
that contain ASCII data.
UNICODE
Indicates that the row data in the PRINT output is to be
displayed in Unicode. Specify UNICODE when printing table
spaces that contain Unicode data.
PIECESIZ(integer)
Specifies the maximum piece size (data set size) for nonpartitioned
indexes. The value that you specify must match the value that was
specified when the nonpartitioning index was created or altered.
The defaults for PIECESIZ are 2G (2 GB) for indexes that are
backed by non-large table spaces and 4G (4 GB) for indexes that
are backed by table spaces that were defined with the LARGE
option. This option is required if the piece size is not one of the
default values. If PIECESIZ is omitted and the index is backed by a
table space that was defined with the LARGE option, the LARGE
option is required for DSN1COPY.
The subsequent keyword K, M, or G indicates the unit of the value
that is specified in integer.
K Indicates that the integer value is to be multiplied by 1 KB
to specify the maximum piece size in bytes. integer must be
either 256 or 512.
M Indicates that the integer value is to be multiplied by 1 MB
to specify the maximum piece size in bytes. integer must be
a power of two, between 1 and 512.
G Indicates that the integer value is to be multiplied by 1 GB
to specify the maximum piece size in bytes. integer must be
1, 2, or 4.
Attention: Do not use DSN1COPY in place of the COPY utility as your method of
backup and recovery. Improper use of DSN1COPY can result in unrecoverable
damage and loss of data.
Environment
Execute DSN1COPY as a z/OS job when the DB2 subsystem is either active or not
active.
If you execute DSN1COPY when DB2 is active, use the following procedure:
1. Start the table space as read-only by using START DATABASE.
2. Run the QUIESCE utility with the WRITE (YES) option to externalize all data
pages and index pages.
3. Run DSN1COPY with DISP=SHR on the data definition (DD) statement.
4. Start the table space as read-write by using START DATABASE to return to
normal operations.
Authorization required
DSN1COPY does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1COPY job. See “Syntax and
options of the DSN1COPY control statement ” on page 786 for DSN1COPY syntax
and option descriptions.
To obtain the names, DBIDs, PSIDs, ISOBIDs, and OBIDs, run the
DSNTEP2 sample application on both the source and target
systems. The following SQL statements yield the preceding
information.
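Catalog queries along the following lines return these identifiers; the names in the WHERE clauses are placeholders for your own objects:
SELECT NAME, DBID, PSID
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'database-name' AND NAME = 'table-space-name';
SELECT NAME, TSNAME, DBID, OBID
  FROM SYSIBM.SYSTABLES
  WHERE CREATOR = 'creator-name' AND NAME = 'table-name';
SELECT NAME, DBID, OBID, ISOBID
  FROM SYSIBM.SYSINDEXES
  WHERE CREATOR = 'creator-name' AND NAME = 'index-name';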
DB2 allows input of only one DSN1COPY data set. DB2 does not permit the input
of concatenated data sets. For a table space that consists of multiple data sets,
ensure that you specify the correct data set. For example, if you specify the
CHECK option to validate pages of a partitioned table space’s second partition,
code the second data set of the table space for SYSUT1.
v Method 2:
1. Use your existing DB2 data set as the SYSUT1 specification, creating a new
VSAM data set for SYSUT2.
2. After completion of the reset operation, delete the data set that you specified
as SYSUT1, and rename the SYSUT2 data set. Give SYSUT2 the name of the
data set that you just deleted.
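For illustration only, an IDCAMS step such as the following performs the delete and the rename; every data set name shown is a placeholder (the .DATA component name assumes the IDCAMS default), and you should substitute the names that apply to your objects:
//RENAME   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
  /* DELETE THE ORIGINAL DB2 DATA SET THAT WAS USED AS SYSUT1 */
  DELETE 'DSNCAT.DSNDBC.DB1.TS1.I0001.A001' CLUSTER
  /* RENAME THE NEW SYSUT2 CLUSTER AND ITS DATA COMPONENT     */
  ALTER 'HLQ.DSN1COPY.SYSUT2' -
        NEWNAME('DSNCAT.DSNDBC.DB1.TS1.I0001.A001')
  ALTER 'HLQ.DSN1COPY.SYSUT2.DATA' -
        NEWNAME('DSNCAT.DSNDBD.DB1.TS1.I0001.A001')
/*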
If you use full, incremental, or inline copies as input, specify the SYSUT2 data sets
according to the following guidelines:
v If SYSUT1 is an image copy of a single partition, SYSUT2 must list the data set
name for that partition of the table space. Specify the NUMPARTS parameter to
identify the number of partitions in the entire table space.
| v If SYSUT1 is an image copy of an entire partitioned table space, SYSUT2 must
list the name of the table space’s first data set. Important: All data sets in the
partitioned table space must use the same fifth-level qualifier, I0001 or J0001 (can
be I0002 or J0002 for clone tables), before DSN1COPY can run successfully on a
partitioned table space. DSN1COPY allocates all of the target data sets. However,
you must previously define the target data sets by using IDCAMS. Specify the
NUMPARTS parameter to identify the number of partitions in the whole table
space.
v If SYSUT1 is an image copy of a nonpartitioned data set, SYSUT2 should be
the name of the actual output data set. Do not specify the NUMPARTS
parameter because this parameter is only for partitioned table spaces.
v If SYSUT1 is an image copy of all data sets in a linear table space with
multiple data sets, SYSUT2 should be the name of its first data set. DSN1COPY
allocates all target data sets. However, you must previously define the target
data sets by using IDCAMS.
Performing these steps resets the data set and causes normal extensions through
DB2.
Restrictions
This section contains restrictions that you should know about when running
DSN1COPY.
DSN1COPY does not alter data set structure. For example, DSN1COPY does not
copy a partitioned or segmented table space into a simple table space. The output
data set is a page-for-page copy of the input data set. If the intended use of
DSN1COPY is to move or restore data, ensure that definitions for the source and
target table spaces, tables, and indexes are identical. Otherwise, unpredictable
results can occur.
DSN1COPY cannot copy DB2 recovery log data sets. The format of a DB2 log page
is different from that of a table or index page. If you try to use DSN1COPY to
recover log data sets, DSN1COPY will abnormally terminate.
| For a compressed table space, DSN1COPY does not reset the dictionary version for
| an inline image copy, or for an incremental image copy that was created with the
| SYSTEMPAGES=YES COPY utility option. To reset the dictionary version for an
| inline image copy, use the inline image copy as input to DSN1COPY with a VSAM
| intermediate data set as output. This intermediate data set can then be used as
| input to DSN1COPY RESET to copy the intermediate data set to the real target
| data set.
Recommendations
This section contains recommendations that you should know about when running
the DSN1COPY utility.
| To determine the source table space and target table space row format, run the
| following query against your DB2 catalog:
| SELECT DBNAME, TSNAME, PARTITION, FORMAT
| FROM SYSIBM.SYSTABLEPART
| WHERE (DBNAME = ’source-database-name’ AND TSNAME=’source-table-space-name’)
| OR (DBNAME = ’target-database-name’ AND TSNAME=’target-table-space-name’)
| If the FORMAT column has a value of ’R’, then the table space or partition is in
| RRF (reordered row format). If the FORMAT column has a blank value, then the
| table space or partition is in BRF (basic row format).
For more information about versions and how DB2 uses them, see Part 2 of DB2
Administration Guide.
You must run a CHECK utility job on the table space that is involved to ensure
that no inconsistencies exist between data and indexes on that data:
v Before using DSN1COPY to save critical data that is indexed
v After using DSN1COPY to restore critical data that is indexed
The CHECK utility performs validity checking between pages.
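For example, utility control statements along the following lines run those checks; the database and table space names shown are the DB2 sample names and are placeholders for your own objects:
CHECK INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E
CHECK DATA TABLESPACE DSN8D91A.DSN8S91E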
| To protect against invalidating the OBIDs, specify the OBIDXLAT parameter for
| DSN1COPY. The OBIDXLAT parameter translates OBID, DBID, PSID, or ISOBID
| before DSN1COPY copies the data.
| Use image copies that were created with SHRLEVEL REFERENCE as input to
| DSN1COPY. Using the FULLCOPY parameter ensures that the data
| that is contained in your image copies is consistent. DSN1COPY accepts an index
| image copy as input when you specify the FULLCOPY option. If you want to use
| inline image copies as input to DSN1COPY, you must produce those image copies
| by using the REORG utility or LOAD utility.
Do not specify the RESET parameter for page sets that are in group buffer pool
RECOVER-pending (GRECP) status.
v If you do not have an image copy of the index, use the REBUILD INDEX utility,
which reconstructs the indexes from the data. For more information about the
REBUILD INDEX utility, refer to Chapter 22, “REBUILD INDEX,” on page 355.
The MODIFY utility might have removed the row in SYSIBM.SYSCOPY. If the row
has been removed, and if the image copy is a full image copy with SHRLEVEL
REFERENCE, use DSN1COPY to restore the table space or data set.
You can use DSN1COPY to restore an object from an incremental image copy, but you
must first restore the previous full image copy and any intermediate incremental image
copies. These actions ensure data integrity. You are responsible for providing the
correct sequence of image copies. DB2 cannot help ensure the proper sequence.
| You can use the DSN1COPY utility to restore a partition or an entire table space
| for a partition-by-growth table space. The total number of partitions in a
| DSN1COPY might not be consistent with the number of partitions defined on the
| current table space. To avoid residual data, delete data in the excess partitions
| from the table space before you apply the DSN1COPY utility.
If you use DSN1COPY for point-in-time recovery, the table space is not recoverable
with the RECOVER utility. Because DSN1COPY executes outside of DB2’s control,
DB2 is not aware that you recovered to a point in time. After you use DSN1COPY
to recover the affected table space to a point in time, perform the following
steps:
1. Remove old image copies by using MODIFY AGE.
2. Create one or more full image copies by using SHRLEVEL REFERENCE.
When you use DSN1COPY for printing, you must specify the PRINT parameter.
The requested operation takes place only for the specified data set. If the input
data set belongs to a linear table space or index space that is larger than 2 GB, or
to a partitioned table space or partitioned index, specify the correct data set. For
example, to print a page range in the second partition of a four-partition table
space, specify NUMPARTS(4) and the data set name of the second data set in the
VSAM group (DSN=...A002).
To print a full image copy data set (rather than recovering a table space), specify a
DUMMY SYSUT2 DD statement, and specify the FULLCOPY parameter.
Be careful when you copy a table that contains an identity column from one DB2
subsystem to another:
1. Stop the table space on the source subsystem.
2. Issue a SELECT statement to query the SYSIBM.SYSSEQUENCES entry that
corresponds to the identity column for this table on the source subsystem. Add
the INCREMENT value to the MAXASSIGNEDVAL to determine the next value
(nv) for the identity column.
3. Create the table on the target subsystem. On the identity column specification,
specify nv for the START WITH value, and ensure that all of the other identity
column attributes are the same as for the source table.
4. Stop the table space on the target subsystem.
5. Copy the data by using DSN1COPY.
6. Start the table space on the source subsystem for read-write access.
7. Start the table space on the target subsystem for read-write access.
Example 1: Checking input data set before copying. The following statement
specifies that the DSN1COPY utility is to copy the data set that is identified by the
SYSUT1 DD statement to the data set that is identified by the SYSUT2 DD
statement. Before DSN1COPY copies this data, the utility is to check the validity of
the input data set.
//RUNCOPY EXEC PGM=DSN1COPY,PARM=’CHECK’
//* COPY VSAM TO SEQUENTIAL AND CHECK PAGES
//STEPLIB DD DSN=PDS CONTAINING DSN1COPY
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DSNDB01.SYSUTILX.I0001.A001,DISP=OLD
//SYSUT2 DD DSN=TAPE.DS,UNIT=TAPE,DISP=(NEW,KEEP),VOL=SER=UTLBAK
Example 2: Translating the DB2 internal identifiers. The statement in Figure 138 on
page 803 specifies that DSN1COPY is to copy the data set that is identified by the
SYSUT1 DD statement to the data set that is identified by the SYSUT2 DD
statement. The OBIDXLAT option specifies that DSN1COPY is to translate the
OBIDs before the data set is copied. The OBIDs are provided as input on the
SYSXLAT DD statement. Because the OBIDXLAT option is specified, DSN1COPY
also checks the validity of the input data set, even though the CHECK option is
not specified.
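A sketch of such a job follows; the data set names are placeholders, and the SYSXLAT records, which pair each source identifier with its target identifier (DBIDs on the first record, PSIDs or ISOBIDs on the second, and OBIDs on the records that follow), contain illustrative values only:
//XLATCOPY EXEC PGM=DSN1COPY,PARM='OBIDXLAT'
//* TRANSLATE OBIDS WHILE COPYING FROM THE SOURCE TO THE TARGET DATA SET
//STEPLIB  DD DSN=PDS CONTAINING DSN1COPY
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSN=DSNCAT.DSNDBD.DBSRC001.TSSRC001.I0001.A001,DISP=OLD
//SYSUT2   DD DSN=DSNCAT.DSNDBD.DBTGT001.TSTGT001.I0001.A001,DISP=OLD
//SYSXLAT  DD *
260,280
2,10
3,55
/*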
| Example 7: Using DSN1COPY with UTS table spaces. The following statements
| specify that DSN1COPY is to copy a UTS table space VSAM data set to a sequential
| data set.
| //******************************************************************
| //* COMMENT: RUN DSN1COPY FOR THE TABLESPACE Part 1
| //******************************************************************
| //STEP1 EXEC PGM=DSN1COPY,
| // PARM=’SEGMENT,RESET’
| //SYSUDUMP DD SYSOUT=A
| //SYSPRINT DD SYSOUT=A
| //SYSOUT DD SYSOUT=A
| //SYSABEND DD SYSOUT=A
| //SYSUT1 DD DSN=DSNCAT.DSNDBD.DBKQBG01.TPKQBG01.I0001.A001,DISP=SHR
| //SYSUT2 DD DSN=JUKQU2BG.DSN1COPY.D1P1,DISP=(NEW,CATLG,CATLG),
| // VOL=SER=SCR03,UNIT=SYSDA,SPACE=(TRK,(55,1))
| /*
| //******************************************************************
| //* COMMENT: RUN DSN1COPY FOR THE TABLESPACE Part 2
| //******************************************************************
| //STEP2 EXEC PGM=DSN1COPY,
| // PARM=’SEGMENT,RESET’
| //SYSUDUMP DD SYSOUT=A
| //SYSPRINT DD SYSOUT=A
| //SYSOUT DD SYSOUT=A
| //SYSABEND DD SYSOUT=A
| //SYSUT1 DD DSN=DSNCAT.DSNDBD.DBKQBG01.TPKQBG01.I0001.A002,DISP=SHR
| //SYSUT2 DD DSN=JUKQU2BG.DSN1COPY.D1P2,DISP=(NEW,CATLG,CATLG),
| // VOL=SER=SCR03,UNIT=SYSDA,SPACE=(TRK,(55,1))
| /*
|
DSN1COPY output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
You can specify the range of the log to process and select criteria within the range
to limit the records in the detail report. For example, you can specify:
v One or more units of recovery that are identified by URID
v A single database
By specifying a URID and a database, you can display recovery log records that
correspond to the use of one database by a single unit of recovery.
| DSN1LOGP can print the log records for both base and clone table objects.
DSN1LOGP cannot read logs that have been compressed by DFSMS. (This
compression requires extended format data sets.)
This portion of the DSN1LOGP syntax consists of the following optional parameters;
alternatives within a group are separated by a vertical bar.
  [SYSCOPY(NO) | SYSCOPY(YES)]
  [DBID(hex-constant)]  [OBID(hex-constant)]  [PAGE(hex-constant)]
  [RID(hex-constant)]  [URID(hex-constant)]  [LUWID(luwid)]
  [TYPE(hex-constant) | SUBTYPE(hex-constant) [value/offset statement]]
  [SUMMARY(NO) | SUMMARY(YES | ONLY)]  [CHECK(DATA)]  [FILTER]
value/offset statement:
  VALUE(hex-constant) OFFSET(hex-constant)
Option descriptions
To execute DSN1LOGP, construct a batch job. The utility name, DSN1LOGP, should
appear on the EXEC statement, as shown in “Sample DSN1LOGP control
statements” on page 818.
If you specify more than one keyword, separate them by commas. You can specify
the keywords in any order. You can include blanks between keywords, and also
between the keywords and the corresponding values.
RBASTART(hex-constant)
Specifies the hexadecimal log RBA from which to begin reading. If
the value does not match the beginning RBA of one of the log
records, DSN1LOGP begins reading at the beginning RBA of the
next record. For any given job, specify this keyword only once.
Alternative spellings: STARTRBA, ST.
hex-constant is a hexadecimal value consisting of 1 to 12 characters
(6 bytes); leading zeros are not required.
The default is 0.
| DB2 issues a warning if the value is not within the range of log
| records that is covered by the input log record information.
RBAEND(hex-constant)
Specifies the last valid hexadecimal log RBA to extract. If the
specified RBA is in the middle of a log record, DSN1LOGP
continues reading the log in an attempt to return a complete log
record.
To read to the last valid RBA in the log, specify
RBAEND(FFFFFFFFFFFF). For any given job, specify this keyword
only once. Alternative spellings: ENDRBA, EN.
hex-constant is a hexadecimal value consisting of 1 to 12 characters
(6 bytes); leading zeros are not required.
The default is FFFFFFFFFFFF.
| DB2 issues a warning if the value is not within the range of log
| records that is covered by the input log record information.
RBAEND can be specified only if RBASTART is specified.
LRSNSTART(hex-constant)
Specifies the log record sequence number (LRSN) from which to
begin the log scan. DSN1LOGP starts its processing on the first log
record that contains an LRSN value that is greater than or equal to
the LRSN value that is specified on LRSNSTART. The default
LRSN is the LRSN at the beginning of the data set. Alternative
spellings: STARTLRSN, STRTLRSN, and LRSNSTRT.
For any given job, specify this keyword only once.
You must specify this keyword to search the member BSDSs and to
locate the log data sets from more than one DB2 subsystem. You
can specify either the LRSNSTART keyword or the RBASTART
keyword to search the BSDS of a single DB2 subsystem and to
locate the log data sets.
| DB2 issues a warning if the value is not within the range of log
| records that is covered by the input log record information.
LRSNEND(hex-constant)
Specifies the LRSN value of the last log record that is to be
scanned. When LRSNSTART is specified, the default is
X'FFFFFFFFFFFF'. Otherwise, it is the end of the data set.
Alternative spelling: ENDLRSN.
contains the OBID of a file descriptor; don’t confuse this with the
PSID, which is the information that you must include when you
execute DSN1LOGP.
Whenever DB2 makes a change to an index, the log record that
describes the change identifies the database (by DBID) and the
index space (by index space OBID or ISOBID). You can find the
ISOBID for an index space in the column named ISOBID in the
SYSIBM.SYSINDEXES catalog table.
You can also find a column named OBID in the
SYSIBM.SYSINDEXES catalog table. This column actually contains
the identifier of a fan set descriptor; don’t confuse this with the
ISOBID, which is the information that you must include when you
execute DSN1LOGP.
When you select either the PSID or the ISOBID from a catalog
table, the value is displayed in decimal format. Use the SQL HEX
function in your select statement to convert them to hexadecimal.
For any given DSN1LOGP job, use this keyword only once. If you
specify OBID, you must also specify DBID.
PAGE(hex-constant)
Specifies a hexadecimal page number. When data or an index is
changed, a recovery log record is written to the log, identifying the
object identifier and the page number of the changed data page or
index page. Specifying a page number limits the search to a single
page; otherwise, all pages for a given combination of DBID and
OBID are extracted. The log output also contains page set control
log records for the specified DBID and OBID, and system event log
records, unless DATAONLY(YES) is also specified.
hex-constant is a hexadecimal value consisting of a maximum of
eight characters.
You can specify a maximum of 100 PAGE keywords in any given
DSN1LOGP job. You must also specify the DBID and OBID
keywords that correspond to those pages.
The PAGE and RID keywords are mutually exclusive.
RID(hex-constant)
Specifies a record identifier, which is a hexadecimal value
consisting of 10 characters, with the first eight characters
representing the page number and the last two characters
representing the page ID map entry number. The option limits the
log records that are extracted to those that are associated with that
particular record. The log records that are extracted include not
only those that are directly associated with the RID, such as insert
and delete, but also the control records that are associated with the
DBID and OBID specifications, such as page set open, page set
close, set write, reset write, page set write, data set open, and data
set close.
You can specify a maximum of 40 RID keywords in any given
DSN1LOGP job. You must also specify the DBID and OBID
keywords that correspond to the specified records.
The PAGE and RID keywords are mutually exclusive.
URID(hex-constant)
Specifies a hexadecimal unit of recovery identifier (URID). Changes
to data and indexes occur in the context of a DB2 unit of recovery,
which is identified on the log by a BEGIN UR record. In the
summary DSN1LOGP report, the URID is listed in the STARTRBA
field in message DSN1162I. In the detail DSN1LOGP report, look
for the subtype of BEGIN UR; the URID is listed in the URID field.
Using the log RBA of that record as the URID value limits the
extraction of information from the DB2 log to that unit of recovery.
hex-constant is a hexadecimal value consisting of 1 to 12 characters
(6 bytes). Leading zeros are not required.
You can specify a maximum of 10 URID keywords in any given
DSN1LOGP job.
LUWID(luwid) Specifies up to 10 LUWIDs that DSN1LOGP is to include
information about in the summary report.
luwid consists of three parts: an LU network name, an LUW
instance number, and a commit sequence number. If you supply
the first two parts, the summary report includes an entry for each
commit that is performed in the logical unit of work (within the
search range). If you supply all three parts, the summary report
includes an entry for only that LUWID.
The LU network name consists of a one- to eight-character network
ID, a period, and a one- to eight-character network LU name. The
LUW instance number consists of a period, followed by 12
hexadecimal characters. The last element of the LUWID is the
commit sequence number of 4 hexadecimal characters, preceded by
a period.
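As a purely illustrative example of this format, with hypothetical network and LU names, a complete LUWID specification might look like the following:
LUWID(NET1.DB2LU1.A1B2C3D4E5F6.0001)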
TYPE(hex-constant)
Limits the log records that are extracted to records of a specified
type. The TYPE and SUBTYPE options are mutually exclusive.
hex-constant indicates the type, as follows:
Constant Description
2 Page set control record
4 SYSCOPY utility record
10 System event record
20 UR control record
100 Checkpoint record
200 UR-UNDO record
400 UR-REDO record
800 Archive quiesce record
1000 to 8000 Assigned by the resource manager
SUBTYPE(hex-constant)
Restricts formatting to a particular subtype of unit of recovery
undo and redo log records (types 200 and 400). The TYPE and
SUBTYPE options are mutually exclusive.
hex-constant indicates the subtype, as follows:
Constant
Description
1 Update data page
2 Format page or update space map
3 Update space map bits
4 Update to index space map
5 Update to index page
6 DBA table update log record
7 Checkpoint DBA table log record
9 DBD virtual memory copy
A Exclusive lock on page set partition or DBD
B Format file page set
C Format index page set
F Update by repair (first half if 32 KB)
10 Update by repair (second half if 32 KB)
11 Allocate or deallocate a segment entry
12 Undo/redo log record for modified page or redo log record
for formatted page
14 Savepoint
15 Other DB2 component log records that are written for
RMID 14
17 Checkpoint record of modified page set
19 Type 2 index update
1A Type 2 index undo/redo or redo log record
1B Type 2 index change notification log record
1C Type 2 index space map update
1D DBET log record with exception data
1E DBET log record with LPL/GRECP data
| 65 Change Data Capture diagnostic log
81 Index dummy compensation log record
82 START DATABASE ACCESS (FORCE) log record
The VALUE and OFFSET options must be used together. You can
specify a maximum of 10 VALUE-OFFSET pairs. The SUBTYPE
parameter is required when using the VALUE and OFFSET
options.
VALUE(hex-constant)
Specifies a value that must appear in a log record that is to be
extracted.
hex-constant is a hexadecimal value consisting of a maximum of
64 characters and must be an even number of characters.
Environment
DSN1LOGP runs as a batch z/OS job.
DSN1LOGP runs on archive data sets, but not active data sets, when DB2 is
running.
Authorization required
DSN1LOGP does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1LOGP job. See “Syntax and
options of the DSN1LOGP control statement” on page 808 for DSN1LOGP syntax
and option descriptions.
DSN1LOGP identifies the recovery log by DD statements that are described in the
stand-alone log services. For a description of these services, see Appendix C of
DB2 Administration Guide.
| Data sharing requirements: When selecting log records from more than one DB2
| subsystem, you must use all or one of the following DD statements to locate the
| log data sets:
| GROUP
| MxxBSDS
| MxxARCHV
| MxxACTn
If you perform archiving on tape, the first letter of the lowest-level qualifier varies
for both the first and second data sets. The first letter of the first data set is B (for
BSDS), and the first letter of the second data set is A (for archive). Hence, the
archive log data set names all end in Axxxxxxx, and the DD statement identifies
each of them as the second data set on the corresponding tape:
LABEL=(2,SL)
When reading archive log data sets on tape (or copies of active log data sets on
tape), add one or more of the following Job Entry Subsystem (JES) statements:
Alternatively, submit the job to a z/OS initiator that your operations center has
established for exclusive use by jobs that require tape mounts. Specify the initiator
class by using the CLASS parameter on the JOB statement, in both JES2 and JES3
environments.
For additional information on these options, refer to z/OS MVS JCL User's Guide or
z/OS MVS JCL Reference.
You can think of the DB2 recovery log as a large sequential file. When recovery log
records are written, they are written to the end of the log. A log RBA is the address
of a byte on the log. Because the recovery log is larger than a single data set, the
recovery log is physically stored on many data sets. DB2 records the RBA ranges
and their corresponding data sets in the BSDS. To determine which data set
contains a specific RBA, read the information about the DSNJU004 utility under
Chapter 37, “DSNJU004 (print log map),” on page 753 and see Part 4 of DB2
Administration Guide. During normal DB2 operation, messages are issued that
include information about log RBAs.
Example 2: Extracting information from the active log when the BSDS is not
available. The following example shows how to extract the information from the
active log when the BSDS is not available. The extraction includes log records that
apply to the table space or index space that is identified by the DBID of X'10A' and
the OBID of X'1F'. The only information that is extracted is information that relates
to page numbers X'3B' and X'8C', as identified by the PAGE options. You can omit
beginning and ending RBA values for ACTIVEn or ARCHIVE DD statements
because the DSN1LOGP search includes all specified ACTIVEn DD statements. The
DD statements ACTIVE1, ACTIVE2, and ACTIVE3 specify the log data sets in
ascending log RBA range. Use the DSNJU004 utility to determine what the log
RBA range is for each active log data set. If the BSDS is not available and you
cannot determine the ascending log RBA order of the data sets, you must run each
log data set individually.
//STEP1 EXEC PGM=DSN1LOGP
//STEPLIB DD DSN=PDS containing DSN1LOGP
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//ACTIVE1 DD DSN=DSNCAT.LOGCOPY1.DS02,DISP=SHR RBA X’A000’ - X’BFFF’
//ACTIVE2 DD DSN=DSNCAT.LOGCOPY1.DS03,DISP=SHR RBA X’C000’ - X’EFFF’
//ACTIVE3 DD DSN=DSNCAT.LOGCOPY1.DS01,DISP=SHR RBA X’F000’ - X’12FFF’
//SYSIN DD *
DBID (10A) OBID(1F) PAGE(3B) PAGE(8C)
/*
Example 3: Extracting information from the archive log when the BSDS is not
available. The example in Figure 140 shows how to extract the information from
archive logs when the BSDS is not available. The extraction includes log records
that apply to a single unit of recovery (whose URID is X'61F321'). Because the
BEGIN UR is the first record for the unit of recovery and is at X'61F321', the
beginning RBA is specified to indicate that it is the first RBA in the range from
which to extract recovery log records. Also, because no ending RBA value is
specified, all specified archive logs are scanned for qualifying log records. The
specification of DBID(4) limits the scan to changes that the specified unit of
recovery made to all table spaces and index spaces in the database whose DBID is
X'4'.
Figure 140. Example DSN1LOGP statement with RBASTART and URID options
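A sketch of such a job follows; the archive log data set names, tape volumes, and unit name are placeholders:
//STEP1    EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=PDS containing DSN1LOGP
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//ARCHIVE  DD DSN=DSNCAT.ARCHLOG1.A0000037,UNIT=TAPE,VOL=SER=T10067,
//            DISP=(OLD,KEEP),LABEL=(2,SL)
//         DD DSN=DSNCAT.ARCHLOG1.A0000039,UNIT=TAPE,VOL=SER=T30067,
//            DISP=(OLD,KEEP),LABEL=(2,SL)
//SYSIN    DD *
RBASTART (61F321) URID (61F321) DBID (4)
/*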
The following example produces both a detail and a summary report that uses the
BSDS to identify the log data sets. The summary report summarizes all recovery
log information within the RBASTART and RBAEND specifications. You cannot
limit the output of the summary report with any of the other options, except by
using the FILTER option with a URID or LUWID specification. How you specify
RBASTART and RBAEND depends on whether a BSDS is used.
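A sketch of such a job follows; the BSDS data set name and the RBA range are placeholders:
//STEP1    EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=PDS containing DSN1LOGP
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//BSDS     DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSIN    DD *
RBASTART (AF000) RBAEND (B3000) SUMMARY (YES)
/*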
DSN1LOGP output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
| For data sharing, you might see multiple log records with the same LRSN value on
| a single DB2 data sharing member.
Figure 141 on page 822 shows a sample of the summary report. Figure 142 on page
823 shows a sample of the detail report. Figure 143 on page 825 shows a sample of
data propagation information from a summary report. A description of the output
precedes each sample.
The first section lists all completed units of recovery (URs) and checkpoints within
the range of the log that is scanned. Events are listed chronologically, with URs
listed according to when they were completed and checkpoints listed according to
when the end of the checkpoint was processed. The page sets that are changed by
each completed UR are listed. If a log record that is associated with a UR is
unavailable, the attribute INFO=PARTIAL is displayed for the UR. Otherwise, the
UR is marked INFO=COMPLETE. A log record that is associated with a UR is
unavailable if the range of the scanned log is not large enough to contain all
records for a given UR.
The DISP attribute can be one of the following values: COMMITTED, ABORTED,
INFLIGHT, IN-COMMIT, IN-ABORT, POSTPONED ABORT, or INDOUBT. The
DISP attributes COMMITTED and ABORTED are used in the first section; the
remaining attributes are used in the second section.
The list in the second section shows the work that is required of DB2 at restart as
it is recorded in the log that you specified. If the log is available, the checkpoint
that is to be used is identified, as is each outstanding UR, together with the page
sets it changed. Each page set with pending writes is also identified, as is the
earliest log record that is required to complete those writes. If a log record that is
associated with a UR is unavailable, the attribute INFO=PARTIAL is displayed,
and the identification of modified page sets is incomplete for that UR.
================================================================================
DSN1150I SUMMARY OF COMPLETED EVENTS
================================================================================
DSN1157I RESTART SUMMARY
You can reduce the volume of the detail log records by specifying one or more of
the optional keywords that are listed under “Syntax and options of the DSN1LOGP
control statement” on page 808.
Figure 143. Sample data propagation information from the summary report
A detail report contains the following information for each page regression error:
v DBID
v OBID
v Page number
v Current LRSN or RBA
v Member name
v Previous level
v Previous update
v Date
v Time
A summary report contains the total number of page regressions that the utility
found as well as the following information for each table space in which it found
page regression errors:
v Database name
v Table space name
v DBID
v OBID
If no page regression errors are found, DSN1LOGP outputs a single message that
no page regression errors were found.
The sample output in Figure 144 shows detail and summary reports when page
regression errors are found.
DSN1191I:
-------------------------------------------------------------------------------------
DETAIL REPORT OF PAGE REGRESSION ERRORS
-------------------------------------------------------------------------------------
DBID OBID PAGE#    CURRENT      MEMBER PREV-LEVEL   PREV-UPDATE  DATE   TIME
---- ---- -------- ------------ ------ ------------ ------------ ------ --------
0001 00CF 0000132F B7A83F071892 0002   84A83BBEE81F B7A83C6042DF 02.140 15:29:20
0001 00CF 000086C2 B7A84BD4C3E5 0003   04A83BC42C58 B7A83C61D53E 02.140 18:01:13
0006 0009 00009DBF B7A8502A39F4 0002   04A83BC593B6 B7A83C669743 02.140 18:20:37
Figure 144. Sample DSN1LOGP detail and summary reports for page regression errors.
Note: A DB2 VSAM data set is a single piece of a nonpartitioned table space or
index, or a single partition of a partitioned table space or index. The input
must be a single z/OS sequential or VSAM data set. Concatenation of input
data sets is not supported.
Using DSN1PRNT, you can print hexadecimal dumps of DB2 data sets and
databases. If you specify the FORMAT option, DSN1PRNT formats the data and
indexes for any page that does not contain an error that would prevent formatting.
If DSN1PRNT detects such an error, it prints an error message just before the page
and dumps the page without formatting. Formatting resumes with the next page.
DSN1PRNT is especially useful when you want to identify the contents of a table
space or index. You can run DSN1PRNT on image copy data sets and on table
spaces and indexes. DSN1PRNT accepts an index image copy as input when you
specify the FULLCOPY option.
DSN1PRNT is compatible with LOB table spaces, when you specify the LOB
keyword and omit the INLCOPY keyword.
DSN1PRNT syntax: all of the following parameters are optional; alternatives within
a group are separated by a vertical bar, and restrictions on combining parameters
are given in the option descriptions.
DSN1PRNT [32K | PAGESIZE(4K | 8K | 16K | 32K)]
         [FULLCOPY | INCRCOPY | INLCOPY]
         [LARGE | LOB]
         [DSSIZE(integer G)]
         [PIECESIZ(integer K | M | G)]
         [NUMPARTS(integer)]
         [PRINT[(hexadecimal-constant,hexadecimal-constant)]]
         [EBCDIC | ASCII | UNICODE]   (1)
         [VALUE(string | hexadecimal-constant)]
         [FORMAT [EXPAND | SWONLY] [NODATA | NODATPGS]]
Notes:
1 EBCDIC is not necessarily the default if the first page of the input data set is a header page. If
the first page is a header page, DSN1PRNT uses the format information in the header page as the
default format.
Option descriptions
To run DSN1PRNT, specify one or more of the following parameters on the EXEC
statement.
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If you
specify an incorrect page size, DSN1PRNT might produce
unpredictable results.
If you do not specify the page size, DSN1PRNT tries to determine
the page size from the input data set if the first page of the input
data set is a header page. DB2 issues an error message if
DSN1PRNT cannot determine the input page size. This might
happen if the header page is not in the input data set, or if the
page size field in the header page contains an invalid page size.
FULLCOPY Specifies that a DB2 full image copy (not a DFSMSdss concurrent
copy) of your data is to be used as input. If this data is partitioned,
you also need to specify the NUMPARTS parameter to identify the
number and length of the partitions. If you specify FULLCOPY
without including a NUMPARTS specification, DSN1PRNT
assumes that the input file is not partitioned.
The FULLCOPY parameter must be specified when you use an
image copy as input to DSN1PRNT. Omitting the parameter can
cause error messages or unpredictable results.
INCRCOPY Specifies that an incremental image copy of the data is to be used
as input. If the data is partitioned, also specify NUMPARTS to
identify the number and length of the partitions. If you specify
INCRCOPY without NUMPARTS, DSN1PRNT assumes that the
input file is not partitioned.
The INCRCOPY parameter must be specified when you use an
incremental image copy as input to DSN1PRNT. Omitting the
parameter can cause error messages or unpredictable results.
INLCOPY Specifies that the input data is to be an inline copy data set.
When you use DSN1PRNT to print a page or a page range from an
inline copy that is produced by LOAD or REORG, DSN1PRNT
prints all instances of the pages. The last instance of the printed
page or pages is the last one that is created by the utility.
LARGE Specifies that the input data set is a table space that was defined
with the LARGE option, or an index on such a table space. If you
specify LARGE, DB2 assumes that the data set has a 4-GB
boundary. The recommended method of specifying a table space
that was defined with the LARGE option is DSSIZE(4G).
If you omit the LARGE or DSSIZE(4G) option when it is needed,
or if you specify LARGE for a table space that was not defined
with the LARGE option, the results from DSN1PRNT are
unpredictable.
If you specify LARGE, you cannot specify LOB or DSSIZE.
LOB Specifies that the SYSUT1 data set is a LOB table space. You cannot
specify the INLCOPY option with the LOB parameter.
DB2 attempts to determine if the input data set is a LOB data set.
If you specify the LOB option but the data set is not a LOB data
set, or if you omit the LOB option but the data set is a LOB data
set, DB2 issues an error message and DSN1PRNT terminates.
If you specify LOB, you cannot specify LARGE.
DSSIZE(integer G)
Specifies the data set size, in gigabytes, for the input data set. If
you omit the DSSIZE keyword or the LARGE keyword,
DSN1PRNT assumes the appropriate default input data set size
that is listed in Table 149.
Table 149. Default input data set sizes
Object                                                   Default input data set size (in GB)
Non-LOB linear table space or index                      2
LOB                                                      4
Partitioned table space or index with NUMPARTS = 1-16    4
Partitioned table space or index with NUMPARTS = 17-32   2
Partitioned table space or index with NUMPARTS = 33-64   1
Partitioned table space or index with NUMPARTS > 64      4
integer must match the DSSIZE value that was specified when the
table space was defined.
If you omit DSSIZE and the data set is not the assumed default
size, the results from DSN1PRNT are unpredictable.
If you specify DSSIZE, you cannot specify LARGE.
PIECESIZ(integer)
Specifies the maximum piece size (data set size) for nonpartitioned
indexes. The value that you specify must match the value that is
specified when the secondary index was created or altered.
The defaults for PIECESIZ are 2G (2 GB) for indexes that are
backed by non-large table spaces and 4G (4 GB) for indexes that
are backed by table spaces that were defined with the LARGE
option. This option is required if a print range is specified and the
piece size is not one of the default values. If PIECESIZ is omitted
and the index is backed by a table space that was defined with the
LARGE option, the LARGE keyword is required for DSN1PRNT.
The subsequent keyword K, M, or G, indicates the units of the
value that is specified in integer.
K Indicates that the integer value is to be multiplied by 1 KB
to specify the maximum piece size in bytes. integer must be
either 256 or 512.
M Indicates that the integer value is to be multiplied by 1 MB
to specify the maximum piece size in bytes. integer must be
a power of 2, between 1 and 512.
G Indicates that the integer value is to be multiplied by 1 GB
to specify the maximum piece size in bytes. integer must be
1, 2, or 4.
The valid values for piece size are:
v 1 MB or 1 GB
v 2 MB or 2 GB
v 4 MB or 4 GB
v 8 MB
v 16 MB
v 32 MB
v 64 MB
v 128 MB
v 256 KB or 256 MB
v 512 KB or 512 MB
NUMPARTS(integer)
Specifies the number of partitions that are associated with the
input data set. NUMPARTS is required if the input data set is
partitioned. When you use DSN1PRNT to print a data-partitioned
secondary index, specify the number of partitions in the index.
Valid specifications range from 1 to 4096. DSN1PRNT uses this
value to help locate the first page in a range that is to be printed. If
you omit NUMPARTS or specify it as 0, DSN1PRNT assumes that
your input file is not partitioned. If you specify a number greater
than 64, DSN1PRNT assumes that the data set is for a partitioned
table space that was defined with the LARGE option, even if the
LARGE keyword is not specified for DSN1PRNT.
DSN1PRNT cannot always validate the NUMPARTS parameter. If
you specify it incorrectly, DSN1PRNT might print the wrong data
sets or return an error message that indicates that an unexpected
page number was encountered.
PRINT(hexadecimal-constant,hexadecimal-constant)
Causes the SYSUT1 data set to be printed in hexadecimal format
on the SYSPRINT data set. This option is the default for
DSN1PRNT.
You can specify the PRINT parameter with or without page range
specifications. If you do not specify a range, all pages of the
SYSUT1 data set are printed. If you want to limit the range of pages that
are printed, you can do so by indicating the beginning and ending
page numbers with the PRINT parameter or, if you want to print a
single page, by indicating only the beginning page. In either case,
your range specifications must be from one to eight hexadecimal
characters in length.
The following example shows how to code the PRINT parameter if
you want to begin printing at page X'2F0' and to stop at page
X'35C':
PRINT(2F0,35C)
Note that the actual size of a 4-GB DB2 data set that is full is 4G -
256 x 4KB. This size also applies to data sets that are created with a
DFSMS data class that has extended addressability. When
calculating the print range of pages in a non-first data set of a
multiple data set linear table space or index with 4G DSSIZE or
PIECESIZ, use the actual data set size.
The relationship between the page size and the number of pages in
a 4-GB data set is shown in Table 150 on page 832.
Table 150. Relationship between page size and the number of pages in a 4-GB data set
Page size Number of pages
4 KB X'FFF00'
8 KB X'7FF80'
16 KB X'3FFC0'
32 KB X'1FFE0'
You can indicate the format of the row data in the PRINT output
by specifying EBCDIC, ASCII, or UNICODE. The part of the
output that is affected by these options is in bold in Figure 145.
EBCDIC
Indicates that the row data in the PRINT output is to be
displayed in EBCDIC. The default is EBCDIC if the first page
of the input data set is not a header page.
If the first page is a header page, DSN1PRNT uses the format
information in the header page as the default format. However,
if you specify EBCDIC, ASCII, or UNICODE, that format
overrides the format information in the header page. The
unformatted header page dump is always displayed in
EBCDIC, because most of the fields are in EBCDIC.
ASCII
Indicates that the row data in the PRINT output is to be
displayed in ASCII. Specify ASCII when printing table spaces
that contain ASCII data.
UNICODE
Indicates that the row data in the PRINT output is to be
displayed in Unicode. Specify UNICODE when printing table
spaces that contain Unicode data.
VALUE Causes each page of the input data set SYSUT1 to be scanned for
the character string that you specify in parentheses following the
VALUE parameter. Each page that contains that character string is
then printed in SYSPRINT. You can specify the VALUE parameter
in conjunction with any of the other DSN1PRNT parameters.
(string)
Can consist of from 1 to 20 alphanumeric EBCDIC characters.
For non-EBCDIC characters, use hexadecimal characters.
(hexadecimal-constant)
Consists of from 2 to 40 hexadecimal characters. You must
specify two apostrophe characters before and after the
hexadecimal character string.
If, for example, you want to search your input file for the string
'12345', your JCL should look like the following JCL:
//STEP1 EXEC PGM=DSN1PRNT,PARM=’VALUE(12345)’
Environment
Run DSN1PRNT as a z/OS job.
You can run DSN1PRNT even when the DB2 subsystem is not operational. If you
choose to use DSN1PRNT when the DB2 subsystem is operational, ensure that the
DB2 data sets that are to be printed are not currently allocated to DB2.
To make sure that a data set is not currently allocated to DB2, issue the DB2 STOP
DATABASE command, specifying the table spaces and indexes that you want to
print.
Authorization required
No special authorization is required. However, if any of the data sets is RACF
protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1PRNT job. See “Syntax and
options of the DSN1PRNT control statement” on page 828 for DSN1PRNT syntax
and option descriptions.
Recommendations
This section contains recommendations for running the DSN1PRNT utility.
SELECT I.CREATOR,
I.NAME,
S.PGSIZE,
CASE S.DSSIZE
WHEN 0 THEN CASE S.TYPE
WHEN ’ ’ THEN 2097152
WHEN ’I’ THEN 2097152
WHEN ’L’ THEN 4194304
WHEN ’K’ THEN 4194304
ELSE NULL
END
ELSE S.DSSIZE
END
FROM SYSIBM.SYSINDEXES I,
SYSIBM.SYSTABLES T,
SYSIBM.SYSTABLESPACE S
WHERE I.CREATOR=’DSN8610’ AND
I.NAME=’XEMP1’ AND
I.TBCREATOR=T.CREATOR AND
I.TBNAME=T.NAME AND
T.DBNAME=S.DBNAME AND
T.TSNAME=S.NAME;
Figure 146. Example SQL query that returns the page size and data set size for the page
set.
See “Data sets that REORG INDEX uses” on page 433 for information about
determining data set names.
The fifth-level qualifier in the data set name can be either I0001 or J0001. This
example uses I0001.
//PRINT2 EXEC PGM=DSN1PRNT,
// PARM=(PRINT(F0000,F000F),FORMAT,PIECESIZ(64M))
//SYSUDUMP DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DISP=OLD,DSN=DSNCAT.DSNDBD.MMRDB.NPI1.I0001.A061
Example 4: Printing a partitioned data set. The following example specifies that
DSN1PRNT is to print the data set that is identified by the SYSUT1 DD statement.
Because this data set is a table space that was defined with the LARGE option, the
DSSIZE(4G) option is specified in the parameter list for DSN1PRNT. You could
specify the LARGE option in this list instead, but specifying DSSIZE(4G) is
recommended. This input table space has 260 partitions, as indicated by the
NUMPARTS option.
//RUNPRNT1 EXEC PGM=DSN1PRNT,
// PARM=’DSSIZE(4G),PRINT,NUMPARTS(260),FORMAT’
//STEPLIB DD DSN=DB2A.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DBOM0301.TPOM0301.I0001.A259,DISP=SHR
/*
DSN1PRNT output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
For information about the format of trace records, see Appendix A of DB2
Performance Monitoring and Tuning Guide.
This portion of the syntax covers the SELECT and ACTION keywords and the
second-trace-spec; alternatives within a group are separated by a vertical bar.
SELECT function,offset,data-specification
ACTION( action[(abend-code)] | STTRACE[,action[(abend-code)]] ) [second-trace-spec]   (1)
Notes:
1 The options in the second-trace-spec do not have to be specified immediately following the
STTRACE option. However, they can be specified only if the STTRACE option is also specified.
second-trace-spec:
  ACTION2( action[(abend-code)] )
  FILTER( ACE | EB )
  COMMAND command
  SELECT2 function,offset,data-specification
In both ACTION and ACTION2, the default abend-code is X'00E60100'.
Option descriptions
START TRACE (trace-parameters)
Indicates the start of a DSN1SDMP job. START TRACE is a
required keyword and must be the first keyword that is specified
in the SDMPIN input stream. The trace parameters that you use
are those that are described in Chapter 2 of DB2 Command
Reference, except that you cannot use the subsystem recognition
character.
If the START TRACE command in the SDMPIN input stream is not
valid, or if the user is not properly authorized, the IFI
(instrumentation facility interface) returns an error code and
START TRACE does not take effect. DSN1SDMP writes the error
message to the SDMPPRNT data set.
Trace Destination: If DB2 trace data is to be written to the SDMPTRAC data set,
the trace destination must be an IFI online performance (OP) buffer. OP buffer
destinations are specified in the DEST keyword of START TRACE. Eight OP buffer
destinations exist, OP1 to OP8. The OPX trace destination assigns the next
| available OP buffer. Any record that is destined exclusively for the internal trace
| table (RES) is not eligible to be evaluated. For example, if you start IFCID(0) with
| DEST(RES), DSN1SDMP logic is not executed and the record cannot be acted upon.
The DB2 output text from the START TRACE command is written to SDMPPRNT.
START TRACE and its associated keywords must be specified first. Specify the
remaining selective dump keywords in any order following the START TRACE
command.
SELECT function,offset,data-specification
Specifies selection criteria in addition to those that are specified on the START
TRACE command. SELECT expands the data that is available for selection in a
trace record and allows more specific selection of data in the trace record than
using START TRACE alone. You can specify a maximum of eight SELECT
criteria.
The selection criteria use the concept of the current-record pointer. DB2
initializes the current-record pointer to zero, that is, at the beginning of the
trace record. For this instance of the DSN1SDMP trace, the trace record begins as shown in Figure 147.
Figure 147. Format of the DB2 trace record at data specification comparison time
An abend reason code can also be specified on this parameter. The codes
must be in the range X'00E60100' to X'00E60199'. The default value is
X'00E60100'.
STTRACE
Specifies that a second trace is to be started when a trace record passes the
selection criteria.
If you do not specify action or STTRACE, the record is written and no action is
performed.
AFTER(integer)
Specifies that the ACTION is to be performed after the trace point is reached
integer times.
integer must be between 1 and 32767. The default is AFTER(1).
FOR(integer)
Specifies the number of times that the ACTION is to take place when the
specified trace point is reached. After integer times, the trace is stopped, and
DSN1SDMP terminates.
integer must be between 1 and 32767 and includes the first action. If no
SELECT criteria are specified, use an integer greater than 1; the START TRACE
command automatically causes the action to take place one time. The default is
FOR(1).
ACTION2
Specifies the action to perform when a trace record passes the selection criteria
of the START TRACE, SELECT, and SELECT2 keywords.
Attention: The ACTION2 keyword, like the ACTION keyword, should be used
with extreme caution, because you might damage existing data. Not all abends
are recoverable, even if the ABENDRET parameter is specified. Some abends
might force the DB2 subsystem to terminate, particularly those that occur
during end-of-task or end-of-memory processing due to the agent having
experienced a previous abend.
action(abend-code)
Specifies a particular action to perform. Possible values for action are:
ABENDRET ABEND and retry the agent.
ABENDTER ABEND and terminate the agent.
An abend reason code can also be specified on this parameter. The codes
must be in the range X'00E60100' to X'00E60199'. If no abend code is specified,
X'00E60100' is used.
If you do not specify action, the record is written and no action is performed.
FILTER
Specifies that DSN1SDMP is to filter the output of the second trace based on
either an ACE or an EB.
(ACE)
Specifies that DSN1SDMP is to include trace records only for the agent
control element (ACE) that is associated with the agent when the first
action is triggered and the second trace is started.
(EB)
Specifies that DSN1SDMP is to include trace records only for the execution
block (EB) that is associated with the agent when the first action is
triggered and the second trace is started.
COMMAND
Indicates that the specified command is to be issued when a trace record
passes the selection criteria for the first trace and a second trace is started. You
can start a second trace by specifying the STTRACE option.
command
Specifies a specific command to be issued. For a complete list of
commands, see DB2 Command Reference.
FOR2(integer)
Specifies the number of times that the ACTION2 is to take place when the
specified second trace point is reached. After integer times, the second trace is
stopped, and DSN1SDMP terminates.
integer must be between 1 and 32767 and includes the first action. If no
SELECT2 criteria are specified, use an integer greater than 1; the STTRACE
option automatically causes the action to take place one time. The default is
FOR2(1).
AFTER2(integer)
Specifies that the ACTION2 is to be performed after the second trace point is
reached integer times.
integer must be between 1 and 32767. The default is AFTER2(1).
SELECT2 function,offset,data-specification
Specifies selection criteria for the second trace. This option functions like the
SELECT option, except that it pertains to the second trace only. You can start a
second trace by specifying the STTRACE option.
Environment
Run DSN1SDMP as a z/OS job, and execute it with the DSN TSO command
processor. To execute DSN1SDMP, the DB2 subsystem must be running.
The z/OS job completes only under one of the following conditions:
v The trace that is started by DSN1SDMP, together with any additional selection
criteria, meets the conditions that are specified in the FOR parameter.
v The TRACE that is started by DSN1SDMP is stopped by using the STOP TRACE
command.
v The job is canceled by the operator.
Authorization required
To execute this utility, the privilege set of the process must include one of the
following privileges or authorities:
v TRACE system privilege
v SYSOPR authority
v SYSADM authority
v MONITOR1 or MONITOR2 privileges (if you are using user-defined data sets)
The user who executes DSN1SDMP must have EXECUTE authority on the plan
that is specified in the trace-parameters of the START TRACE keyword.
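For example, assuming an authorization ID of USER01 (a hypothetical ID) and a plan such as
the DSNEDCL plan that is used in the sample job later in this section, that authority could
be granted as follows:
GRANT EXECUTE ON PLAN DSNEDCL TO USER01;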
Control statement
See “Syntax and options of the DSN1SDMP control statement” on page 837 for
DSN1SDMP syntax and option descriptions.
| To ensure that you do not take action on an IFCID 4 or IFCID 5 start or stop trace
| record, it is good practice to add
| P4,00
| DR,04,X'hhhh'
| to your control statement, where hhhh is the hex representation of the IFCID that
| you are trying to trigger on.
The DB2 subsystem name must be filled in by the user. The DSN
RUN command must specify a plan for which the user has execute
authority. DSN1SDMP does not execute the specified plan;
the plan is used only to connect to DB2.
Assigning buffers
The OPX trace destination assigns the next available OP buffer. You must specify
the OPX destination for all traces that are being recorded to an OPn buffer, thereby
avoiding the possibility of starting a trace to a buffer that has already been
assigned.
If a trace is started to an OPn buffer that has already been assigned, DSN1SDMP
waits indefinitely until the trace is manually stopped. The default for
MONITOR-type traces is the OPX destination (the next available OP buffer). Other
trace types must be explicitly directed to OP destinations via the DEST keyword of
the START TRACE command. DSN1SDMP interrogates the IFCAOPN field after
the START TRACE COMMAND call to determine if the trace was started to an OP
buffer.
Trace records are written to the SDMPTRAC data set when the trace destination is
an OP buffer (see “Trace Destination” on page 838). The instrumentation facilities
component (IFC) writes trace records to the buffer and posts DSN1SDMP to read
the buffer when it fills to half of the buffer size.
You can specify the buffer size on the BUFSIZE keyword of the START TRACE
command. All returned records are written to SDMPTRAC.
If the number of generated trace records requires a larger buffer size than was
specified, you can lose some trace records. If this happens, error message
DSN2724I is issued.
If all three events occur, an 00E601xx abend occurs. xx is an integer between 1 and
99 that DB2 obtains from the user-specified value on the ACTION keyword.
If DSN1SDMP does not finish execution, you can stop the utility by issuing the
STOP TRACE command, as in the following example:
-STOP TRACE=P CLASS(32)
A STOP TRACE or MODIFY TRACE command that is entered from a console for
the trace that is started by DSN1SDMP causes immediate abnormal termination of
DSN1SDMP processing. The IFI READA function terminates with an appropriate
IFI termination message and reason code. Additional error messages and reason
codes that are associated with the DSN1SDMP STOP TRACE command vary
depending on the specific trace command that is entered by the console operator.
If the console operator terminates the original trace by using the STOP TRACE
command, the subsequent STOP TRACE command that is issued by DSN1SDMP
fails.
If the console operator enters a MODIFY TRACE command and processing of this
command completes before the STOP TRACE command is issued by DSN1SDMP,
the modified trace is also terminated.
/*
//**********************************************************************
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROG(DSN1SDMP) PLAN(DSNEDCL)
END
//*
Example 2: Abending and retrying agent on -904 SQL CODE. The example in
Figure 149 on page 846 specifies that DB2 is to start a performance trace (which is
indicated by the letter A) and activate IFCID 53, 58. To start only those IFCIDs that
are specified in the IFCID option, use trace classes 30-32. In this example, trace
class 32 is specified. IFCIDs 53 and 58 are started and inspected to see whether they
match the SELECT criteria. These START TRACE options are explained in greater
detail in DB2 Command Reference.
| The SELECT option indicates additional criteria for data in the trace record. In this
| example, P4,00 positions the current-record pointer to the product section.
| GE,04,X'0005' ensures that the IFCID being traced is either IFCID 53 or IFCID 58 and
| is not the IFCID 4 record that is automatically generated by the START TRACE
| command. P4,08 positions the current-record pointer to data section 1 of the
| IFCID 53 or 58 record. A direct comparison is then made at decimal offset 74 for SQL
| code X'FFFFFC78'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by
the ACTION keyword. In this example, the job is to abend and retry with reason
code 00E60188. This action is to take place only once, as indicated by the FOR
option. FOR(1) is the default, and is therefore not required to be explicitly
specified.
| //SDMPIN DD *
| START TRACE=A CLASS(32) IFCID(53,58) DEST(OPX)
| FOR(1)
| AFTER(1)
| ACTION(ABENDRET(00E60188))
| SELECT
| * Position to the product section
| P4,00
| * Ensure QWHSIID = 58 or 53 (not IFCID 4)
| GE,04,X’0005’
| * Position to the data section 1
| P4,08
| * Compare SQLCODE in QW0058SQ or QW0053SQ
| DR,74,X’FFFFFC78’
| /*
|
| Figure 149. Example job that abends and retries the agent on -904 SQL code
|
Example 3: Abending and retrying on RMID 20. The example in Figure 150 on page
847 specifies that DB2 is to start a performance trace (which is indicated by the
letter P) and activate all IFCIDs in classes 3 and 8. The trace output is to be
recorded in a generic destination that uses the first free OPn slot, as indicated by
the DEST option. The TDATA (TRA) option specifies that a CPU header is to be
placed into the product section of each trace record. These START TRACE options
are explained in greater detail in DB2 Command Reference.
The SELECT option indicates additional criteria for data in the trace record. In this
example, the SELECT option first specifies that the current-record pointer is to be
placed at the 4-byte field that is located at the start of the record. The current
record pointer is then to be advanced the number of bytes that are indicated in the
2-byte field that is located at the current record pointer. The utility is then to
directly compare the data that is 4 bytes from the current-record pointer with the
value X'0025'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by
the ACTION keyword. In this example, the job is to abend and retry the agent.
Example 4: Generating a dump on SQLCODE -811 RMID16 IFCID 58. The example
in Figure 151 on page 848 specifies that DB2 is to start a performance trace (which
is indicated by the letter P) and activate all IFCIDs in class 3. The trace output is to
be recorded in the system management facility (SMF). The TDATA (COR,TRA)
option specifies that a trace header and a CPU header are to be placed into the
product section of each trace record. These START TRACE options are explained in
greater detail in DB2 Command Reference.
The SELECT option indicates additional criteria for data in the trace record. In this
example, the SELECT option first specifies that the current-record pointer is to be
placed at the 4-byte field that is located at the start of the record. The utility is then
to directly compare the data that is 2 bytes from the current-record pointer with
the value X'0116003A'. The current record pointer is then to be moved to the 4-byte
field that is located 8 bytes past the start of the current record. The utility is then
to directly compare the data that is 74 bytes from the current-record pointer with
the value X'FFFFFCD5'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by
the ACTION keyword. In this example, the job is to abend with reason code
00E60188 and retry the agent. This action is to take place only once, as indicated
by the FOR option. FOR(1) is the default, and is therefore not required to be
explicitly specified. AFTER(1) indicates that this action is to be performed the first
time the trace point is reached. AFTER(1) is also the default.
//SDMPIN DD *
START TRACE=P CLASS(3) RMID(22) DEST(SMF) TDATA(COR,TRA)
AFTER(1)
FOR(1)
SELECT
* POSITION TO HEADERS (QWHS IS ALWAYS FIRST)
P4,00
* CHECK QWHS 01, FOR RMID 16, IFCID 58
DR,02,X’0116003A’
* POSITION TO SECOND SECTION (1ST DATA SECTION)
P4,08
* COMPARE SQLCODE FOR 811
DR,74,X’FFFFFCD5’
ACTION(ABENDRET(00E60188))
/*
Figure 151. Example job that generates a dump on SQL code -811 RMID16 IFCID 58
Example 5: Starting a second trace. The example job in Figure 152 starts a trace on
IFC 196 records. An IFC 196 record is written when a lock timeout occurs. In this
example, when a lock timeout occurs, DSN1SDMP is to start a second trace, as
indicated by the ACTION(STTRACE) option. This second trace is to be an
accounting trace, as indicated by the COMMAND START TRACE(ACCTG) option.
This trace is to include records only for the ACE that is associated with the agent
that timed out, as indicated by the FILTER(ACE) option. When the qualifying
accounting record is found, DSN1SDMP generates a dump.
//SDMPIN DD *
* START ONLY IFCID 196, TIMEOUT
START TRACE=P CLASS(32) IFCID(196) DEST(SMF)
AFTER(1)
* ACTION = START ACCOUNTING TRACE
ACTION(STTRACE)
* FILTER ON JUST 196 RECORDS...
SELECT
P4,00
DR,04,X’00C4’
* WHEN ACCOUNTING IS CUT, ABEND
ACTION2(ABENDRET(00E60188))
* START THE ACCOUNTING TRACE FILTER ON THE ACE OF THE AGENT
* THAT TIMED OUT
COMMAND
START TRACE(ACCTG) CLASS(32) IFCID(3) DEST(SMF)
* Filter can be for ACE or EB
FILTER(ACE)
/*
DSN1SDMP output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
Table 152 shows the minimum and maximum limits for numeric values.
Table 152. Numeric limits
Item Limit
Smallest SMALLINT value -32768
Largest SMALLINT value 32767
Smallest INTEGER value -2147483648
Largest INTEGER value 2147483647
| Smallest BIGINT value -9223372036854775808
| Largest BIGINT value 9223372036854775807
Smallest REAL value About -7.2×10^75
Largest REAL value About 7.2×10^75
| Notes:
| 1. These are the limits for normal numbers in DECFLOAT. DECFLOAT also contains special values such as NaN
| and Infinity that are also valid. DECFLOAT also supports subnormal numbers that are outside of the documented
| range.
Table 154 shows the minimum and maximum limits for datetime values.
Table 154. Datetime limits
Item Limit
Smallest DATE value (shown in ISO format) 0001-01-01
Largest DATE value (shown in ISO format) 9999-12-31
Smallest TIME value (shown in ISO format) 00.00.00
Largest TIME value (shown in ISO format) 24.00.00
Smallest TIMESTAMP value 0001-01-01-00.00.00.000000
Largest TIMESTAMP value 9999-12-31-24.00.00.000000
| Table 158. XML schema and XML decomposition stored procedures (continued)
| Stored procedure name: XDBDECOMPXML
| Function: The XDBDECOMPXML procedure extracts values from serialized XML data and
| populates relational tables with the values.
| For information, see: The topic “The XML decomposition stored procedure
| (XDBDECOMPXML)” in DB2 Application Programming and SQL Guide
Invoking utilities as a stored procedure (DSNUTILS)
The DSNUTILS stored procedure enables you to use the SQL CALL statement to
execute DB2 utilities from a DB2 application program that specifies EBCDIC input.
When called, DSNUTILS performs the following actions:
v Dynamically allocates the specified data sets
v Creates the utility input (SYSIN) stream
v Invokes DB2 utilities (program DSNUTILB)
v Deletes all the rows that are currently in the created temporary table
(SYSIBM.SYSPRINT)
v Captures the utility output stream (SYSPRINT) into a created temporary table
(SYSIBM.SYSPRINT)
v Declares a cursor to select from SYSPRINT:
DECLARE SYSPRINT CURSOR WITH RETURN FOR
SELECT SEQNO, TEXT FROM SYSPRINT
ORDER BY SEQNO;
v Opens the SYSPRINT cursor and returns.
The calling program then fetches from the returned result set to obtain the
captured utility output.
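For example, a calling program might retrieve the SYSPRINT result set with embedded SQL
similar to the following sketch (the cursor and host-variable names are illustrative):
EXEC SQL ASSOCIATE LOCATORS (:UTILRS)
WITH PROCEDURE SYSPROC.DSNUTILS
END-EXEC.
EXEC SQL ALLOCATE UTILCSR CURSOR FOR RESULT SET :UTILRS
END-EXEC.
EXEC SQL FETCH UTILCSR INTO :PRT-SEQNO, :PRT-TEXT
END-EXEC.
The FETCH is repeated until SQLCODE +100 indicates that the end of the result set is reached.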
To execute the utility, you must use a privilege set that includes the
authorization to run the specified utility.
If the DSNUTILS stored procedure invokes a new utility, refer to Table 159 on page
861 for information about the default data dispositions that are specified for
dynamically allocated data sets. This table lists the DD name that is used to
identify the data set and the default dispositions for the data set by utility.
Table 159. Data dispositions for dynamically allocated data sets
Columns: DD name, CHECK DATA, CHECK LOB, COPY, COPYTOCOPY, LOAD, MERGECOPY,
CHECK INDEX or REBUILD INDEX, REORG INDEX, REORG TABLESPACE, UNLOAD
SYSREC ignored ignored ignored ignored OLD KEEP ignored ignored ignored NEW NEW CATLG
KEEP CATLG CATLG
CATLG
SYSDISC ignored ignored ignored ignored NEW ignored ignored ignored NEW ignored
CATLG CATLG
CATLG CATLG
SYSPUNCH ignored ignored ignored ignored ignored ignored ignored ignored NEW NEW CATLG
CATLG CATLG
CATLG
SYSCOPY ignored ignored NEW ignored NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG
SYSCOPY2 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY1 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY2 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSUT1 NEW NEW ignored ignored NEW ignored NEW NEW NEW ignored
DELETE DELETE DELETE DELETE CATLG DELETE
CATLG CATLG CATLG CATLG CATLG CATLG
SORTOUT NEW ignored ignored ignored NEW ignored ignored ignored NEW ignored
DELETE DELETE DELETE
CATLG CATLG CATLG
SYSMAP ignored ignored ignored ignored NEW ignored ignored ignored ignored ignored
CATLG
CATLG
SYSERR NEW ignored ignored ignored NEW ignored ignored ignored ignored ignored
CATLG CATLG
CATLG CATLG
FILTER ignored ignored NEW ignored ignored ignored ignored ignored ignored ignored
DELETE
CATLG
If the DSNUTILS stored procedure restarts a current utility, refer to Table 160 for
information about the default data dispositions that are specified for
dynamically-allocated data sets on RESTART. This table lists the DD name that is
used to identify the data set and the default dispositions for the data set by utility.
Table 160. Data dispositions for dynamically allocated data sets on RESTART
Columns: DD name, CHECK DATA, CHECK LOB, COPY, COPYTOCOPY, LOAD, MERGECOPY,
CHECK INDEX or REBUILD INDEX, REORG INDEX, REORG TABLESPACE, UNLOAD
SYSREC ignored ignored ignored ignored OLD ignored ignored ignored MOD MOD CATLG
KEEP CATLG CATLG
KEEP CATLG
SYSDISC ignored ignored ignored ignored MOD ignored ignored ignored MOD ignored
CATLG CATLG
CATLG CATLG
SYSPUNCH ignored ignored ignored ignored ignored ignored ignored ignored MOD MOD CATLG
CATLG CATLG
CATLG
Table 160. Data dispositions for dynamically allocated data sets on RESTART (continued)
Columns: DD name, CHECK DATA, CHECK LOB, COPY, COPYTOCOPY, LOAD, MERGECOPY,
CHECK INDEX or REBUILD INDEX, REORG INDEX, REORG TABLESPACE, UNLOAD
SYSCOPY ignored ignored MOD ignored MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG
SYSCOPY2 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY1 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY2 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSUT1 MOD MOD ignored ignored MOD ignored MOD MOD MOD ignored
DELETE DELETE DELETE DELETE CATLG DELETE
CATLG CATLG CATLG CATLG CATLG CATLG
SORTOUT MOD ignored ignored ignored MOD ignored ignored ignored MOD ignored
DELETE DELETE DELETE
CATLG CATLG CATLG
SYSMAP ignored ignored ignored ignored MOD ignored ignored ignored ignored ignored
CATLG
CATLG
SYSERR MOD ignored ignored ignored MOD ignored ignored ignored ignored ignored
CATLG CATLG
CATLG CATLG
FILTER ignored ignored MOD ignored ignored ignored ignored ignored ignored ignored
DELETE
CATLG
6. Use ANY to indicate that TEMPLATE dynamic allocation is to be used. This value suppresses the dynamic allocation that is
normally performed by DSNUTILS.
COPY
COPYTOCOPY
DIAGNOSE
LOAD
MERGECOPY
MODIFY RECOVERY
MODIFY STATISTICS
QUIESCE
REBUILD INDEX
RECOVER
REORG INDEX
REORG LOB
REORG TABLESPACE
REPAIR
REPORT RECOVERY
REPORT TABLESPACESET
RUNSTATS INDEX
RUNSTATS TABLESPACE
STOSPACE
UNLOAD
Recommendation: Invoke DSNUTILS with a utility-name of ANY and omit
all of the xxxdsn, xxxdevt, and xxxspace parameters. Use TEMPLATE
statements to allocate the data sets.
recdsn Specifies the cataloged data set name that is required by LOAD for input,
or by REORG TABLESPACE as the unload data set. recdsn is required for
LOAD. It is also required for REORG TABLESPACE unless you also
specified NOSYSREC or SHRLEVEL CHANGE. If you specify recdsn, the
data set is allocated to the SYSREC DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the INDDN parameter for LOAD, the specified ddname
value must be SYSREC.
If you specify the UNLDDN parameter for REORG TABLESPACE, the
specified ddname value must be SYSREC.
recdevt Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the recdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
recspace
Specifies the number of cylinders to use as the primary space allocation for
the recdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
discdsn Specifies the cataloged data set name that is used by LOAD as a discard
data set to hold records not loaded, and by REORG TABLESPACE as a
discard data set to hold records that are not reloaded. If you specify
discdsn, the data set is allocated to the SYSDISC DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the DISCARDDN parameter for LOAD or REORG
TABLESPACE, the specified ddname value must be SYSDISC.
discdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the discdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
discspace
Specifies the number of cylinders to use as the primary space allocation for
the discdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
pnchdsn
Specifies the cataloged data set name that REORG TABLESPACE UNLOAD
EXTERNAL or REORG TABLESPACE DISCARD uses to hold the
generated LOAD utility control statements. If you specify a value for
pnchdsn, the data set is allocated to the SYSPUNCH DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the PUNCHDDN parameter for REORG TABLESPACE, the
specified ddname value must be SYSPUNCH.
pnchdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the pnchdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
pnchspace
Specifies the number of cylinders to use as the primary space allocation for
the pnchdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
copydsn1
Specifies the name of the required target (output) data set, which is needed
when you specify the COPY, COPYTOCOPY, or MERGECOPY utilities. It
is optional for LOAD and REORG TABLESPACE. If you specify copydsn1,
the data set is allocated to the SYSCOPY DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the COPYDDN parameter for COPY, COPYTOCOPY,
MERGECOPY, LOAD, or REORG TABLESPACE, the specified ddname1
value must be SYSCOPY.
copydevt1
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the copydsn1 data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
copyspace1
Specifies the number of cylinders to use as the primary space allocation for
the copydsn1 data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
copydsn2
Specifies the name of the cataloged data set that is used as a target
(output) data set for the backup copy. It is optional for COPY,
rcpyspace2
Specifies the number of cylinders to use as the primary space allocation for
the rcpydsn2 data set. The secondary space allocation is 10% of the primary
space allocation
This is an input parameter of type SMALLINT.
workdsn1
Specifies the name of the cataloged data set that is required as a work data
set for sort input and output. It is required for CHECK DATA, CHECK
INDEX and REORG INDEX. It is also required for LOAD and REORG
TABLESPACE unless you also specify the SORTKEYS keyword. It is
optional for REBUILD INDEX. If you specify workdsn1, the data set is
allocated to the SYSUT1 DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the WORKDDN parameter for CHECK DATA, CHECK
INDEX, LOAD, REORG INDEX, REORG TABLESPACE, or REBUILD
INDEX, the specified ddname value must be SYSUT1.
workdevt1
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the workdsn1 data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
workspace1
Specifies the number of cylinders to use as the primary space allocation for
the workdsn1 data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
workdsn2
Specifies the name of the cataloged data set that is required as a work data
set for sort input and output. It is required for CHECK DATA. It is also
required if you use REORG INDEX to reorganize non-unique type 1
indexes. It is required for LOAD or REORG TABLESPACE unless you also
specify the SORTKEYS keyword. If you specify workdsn2, the data set is
allocated to the SORTOUT DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the WORKDDN parameter for CHECK DATA, LOAD,
REORG INDEX, or REORG TABLESPACE, the specified ddname value must
be SORTOUT.
workdevt2
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the workdsn2 data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
workspace2
Specifies the number of cylinders to use as the primary space allocation for
the workdsn2 data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
mapdsn
Specifies the name of the cataloged data set that is required as a work data
DSNUTILS output
DB2 creates the result set according to the DECLARE statement that is shown
under “Example of declaring a cursor to select from SYSPRINT” on page 860.
If DSNUTILB abends, the abend codes are returned as DSNUTILS return codes.
The BIND PACKAGE statement for the DSNUTILU stored procedure determines
the character set of the resulting utility SYSPRINT output that is placed in the
SYSIBM.SYSPRINT table. If ENCODING(EBCDIC) is specified, the SYSPRINT
contents are in EBCDIC. If ENCODING(UNICODE) is specified, the SYSPRINT
contents are in Unicode. The default install job, DSNTIJSG, is shipped with
ENCODING(EBCDIC).
To execute the utility, you must use a privilege set that includes the
authorization to run the specified utility.
//*************************************************************
//* JCL FOR RUNNING THE WLM-ESTABLISHED STORED PROCEDURES
//* ADDRESS SPACE
//* RGN -- THE MVS REGION SIZE FOR THE ADDRESS SPACE.
//* DB2SSN -- THE DB2 SUBSYSTEM NAME.
//* APPLENV -- THE MVS WLM APPLICATION ENVIRONMENT
//* SUPPORTED BY THIS JCL PROCEDURE.
//*
//*************************************************************
//DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN
//IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
// PARM=’&DB2SSN,1,&APPLENV’
//STEPLIB DD DISP=SHR,DSN=CEE.V!R!M!.SCEERUN
// DD DISP=SHR,DSN=DSN!!0.SDSNLOAD
//UTPRINT DD SYSOUT=*
//RNPRIN01 DD SYSOUT=*
//DSSPRINT DD SYSOUT=*
//SYSIN DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSPRINT DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
Figure 153. Sample PROC for running the WLM-established stored procedures
DSNUTILU output
DB2 creates the result set according to the DECLARE statement that is shown
under “Example of declaring a cursor to select from SYSPRINT” on page 860.
DSNACCOR uses the set of criteria that are shown in “DSNACCOR formulas for
recommending actions” on page 882 to evaluate table spaces and index spaces. By
default, DSNACCOR evaluates all table spaces and index spaces in the subsystem
that have entries in the real-time statistics tables. However, you can override this
default through input parameters.
| DSNACCOR creates and uses declared temporary tables. Therefore, before you can
| invoke DSNACCOR, you need to create a TEMP database and segmented table
| spaces in the TEMP database. For information about creating TEMP databases and
| table spaces, see the CREATE DATABASE and CREATE TABLESPACE statements in
| Chapter 5 of DB2 SQL Reference.
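As an illustrative sketch only (the object names are arbitrary, and the statements assume
that your subsystem supports the AS TEMP clause that is described in DB2 SQL Reference),
the statements might look like the following ones:
CREATE DATABASE UTILTEMP AS TEMP STOGROUP SYSDEFLT;
CREATE TABLESPACE UTILTMP1 IN UTILTEMP SEGSIZE 4 BUFFERPOOL BP0;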
| You should bind the package for DSNACCOR with isolation UR to avoid lock
| contention. You can find the installation steps for DSNACCOR in job DSNTIJSG.
The owner of the package or plan that contains the CALL statement must also
have:
v SELECT authority on the real-time statistics tables
v The DISPLAY system privilege
contains one or more of the following values. Each value is enclosed in single
quotation marks and separated from other values by a space.
ALL Makes recommendations for all of the following actions.
COPY Makes a recommendation on whether to perform an image
copy.
RUNSTATS Makes a recommendation on whether to perform RUNSTATS.
REORG Makes a recommendation on whether to perform REORG.
Choosing this value causes DSNACCOR to process the
EXTENTS value also.
EXTENTS Indicates when data sets have exceeded a user-specified extents
limit.
RESTRICT Indicates which objects are in a restricted state.
Restricted
A parameter that is reserved for future use. Specify the null value for this
parameter. Restricted is an input parameter of type VARCHAR(80).
CRUpdatedPagesPct
Specifies a criterion for recommending a full image copy on a table space or
index space. If the following condition is true for a table space, DSNACCOR
recommends an image copy:
The total number of distinct updated pages, divided by the total number of
preformatted pages (expressed as a percentage) is greater than
CRUpdatedPagesPct.
See item 2 in Figure 154 on page 883. If both of the following conditions are
true for an index space, DSNACCOR recommends an image copy:
v The total number of distinct updated pages, divided by the total number of
preformatted pages (expressed as a percentage) is greater than
CRUpdatedPagesPct.
v The number of active pages in the index space or partition is greater than
CRIndexSize. See items 2 and 3 in Figure 155 on page 883.
CRUpdatedPagesPct is an input parameter of type INTEGER. The default is 20.
CRChangesPct
Specifies a criterion for recommending a full image copy on a table space or
index space. If the following condition is true for a table space, DSNACCOR
recommends an image copy:
The total number of insert, update, and delete operations since the last
image copy, divided by the total number of rows or LOBs in a table space
or partition (expressed as a percentage) is greater than CRChangesPct.
See item 3 in Figure 154 on page 883. If both of the following conditions are
true for an index space, DSNACCOR recommends an image copy:
v The total number of insert and delete operations since the last image copy,
divided by the total number of entries in the index space or partition
(expressed as a percentage) is greater than CRChangesPct.
v The number of active pages in the index space or partition is greater than
CRIndexSize.
See items 2 and 4 in Figure 155 on page 883. CRChangesPct is an input
parameter of type INTEGER. The default is 10.
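For example, with the default CRChangesPct value of 10, a table space partition that
contains 1000 rows and that has had 150 insert, update, and delete operations since the
last image copy has a change percentage of 15, so DSNACCOR recommends a full image copy
for it.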
CRDaySncLastCopy
Specifies a criterion for recommending a full image copy on a table space or
index space. If the number of days since the last image copy is greater than
this value, DSNACCOR recommends an image copy. (See item 1 in Figure 154
on page 883 and item 1 in Figure 155 on page 883.) CRDaySncLastCopy is an
input parameter of type INTEGER. The default is 7.
ICRUpdatedPagesPct
Specifies a criterion for recommending an incremental image copy on a table
space. If the following condition is true, DSNACCOR recommends an
incremental image copy:
The number of distinct pages that were updated since the last image copy,
divided by the total number of active pages in the table space or partition
(expressed as a percentage) is greater than ICRUpdatedPagesPct.
(See item 1 in Figure 156 on page 883.) ICRUpdatedPagesPct is an input
parameter of type INTEGER. The default is 1.
ICRChangesPct
Specifies a criterion for recommending an incremental image copy on a table
space. If the following condition is true, DSNACCOR recommends an
incremental image copy:
The ratio of the number of insert, update, or delete operations since the last
image copy, to the total number of rows or LOBs in a table space or
partition (expressed as a percentage) is greater than ICRChangesPct.
(See item 2 in Figure 156 on page 883.) ICRChangesPct is an input parameter of
type INTEGER. The default is 1.
CRIndexSize
Specifies, when combined with CRUpdatedPagesPct or CRChangesPct, a criterion
for recommending a full image copy on an index space. (See items 2, 3, and 4
in Figure 155 on page 883.) CRIndexSize is an input parameter of type
INTEGER. The default is 50.
RRTInsDelUpdPct
Specifies a criterion for recommending that the REORG utility is to be run on a
table space. If the following condition is true, DSNACCOR recommends
running REORG:
The sum of insert, update, and delete operations since the last REORG,
divided by the total number of rows or LOBs in the table space or partition
(expressed as a percentage) is greater than RRTInsDelUpdPct.
(See item 1 in Figure 157 on page 884.) RRTInsDelUpdPct is an input parameter
of type INTEGER. The default is 20.
RRTUnclustInsPct
Specifies a criterion for recommending that the REORG utility is to be run on a
table space. If the following condition is true, DSNACCOR recommends
running REORG:
The number of unclustered insert operations, divided by the total number
of rows or LOBs in the table space or partition (expressed as a percentage)
is greater than RRTUnclustInsPct.
(See item 2 in Figure 157 on page 884.) RRTUnclustInsPct is an input parameter
of type INTEGER. The default is 10.
RRTDisorgLOBPct
Specifies a criterion for recommending that the REORG utility is to be run on a
table space. If the following condition is true, DSNACCOR recommends
running REORG:
The number of imperfectly chunked LOBs, divided by the total number of
rows or LOBs in the table space or partition (expressed as a percentage) is
greater than RRTDisorgLOBPct.
(See item 3 in Figure 157 on page 884.) RRTDisorgLOBPct is an input parameter
of type INTEGER. The default is 10.
RRTMassDelLimit
Specifies a criterion for recommending that the REORG utility is to be run on a
table space. If one of the following values is greater than RRTMassDelLimit,
DSNACCOR recommends running REORG:
v The number of mass deletes from a segmented or LOB table space since the
last REORG or LOAD REPLACE
v The number of dropped tables from a nonsegmented table space since the
last REORG or LOAD REPLACE
SRIInsDelPct
Specifies, when combined with SRIInsDelAbs, a criterion for recommending
that the RUNSTATS utility is to be run on an index space. If both of the
following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of inserted and deleted index entries since the last RUNSTATS
on an index space or partition, divided by the total number of index entries
in the index space or partition (expressed as a percentage) is greater than
SRIInsDelUpdPct.
v The sum of the number of inserted and deleted index entries since the last
RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs.
(See items 1 and 2 in Figure 160 on page 884.) SRIInsDelPct is an input
parameter of type INTEGER. The default is 20.
SRIInsDelAbs
Specifies, when combined with SRIInsDelPct, a criterion for recommending
that the RUNSTATS utility is to be run on an index space. If both of the
following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of inserted and deleted index entries since the last RUNSTATS
on an index space or partition, divided by the total number of index entries
in the index space or partition (expressed as a percentage) is greater than
SRIInsDelUpdPct.
v The sum of the number of inserted and deleted index entries since the last
RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs.
(See items 1 and 2 in Figure 160 on page 884.) SRIInsDelAbs is an input
parameter of type INTEGER. The default is 0.
SRIMassDelLimit
Specifies a criterion for recommending that the RUNSTATS utility is to be run
on an index space. If the number of mass deletes from an index space or
partition since the last REORG, REBUILD INDEX, or LOAD REPLACE is
greater than this value, DSNACCOR recommends running RUNSTATS.
(See item 3 in Figure 160 on page 884.) SRIMassDelLimit is an input parameter
of type INTEGER. The default is 0.
ExtentLimit
Specifies a criterion for recommending that the RUNSTATS or REORG utility is
to be run on a table space or index space. Also specifies that DSNACCOR is to
warn the user that the table space or index space has used too many extents.
DSNACCOR recommends running RUNSTATS or REORG, and altering data
set allocations if the following condition is true:
v The number of physical extents in the index space, table space, or partition
is greater than ExtentLimit.
(See Figure 161 on page 885.) ExtentLimit is an input parameter of type
INTEGER. The default is 50.
LastStatement
When DSNACCOR returns a severe error (return code 12), this field contains
the SQL statement that was executing when the error occurred. LastStatement is
an output parameter of type VARCHAR(8012).
ReturnCode
The return code from DSNACCOR execution. Possible values are:
0 DSNACCOR executed successfully. The ErrorMsg parameter contains
the approximate percentage of the total number of objects in the
subsystem that have information in the real-time statistics tables.
The figure below shows the formula that DSNACCOR uses to recommend a full
image copy on a table space.
Figure 154. DSNACCOR formula for recommending a full image copy on a table space
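Restated from the option descriptions above (a condensed sketch of the criteria only, not the
exact published formula), DSNACCOR recommends a full image copy on a table space when any of
the following comparisons is true:
\[
\frac{\text{updated pages}}{\text{preformatted pages}} \times 100 > \text{CRUpdatedPagesPct}
\quad\lor\quad
\frac{\text{inserts} + \text{updates} + \text{deletes since the last copy}}{\text{rows or LOBs}} \times 100 > \text{CRChangesPct}
\quad\lor\quad
\text{days since the last copy} > \text{CRDaySncLastCopy}
\]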
The figure below shows the formula that DSNACCOR uses to recommend a full
image copy on an index space.
Figure 155. DSNACCOR formula for recommending a full image copy on an index space
The figure below shows the formula that DSNACCOR uses to recommend an
incremental image copy on a table space.
Figure 156. DSNACCOR formula for recommending an incremental image copy on a table
space
The figure below shows the formula that DSNACCOR uses to recommend a
REORG on a table space. If the table space is a LOB table space and ChkLvl=1,
the formula does not include EXTENTS>ExtentLimit.
The figure below shows the formula that DSNACCOR uses to recommend a
REORG on an index space.
The figure below shows the formula that DSNACCOR uses to recommend
RUNSTATS on a table space.
The figure below shows the formula that DSNACCOR uses to recommend
RUNSTATS on an index space.
The figure below shows the formula that DSNACCOR uses to warn that too many index
space or table space extents have been used.
EXTENTS>ExtentLimit
Figure 161. DSNACCOR formula for warning that too many data set extents for a table space
or index space are used
To create the exception table, execute a CREATE TABLE statement similar to the
following one. You can include other columns in the exception table, but you must
include at least the columns that are shown.
CREATE TABLE DSNACC.EXCEPT_TBL
(DBNAME CHAR(8) NOT NULL,
NAME CHAR(8) NOT NULL,
QUERYTYPE CHAR(40))
CCSID EBCDIC;
Recommendation: If you plan to put many rows in the exception table, create a
nonunique index on DBNAME, NAME, and QUERYTYPE.
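For example (the index name is arbitrary):
CREATE INDEX DSNACC.XEXCEPT_TBL
ON DSNACC.EXCEPT_TBL
(DBNAME, NAME, QUERYTYPE);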
After you create the exception table, insert a row for each object for which you
want to include information in the INEXCEPTTABLE column.
Example: Suppose that you want the INEXCEPTTABLE column to contain the
string ’IRRELEVANT’ for table space STAFF in database DSNDB04. You also want
the INEXCEPTTABLE column to contain ’CURRENT’ for table space DSN8S91D in
database DSN8D91A. Execute these INSERT statements:
INSERT INTO DSNACC.EXCEPT_TBL VALUES(’DSNDB04 ’, ’STAFF ’, ’IRRELEVANT’);
INSERT INTO DSNACC.EXCEPT_TBL VALUES(’DSN8D91A’, ’DSN8S91D’, ’CURRENT’);
Example: Suppose that you want to include all rows for database DSNDB04 in the
recommendations result set, except for those rows that contain the string
’IRRELEVANT’ in the INEXCEPTTABLE column. You might include the following
search condition in your Criteria input parameter:
DBNAME=’DSNDB04’ AND INEXCEPTTABLE<>’IRRELEVANT’
WORKING-STORAGE SECTION.
...
***********************
* DSNACCOR PARAMETERS *
***********************
01 QUERYTYPE.
49 QUERYTYPE-LN PICTURE S9(4) COMP VALUE 40.
49 QUERYTYPE-DTA PICTURE X(40) VALUE ’ALL’.
01 OBJECTTYPE.
49 OBJECTTYPE-LN PICTURE S9(4) COMP VALUE 3.
49 OBJECTTYPE-DTA PICTURE X(3) VALUE ’ALL’.
01 ICTYPE.
49 ICTYPE-LN PICTURE S9(4) COMP VALUE 1.
49 ICTYPE-DTA PICTURE X(1) VALUE ’B’.
01 STATSSCHEMA.
49 STATSSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 STATSSCHEMA-DTA PICTURE X(128) VALUE ’SYSIBM’.
01 CATLGSCHEMA.
49 CATLGSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 CATLGSCHEMA-DTA PICTURE X(128) VALUE ’SYSIBM’.
01 LOCALSCHEMA.
49 LOCALSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 LOCALSCHEMA-DTA PICTURE X(128) VALUE ’DSNACC’.
01 CHKLVL PICTURE S9(9) COMP VALUE +3.
01 CRITERIA.
49 CRITERIA-LN PICTURE S9(4) COMP VALUE 4096.
49 CRITERIA-DTA PICTURE X(4096) VALUE SPACES.
01 RESTRICTED.
49 RESTRICTED-LN PICTURE S9(4) COMP VALUE 80.
49 RESTRICTED-DTA PICTURE X(80) VALUE SPACES.
01 CRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRCHANGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRDAYSNCLASTCOPY PICTURE S9(9) COMP VALUE +0.
01 ICRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0.
01 ICRCHANGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRINDEXSIZE PICTURE S9(9) COMP VALUE +0.
01 RRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 RRTUNCLUSTINSPCT PICTURE S9(9) COMP VALUE +0.
01 RRTDISORGLOBPCT PICTURE S9(9) COMP VALUE +0.
01 RRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRTINDREFLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRIINSERTDELETEPCT PICTURE S9(9) COMP VALUE +0.
01 RRIAPPENDINSERTPCT PICTURE S9(9) COMP VALUE +0.
01 RRIPSEUDODELETEPCT PICTURE S9(9) COMP VALUE +0.
01 RRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRILEAFLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRINUMLEVELSLIMIT PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDABS PICTURE S9(9) COMP VALUE +0.
01 SRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDABS PICTURE S9(9) COMP VALUE +0.
01 SRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 EXTENTLIMIT PICTURE S9(9) COMP VALUE +0.
01 LASTSTATEMENT.
49 LASTSTATEMENT-LN PICTURE S9(4) COMP VALUE 8012.
49 LASTSTATEMENT-DTA PICTURE X(8012) VALUE SPACES.
01 RETURNCODE PICTURE S9(9) COMP VALUE +0.
01 ERRORMSG.
49 ERRORMSG-LN PICTURE S9(4) COMP VALUE 1331.
49 ERRORMSG-DTA PICTURE X(1331) VALUE SPACES.
01 IFCARETCODE PICTURE S9(9) COMP VALUE +0.
01 IFCARESCODE PICTURE S9(9) COMP VALUE +0.
01 EXCESSBYTES PICTURE S9(9) COMP VALUE +0.
*****************************************
* INDICATOR VARIABLES. *
* INITIALIZE ALL NON-ESSENTIAL INPUT *
* VARIABLES TO -1, TO INDICATE THAT THE *
* INPUT VALUE IS NULL. *
*****************************************
01 QUERYTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 OBJECTTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 ICTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 STATSSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CATLGSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 LOCALSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CHKLVL-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRITERIA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RESTRICTED-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRUPDATEDPAGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRCHANGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRDAYSNCLASTCOPY-IND PICTURE S9(4) COMP-4 VALUE -1.
01 ICRUPDATEDPAGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 ICRCHANGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRINDEXSIZE-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTUNCLUSTINSPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTDISORGLOBPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINDREFLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIINSERTDELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIAPPENDINSERTPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIPSEUDODELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRILEAFLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRINUMLEVELSLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDABS-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDABS-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 EXTENTLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 LASTSTATEMENT-IND PICTURE S9(4) COMP-4 VALUE +0.
01 RETURNCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 ERRORMSG-IND PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARETCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARESCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 EXCESSBYTES-IND PICTURE S9(4) COMP-4 VALUE +0.
PROCEDURE DIVISION.
...
*********************************************************
* SET VALUES FOR DSNACCOR INPUT PARAMETERS: *
* - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK *
* FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT *
* TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE *
* RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19) *
* - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO *
* MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES *
* DSN8D91A AND DSN8D91L. *
*****************
* CALL DSNACCOR *
*****************
EXEC SQL
CALL SYSPROC.DSNACCOR
(:QUERYTYPE :QUERYTYPE-IND,
:OBJECTTYPE :OBJECTTYPE-IND,
:ICTYPE :ICTYPE-IND,
:STATSSCHEMA :STATSSCHEMA-IND,
:CATLGSCHEMA :CATLGSCHEMA-IND,
:LOCALSCHEMA :LOCALSCHEMA-IND,
:CHKLVL :CHKLVL-IND,
:CRITERIA :CRITERIA-IND,
:RESTRICTED :RESTRICTED-IND,
:CRUPDATEDPAGESPCT :CRUPDATEDPAGESPCT-IND,
:CRCHANGESPCT :CRCHANGESPCT-IND,
:CRDAYSNCLASTCOPY :CRDAYSNCLASTCOPY-IND,
:ICRUPDATEDPAGESPCT :ICRUPDATEDPAGESPCT-IND,
:ICRCHANGESPCT :ICRCHANGESPCT-IND,
:CRINDEXSIZE :CRINDEXSIZE-IND,
:RRTINSDELUPDPCT :RRTINSDELUPDPCT-IND,
:RRTUNCLUSTINSPCT :RRTUNCLUSTINSPCT-IND,
:RRTDISORGLOBPCT :RRTDISORGLOBPCT-IND,
:RRTMASSDELLIMIT :RRTMASSDELLIMIT-IND,
:RRTINDREFLIMIT :RRTINDREFLIMIT-IND,
:RRIINSERTDELETEPCT :RRIINSERTDELETEPCT-IND,
:RRIAPPENDINSERTPCT :RRIAPPENDINSERTPCT-IND,
:RRIPSEUDODELETEPCT :RRIPSEUDODELETEPCT-IND,
:RRIMASSDELLIMIT :RRIMASSDELLIMIT-IND,
:RRILEAFLIMIT :RRILEAFLIMIT-IND,
:RRINUMLEVELSLIMIT :RRINUMLEVELSLIMIT-IND,
:SRTINSDELUPDPCT :SRTINSDELUPDPCT-IND,
:SRTINSDELUPDABS :SRTINSDELUPDABS-IND,
:SRTMASSDELLIMIT :SRTMASSDELLIMIT-IND,
:SRIINSDELUPDPCT :SRIINSDELUPDPCT-IND,
:SRIINSDELUPDABS :SRIINSDELUPDABS-IND,
:SRIMASSDELLIMIT :SRIMASSDELLIMIT-IND,
:EXTENTLIMIT :EXTENTLIMIT-IND,
:LASTSTATEMENT :LASTSTATEMENT-IND,
:RETURNCODE :RETURNCODE-IND,
:ERRORMSG :ERRORMSG-IND,
:IFCARETCODE :IFCARETCODE-IND,
:IFCARESCODE :IFCARESCODE-IND,
:EXCESSBYTES :EXCESSBYTES-IND)
END-EXEC.
*************************************************************
* ASSUME THAT THE SQL CALL RETURNED +466, WHICH MEANS THAT *
* RESULT SETS WERE RETURNED. RETRIEVE RESULT SETS. *
*************************************************************
* LINK EACH RESULT SET TO A LOCATOR VARIABLE
EXEC SQL ASSOCIATE LOCATORS (:LOC1, :LOC2)
WITH PROCEDURE SYSPROC.DSNACCOR
END-EXEC.
* LINK A CURSOR TO EACH RESULT SET
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC1
END-EXEC.
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :LOC2
END-EXEC.
* PERFORM FETCHES USING C1 TO RETRIEVE ALL ROWS FROM FIRST RESULT SET
* PERFORM FETCHES USING C2 TO RETRIEVE ALL ROWS FROM SECOND RESULT SET
DSNACCOR output
If DSNACCOR executes successfully, in addition to the output parameters
described in “DSNACCOR option descriptions” on page 874, DSNACCOR returns
two result sets.
The first result set contains the results from IFI COMMAND calls that DSNACCOR
makes. The following table shows the format of the first result set.
Table 161. Result set row for first DSNACCOR result set
Column name Data type Contents
RS_SEQUENCE INTEGER Sequence number of the output line
RS_DATA CHAR(80) A line of command output
The second result set contains DSNACCOR's recommendations. This result set
contains one or more rows for a table space or index space. A nonpartitioned table
space or nonpartitioning index space can have at most one row in the result set. A
partitioned table space or partitioning index space can have at most one row for
each partition. A table space, index space, or partition has a row in the result set if
both of the following conditions are true:
v If the Criteria input parameter contains a search condition, the search condition
is true for the table space, index space, or partition.
v DSNACCOR recommends at least one action for the table space, index space, or
partition.
The following table shows the columns of a result set row.
Table 162. Result set row for second DSNACCOR result set
Column name Data type Description
DBNAME CHAR(8) Name of the database that contains the object.
NAME CHAR(8) Table space or index space name.
PARTITION INTEGER Data set number or partition number.
OBJECTTYPE CHAR(2) DB2 object type:
v TS for a table space
v IX for an index space
OBJECTSTATUS CHAR(36) Status of the object:
v ORPHANED, if the object is an index space with no
corresponding table space, or if the object does not exist
v If the object is in a restricted state, one of the following
values:
– TS=restricted-state, if OBJECTTYPE is TS
– IX=restricted-state, if OBJECTTYPE is IX
restricted-state is one of the status codes that appear in
DISPLAY DATABASE output. See Chapter 2 of DB2
Command Reference for details.
v A, if the object is in an advisory state.
v L, if the object is a logical partition, but not in an advisory
state.
v AL, if the object is a logical partition and in an advisory
state.
IMAGECOPY CHAR(3) COPY recommendation:
v If OBJECTTYPE is TS: FUL (full image copy), INC
(incremental image copy), or NO
v If OBJECTTYPE is IX: YES or NO
RUNSTATS CHAR(3) RUNSTATS recommendation: YES or NO.
EXTENTS CHAR(3) Indicates whether the data sets for the object have exceeded
ExtentLimit: YES or NO.
REORG CHAR(3) REORG recommendation: YES or NO.
Table 162. Result set row for second DSNACCOR result set (continued)
Column name Data type Description
INEXCEPTTABLE CHAR(40) A string that contains one of the following values:
v Text that you specify in the QUERYTYPE column of the
exception table.
v YES, if you put a row in the exception table for the object
that this result set row represents, but you specify NULL in
the QUERYTYPE column.
v NO, if the exception table exists but does not have a row for
the object that this result set row represents.
v Null, if the exception table does not exist, or if the ChkLvl
input parameter does not include the value 4.
ASSOCIATEDTS CHAR(8) If OBJECTTYPE is IX and the ChkLvl input parameter includes
the value 2, this value is the name of the table space that is
associated with the index space. Otherwise null.
COPYLASTTIME TIMESTAMP Timestamp of the last full image copy on the object. Null if
COPY was never run, or if the last COPY execution was
terminated.
LOADRLASTTIME TIMESTAMP Timestamp of the last LOAD REPLACE on the object. Null if
LOAD REPLACE was never run, or if the last LOAD
REPLACE execution was terminated.
REBUILDLASTTIME TIMESTAMP Timestamp of the last REBUILD INDEX on the object. Null if
REBUILD INDEX was never run, or if the last REBUILD
INDEX execution was terminated.
CRUPDPGSPCT INTEGER If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the ratio
of distinct updated pages to preformatted pages, expressed as
a percentage. Otherwise null.
CRCPYCHGPCT INTEGER If OBJECTTYPE is TS and IMAGECOPY is YES, the ratio of
the total number of insert, update, and delete operations since
the last image copy to the total number of rows or LOBs in the
table space or partition, expressed as a percentage. If
OBJECTTYPE is IX and IMAGECOPY is YES, the ratio of the
total number of insert and delete operations since the last
image copy to the total number of entries in the index space or
partition, expressed as a percentage. Otherwise null.
CRDAYSCELSTCPY INTEGER If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the
number of days since the last image copy. Otherwise null.
CRINDEXSIZE INTEGER If OBJECTTYPE is IX and IMAGECOPY is YES, the number of
active pages in the index space or partition. Otherwise null.
REORGLASTTIME TIMESTAMP Timestamp of the last REORG on the object. Null if REORG
was never run, or if the last REORG execution was terminated.
RRTINSDELUPDPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the sum
of insert, update, and delete operations since the last REORG
to the total number of rows or LOBs in the table space or
partition, expressed as a percentage. Otherwise null.
RRTUNCINSPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the
number of unclustered insert operations to the total number of
rows or LOBs in the table space or partition, expressed as a
percentage. Otherwise null.
RRTDISORGLOBPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the
number of imperfectly chunked LOBs to the total number of
rows or LOBs in the table space or partition, expressed as a
percentage. Otherwise null.
Table 162. Result set row for second DSNACCOR result set (continued)
Column name Data type Description
RRTMASSDELETE INTEGER If OBJECTTYPE is TS, REORG is YES, and the table space is a
segmented table space or LOB table space, the number of mass
deletes since the last REORG or LOAD REPLACE. If
OBJECTTYPE is TS, REORG is YES, and the table space is
nonsegmented, the number of dropped tables since the last
REORG or LOAD REPLACE. Otherwise null.
RRTINDREF INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the total
number of overflow records that were created since the last
REORG or LOAD REPLACE to the total number of rows or
LOBs in the table space or partition, expressed as a percentage.
Otherwise null.
RRIINSDELPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the total
number of insert and delete operations since the last REORG
to the total number of index entries in the index space or
partition, expressed as a percentage. Otherwise null.
RRIAPPINSPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index entries that were inserted since the last
REORG, REBUILD INDEX, or LOAD REPLACE that had a key
value greater than the maximum key value in the index space
or partition, to the number of index entries in the index space
or partition, expressed as a percentage. Otherwise null.
RRIPSDDELPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index entries that were pseudo-deleted (the RID
entry was marked as deleted) since the last REORG, REBUILD
INDEX, or LOAD REPLACE to the number of index entries in
the index space or partition, expressed as a percentage.
Otherwise null.
RRIMASSDELETE INTEGER If OBJECTTYPE is IX and REORG is YES, the number of mass
deletes from the index space or partition since the last REORG,
REBUILD, or LOAD REPLACE. Otherwise null.
RRILEAF INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index page splits that occurred since the last
REORG, REBUILD INDEX, or LOAD REPLACE in which the
higher part of the split page was far from the location of the
original page, to the total number of active pages in the index
space or partition, expressed as a percentage. Otherwise null.
RRINUMLEVELS INTEGER If OBJECTTYPE is IX and REORG is YES, the number of levels
in the index tree that were added or removed since the last
REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise
null.
STATSLASTTIME TIMESTAMP Timestamp of the last RUNSTATS on the object. Null if
RUNSTATS was never run, or if the last RUNSTATS execution
was terminated.
SRTINSDELUPDPCT INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the ratio of the
total number of insert, update, and delete operations since the
last RUNSTATS on a table space or partition, to the total
number of rows or LOBs in the table space or partition,
expressed as a percentage. Otherwise null.
SRTINSDELUPDABS INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the total number
of insert, update, and delete operations since the last
RUNSTATS on a table space or partition. Otherwise null.
SRTMASSDELETE INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the number of
mass deletes from the table space or partition since the last
REORG or LOAD REPLACE. Otherwise null.
SRIINSDELPCT INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the ratio of the
total number of insert and delete operations since the last
RUNSTATS on the index space or partition, to the total
number of index entries in the index space or partition,
expressed as a percentage. Otherwise null.
SRIINSDELABS INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the number of
insert and delete operations since the last RUNSTATS on the
index space or partition. Otherwise null.
SRIMASSDELETE INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the number of
mass deletes from the index space or partition since the last
REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise,
this value is null.
TOTALEXTENTS SMALLINT If EXTENTS is YES, the number of physical extents in the table
space, index space, or partition. Otherwise, this value is null.
Use the DISPLAY DATABASE command to display the current status for an object.
In addition to these states, the output from the DISPLAY DATABASE command
might also indicate that an object is in logical page list (LPL) status. This state
means that the pages that are listed in the LPL PAGES column are logically in
error and are unavailable for access. DB2 writes entries for these pages in an LPL.
For more information about an LPL and on how to remove pages from the LPL,
see Part 4 of DB2 Administration Guide.
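For example, a command like the following sketch (the database name is
illustrative) displays all table spaces and index spaces in a database that are in a
restrictive status; specify the LPL keyword instead of RESTRICT to display only the
logical page list entries:
-DISPLAY DATABASE(DSN8D91A) SPACENAM(*) RESTRICT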
Refer to Table 163 on page 896 for information about resetting the auxiliary
CHECK-pending status. This table lists the status name, abbreviation, affected
object, and any corrective actions.
Table 163. Resetting auxiliary CHECK-pending status
Status Abbreviation Object affected Corrective action Notes
| Auxiliary ACHKP Base table space 1. Update or delete invalid LOBs and XML 1
| CHECK- objects using SQL.
pending
| 2. Run the CHECK DATA utility with the
| appropriate SCOPE option to verify the
| validity of LOBs and XML objects and reset
| ACHKP status.
Notes:
1. A base table space in the ACHKP status is unavailable for processing by SQL.
The RECOVER utility also sets AUXW status if it finds an invalid LOB column.
Invalid LOB columns might result from a situation in which all the following
actions occur:
1. LOB table space was defined with LOG NO.
2. LOB table space was recovered.
3. LOB was updated since the last image copy.
Refer to Table 164 for information about resetting the auxiliary warning status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 164. Resetting auxiliary warning status
Status Abbreviation Object affected Corrective action Notes
| Auxiliary AUXW Base table space 1. Update or delete invalid LOBs and XML 1,2,3
| warning objects using SQL.
| 2. If an orphan LOB exists or a version
| mismatch exists between the base table and
| the auxiliary index, use REPAIR to delete the
| LOB from the LOB table space.
| 3. Run CHECK DATA utility to verify the
| validity of LOBs and XML objects and reset
| AUXW status.
| Auxiliary AUXW LOB table space 1. Update or delete invalid LOBs and XML 1
| warning objects using SQL.
| 2. If an orphan LOB exists or a version
| mismatch exists between the base table and
| the auxiliary index, use REPAIR to delete the
| LOB from the LOB table space.
| 3. Run CHECK LOB utility to verify the
| validity of LOBs and XML objects and reset
| AUXW status.
Notes:
1. A base table space or LOB table space in the AUXW status is available for processing by SQL, even though it
contains invalid LOBs. However, an attempt to retrieve an invalid LOB results in a -904 SQL return code.
2. DB2 can access all rows of a base table space that are in the AUXW status. SQL can update the invalid LOB
column and delete base table rows, but the value of the LOB column cannot be retrieved. If DB2 attempts to
access an invalid LOB column, a -904 SQL code is returned. The AUXW status remains on the base table space
even when SQL deletes or updates the last invalid LOB column.
3. If CHECK DATA AUXERROR REPORT encounters only invalid LOB columns and no other LOB column errors,
the base table space is set to the auxiliary warning status.
CHECK-pending status
The CHECK-pending (CHKP) restrictive status indicates that an object might be in
an inconsistent state and must be checked.
The following utilities set the CHECK-pending status on a table space if referential
integrity constraints are encountered:
v LOAD with ENFORCE NO
v RECOVER to a point in time
v CHECK LOB
The CHECK-pending status can also affect a base table space or a LOB table space.
DB2 ignores informational referential integrity constraints and does not set
CHECK-pending status for them.
Refer to Table 165 for information about resetting the CHECK-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 165. Resetting CHECK-pending status
Status Abbreviation Object affected Corrective action Notes
CHECK- CHKP Table space, base table Check and correct referential integrity
pending space constraints using the CHECK DATA utility.
Notes:
| 1. An index might be placed in the CHECK-pending status if you recovered an index to a specific RBA or LRSN
| from a copy and applied the log records, but you did not recover the table space in the same list. The
| CHECK-pending status can also be placed on an index if you specified the table space and the index, but the
| RECOVER point in time was not a QUIESCE or COPY SHRLEVEL REFERENCE point.
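For example, a CHECK DATA statement like the following sketch (the database and
table space names are from the DB2 sample database and are illustrative) checks all
rows and, if no violations are found, resets the CHECK-pending status:
CHECK DATA TABLESPACE DSN8D91A.DSN8S91E
  SCOPE ALL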
COPY-pending status
The COPY-pending (COPY) restrictive status indicates that the affected object must
be copied.
Refer to Table 166 for information about resetting the COPY-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 166. Resetting COPY-pending status
Status Abbreviation Object affected Corrective action Notes
COPY- COPY Table space, table space Take an image copy of the affected object.
pending partition
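For example, a COPY statement like the following sketch (the object name is
illustrative, and a SYSCOPY DD statement for the output data set is assumed to be
in the job) takes a full image copy and resets the COPY-pending status:
COPY TABLESPACE DSN8D91A.DSN8S91E
  COPYDDN(SYSCOPY)
  FULL YES
  SHRLEVEL REFERENCE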
| Refer to Table 167 on page 899 for information about resetting the DBET error
| status. This table lists the status name, abbreviation, affected objects, and any
| corrective actions.
Refer to Table 168 for information about resetting the group buffer pool
RECOVER-pending status. This table lists the status name, abbreviation, affected
objects, and any corrective actions.
Table 168. Resetting group buffer pool RECOVER-pending status
Status Abbreviation Object affected Corrective action Notes
Group buffer GRECP Object Recover the object, or use START DATABASE to
pool recover the object.
RECOVER-
pending
Refer to Table 169 for information about resetting the informational COPY-pending
status. This table lists the status name, abbreviation, affected objects, and any
corrective actions.
Table 169. Resetting informational COPY-pending status
Status Abbreviation Object affected Corrective action Notes
| Informational ICOPY NOT LOGGED table Copy the affected table space.
COPY- spaces
pending
Informational ICOPY Partitioning index, Copy the affected index.
COPY- nonpartitioning index,
pending index on the auxiliary
table
REBUILD-pending status
A REBUILD-pending restrictive status indicates that the affected index or index
partition is broken and must be rebuilt from the data.
If you alter the data type of a column to a numeric data type, RECOVER INDEX
cannot complete. You must rebuild the index.
Refer to Table 170 for information about resetting a REBUILD-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 170. Resetting REBUILD-pending status
Status Abbreviation Object affected Corrective action Notes
REBUILD- RBDP Physical or logical index Run the REBUILD utility on the affected index
pending partition partition.
REBUILD- RBDP* Logical partitions of Run REBUILD INDEX PART or RECOVER
pending star nonpartitioned secondary utility on the affected logical partitions.
indexes
Page set PSRBD Nonpartitioned Run REBUILD INDEX ALL, the RECOVER
REBUILD- secondary index, index utility, or run REBUILD INDEX listing all
pending on the auxiliary table indexes in the affected index space.
REBUILD- RBDP, RBDP*, all The following actions also reset the
pending or PSRBD REBUILD-pending status:
v Use LOAD REPLACE for the table space or
partition.
v Use REPAIR SET INDEX with NORBDPEND
on the index partition. Be aware that this
does not correct the data inconsistency in the
index partition. Use CHECK INDEX instead
of REPAIR to verify referential integrity
constraints.
v Start the database that contains the index
space with ACCESS FORCE. Be aware that
this does not correct the data inconsistency in
the index partition.
v Run REORG INDEX SORTDATA on the
affected index.
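For example, statements like the following sketches (the index and table space
names are illustrative) rebuild a single index partition or all indexes on a table
space:
REBUILD INDEX (DSN8910.XEMP1) PART 3
REBUILD INDEX (ALL) TABLESPACE DSN8D91A.DSN8S91E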
RECOVER-pending status
The RECOVER-pending (RECP) restrictive status indicates that a table space or
table space partition is broken and must be recovered.
Refer to Table 171 for information about resetting the RECOVER-pending status.
This table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 171. Resetting RECOVER-pending status
Status Abbreviation Object affected Corrective action Notes
RECOVER- RECP Table space Run the RECOVER utility on the affected object.
pending
RECOVER- RECP Table space partition Recover the partition.
pending
RECOVER- RECP Index on the auxiliary Correct the RECOVER-pending status by using
pending table one of the following utilities:
v REBUILD INDEX
v RECOVER INDEX
v REORG INDEX SORTDATA
RECOVER- RECP Index space Run one of the following utilities on the affected
pending index space to reset RECP, RBDP, RBDP*, or
PSRBD status:
v REBUILD INDEX
v RECOVER INDEX
v REORG INDEX SORTDATA
RECOVER- RECP Any The following actions also reset the
pending RECOVER-pending status:
v Use LOAD REPLACE for the table space or
partition.
v Use REPAIR SET TABLESPACE or INDEX
with NORCVRPEND on the table space or
partition. Be aware that this does not correct
the data inconsistency in the table space or
partition.
v Start the database that contains the table
space or index space with ACCESS FORCE.
Be aware that this does not correct the data
inconsistency in the table space or partition.
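For example, statements like the following sketches (the object names are
illustrative) recover an entire table space or a single partition:
RECOVER TABLESPACE DSN8D91A.DSN8S91E
RECOVER TABLESPACE DSN8D91A.DSN8S91E DSNUM 2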
REFRESH-pending status
Whenever DB2 marks an object in refresh-pending (REFP) status, it also puts the
object in RECOVER-pending (RECP) or REBUILD-pending (RBDP or PSRBD). If a
user-defined table space is in refresh-pending (REFP) status, you can replace the
data by using LOAD REPLACE. At the successful completion of the RECOVER or
LOAD REPLACE job, both statuses (REFP and RECP, or REFP and RBDP or
PSRBD) are reset.
REORG-pending status
The REORG-pending (REORP) restrictive status indicates that a table space
partition is broken and must be reorganized.
REORP status is set on the last partition of a partitioned table space if you perform
the following actions:
v Create a partitioned table space.
v Create a partitioning index.
v Insert a row into a table.
The REORG-pending (AREO*) advisory status indicates that a table space, index,
or partition needs to be reorganized for optimal performance.
Refer to Table 172 for information about resetting the REORG-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 172. Resetting REORG-pending status
Status Abbreviation Object affected Corrective action Notes
REORG- REORP Table space Perform one of the following actions:
pending v Use LOAD REPLACE for the entire table
space.
v Run the REORG TABLESPACE utility with
SHRLEVEL NONE.
If a table space is in both REORG-pending
and CHECK-pending status (or auxiliary
CHECK-pending status), run REORG first
and then run CHECK DATA to clear the
respective states.
v Run REORG PART m:n SHRLEVEL NONE.
REORG- REORP Partitioned table space For row lengths <= 32 KB:
pending 1. Run REORG TABLESPACE SHRLEVEL
NONE SORTDATA.
Notes:
1. You can reset AREO* for a specific partition without being restricted by
another AREO* for an adjacent partition. When you run REPAIR VERSIONS,
the utility resets the status and updates the version information in
SYSTABLEPART for table spaces and SYSINDEXES for indexes.
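For example, a statement like the following sketch (the object name is illustrative)
reorganizes the table space and resets the REORG-pending status:
REORG TABLESPACE DSN8D91A.DSN8S91E
  SHRLEVEL NONE SORTDATA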
Restart-pending status
The restart-pending (RESTP) status is set on an object that has backout work pending
at the end of DB2 restart.
Refer to Table 173 for information about resetting the restart-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 173. Resetting restart-pending status
Status Abbreviation Object affected Corrective action Notes
Restart- RESTP Table space, table space Objects in the RESTP status remain unavailable 1,2,3
pending partitions, index spaces, until backout work is complete, or until restart
and physical index space is canceled and a conditional restart or cold
partitions start is performed in its place. See Part 4 of DB2
Administration Guide for information about the
RESTP restrictive status.
Notes:
1. Delay running REORG TABLESPACE SHRLEVEL CHANGE until all RESTP statuses are reset.
2. You cannot use LOAD REPLACE on an object that is in the RESTP status.
3. Utility activity against page sets or partitions with RESTP status is not allowed. Any attempt to access a page set
or partition with RESTP status terminates with return code 8.
Because these four programs (DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4)
also accept the static SQL statements CONNECT, SET CONNECTION, and
RELEASE, you can use the programs to access DB2 tables at remote locations.
Retrieval of UTF-16 Unicode data: You can use DSNTEP2, DSNTEP4, and
DSNTIAUL to retrieve Unicode UTF-16 graphic data. However, these programs
might not be able to display some characters, if those characters have no mapping
in the target SBCS EBCDIC CCSID.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must
precompile, assemble, link, and bind them before you can use them. If you want to
use the source code version of DSNTEP2 or DSNTEP4, you must precompile,
compile, link, and bind it. You need to bind the object code version of DSNTEP2 or
DSNTEP4 before you can use it. Usually a system administrator prepares the
programs as part of the installation process. Table 174 on page 906 indicates which
installation job prepares each sample program. All installation jobs are in data set
DSN910.SDSNSAMP.
Table 174. Jobs that prepare DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Program preparation job
DSNTIAUL DSNTEJ2A
DSNTIAD DSNTIJTM
DSNTEP2 (source) DSNTEJ1P
DSNTEP2 (object) DSNTEJ1L
DSNTEP4 (source) DSNTEJ1P
DSNTEP4 (object) DSNTEJ1L
To run the sample programs, use the DSN RUN command. For more information
about the DSN RUN command, see the topic “RUN (DSN)” in DB2 Command
Reference.
Table 175 lists the load module name and plan name that you must specify, and
the parameters that you can specify when you run each program. See the following
sections for the meaning of each parameter.
Table 175. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Load module Plan Parameters
DSNTIAUL DSNTIAUL DSNTIB91 SQL
number of rows per fetch
TOLWARN(NO|YES)
DSNTIAD DSNTIAD DSNTIA91 RC0
SQLTERM(termchar)
DSNTEP2 DSNTEP2 DSNTEP91 ALIGN(MID)
or ALIGN(LHS)
NOMIXED or MIXED
SQLTERM(termchar)
TOLWARN(NO|YES)
| PREPWARN
DSNTEP4 DSNTEP4 DSNTP491 ALIGN(MID)
or ALIGN(LHS)
NOMIXED or MIXED
SQLTERM(termchar)
TOLWARN(NO|YES)
| PREPWARN
The remainder of this chapter contains the following information about running
each program:
v Descriptions of the input parameters
v Data sets that you must allocate before you run the program
v Return codes from the program
v Examples of invocation
See the sample jobs that are listed in Table 174 for a working example of each
program.
Running DSNTIAUL
This topic contains information that you need when you run DSNTIAUL,
including parameters, data sets, return codes, and invocation examples.
To retrieve data from a remote site by using the multi-row fetch capability for
enhanced performance, bind DSNTIAUL with the DBPROTOCOL(DRDA) option.
To run DSNTIAUL remotely when it is bound with the DBPROTOCOL(PRIVATE)
option, switch DSNTIAUL to single-row fetch mode by specifying 1 for the
number of rows per fetch parameter.
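A bind subcommand like the following sketch rebinds the plan with the DRDA
protocol; the plan name is taken from Table 175, and the DBRM member name and
the other options are assumptions about your installation's bind job:
BIND PLAN(DSNTIB91) MEMBER(DSNTIAUL) -
     ACTION(REPLACE) DBPROTOCOL(DRDA)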
DSNTIAUL parameters:
SQL
Specify SQL to indicate that your input data set contains one or more complete
SQL statements, each of which ends with a semicolon. You can include any
SQL statement that can be executed dynamically in your input data set. In
addition, you can include the static SQL statements CONNECT, SET
CONNECTION, or RELEASE. DSNTIAUL uses the SELECT statements to
determine which tables to unload and dynamically executes all other
statements except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL
executes CONNECT, SET CONNECTION, and RELEASE statically to connect
to remote locations.
number of rows per fetch
Specify a number from 1 to 32767 to indicate the number of rows per fetch that
DSNTIAUL retrieves. If you do not specify this number, DSNTIAUL retrieves
100 rows per fetch. This parameter can be specified with the SQL parameter.
Specify 1 to retrieve data from a remote site when DSNTIAUL is bound with
the DBPROTOCOL(PRIVATE) option.
TOLWARN
Specify NO (the default) or YES to indicate whether DSNTIAUL continues to
retrieve rows after receiving an SQL warning:
NO If a warning occurs when DSNTIAUL executes an OPEN or FETCH to
retrieve rows, DSNTIAUL stops retrieving rows.
Exception: If the SQLWARN1, SQLWARN2, SQLWARN6, or SQLWARN7
flag is set when DSNTIAUL executes a FETCH to retrieve rows,
DSNTIAUL continues to retrieve rows.
YES If a warning occurs when DSNTIAUL executes an OPEN or FETCH to
retrieve rows, DSNTIAUL continues to retrieve rows.
| LOBFILE(prefix)
| Specify LOBFILE to indicate that you want DSNTIAUL to dynamically allocate
| data sets, each to receive the full content of a LOB cell. (A LOB cell is the
| intersection of a row and a LOB column.) If you do not specify the LOBFILE
| option, you can unload up to only 32 KB of data from a LOB column.
| prefix
| Specify a high-level qualifier for these dynamically allocated data sets. You
| can specify up to 17 characters. The qualifier must conform with the rules
| for TSO data set names.
| DSNTIAUL uses a naming convention for these dynamically allocated data sets
| of prefix.Qiiiiiii.Cjjjjjjj.Rkkkkkkk, where these qualifiers have the following
| values:
| prefix
| The high-level qualifier that you specify in the LOBFILE option.
| Qiiiiiii
| The sequence number (starting from 0) of a query that returns one or more
| LOB columns
| Cjjjjjjj
| The sequence number (starting from 0) of a column in a query that returns
| one or more LOB columns
| Rkkkkkkk
| The sequence number (starting from 0) of a row of a result set that has one
| or more LOB columns.
| The generated LOAD statement contains LOB file reference variables that can
| be used to load data from these dynamically allocated data sets.
If you do not specify the SQL parameter, your input data set must contain one or
more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause
SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a
SELECT statement for each input statement by appending your input line to
SELECT * FROM, then uses the result to determine which tables to unload. For this
input format, the text for each table specification can be a maximum of 72 bytes
and must not span multiple lines.
You can use the input statements to specify SELECT statements that join two or
more tables or select specific columns from a table. If you specify columns, you
need to modify the LOAD statement that DSNTIAUL generates.
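For example, input statements in this format might look like the following sketch;
the sample table names and the WHERE clause are illustrative:
DSN8910.DEPT
DSN8910.EMP WHERE WORKDEPT = 'D11' ORDER BY EMPNO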
Define all data sets as sequential data sets. You can specify the record length and
block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
Example of using DSNTIAUL to unload rows in more than one table: Suppose that
you also want to use DSNTIAUL to perform the following actions:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with
department numbers that begin with D, and order the unloaded rows by
employee number
v Lock both tables in share mode before you unload them
v Retrieve 250 rows per fetch
For these activities, you must specify the SQL parameter and specify the number of
rows per fetch when you run DSNTIAUL. Your DSNTIAUL invocation is shown in
Figure 164 on page 910; a sketch of a comparable invocation follows.
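The following sketch shows what such an invocation might look like; the data set
names, space allocations, and the use of the DSN8910 sample tables are illustrative
assumptions:
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//* Unload the sample PROJ table and selected EMP rows (illustrative)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) -
    PARMS('SQL,250') -
    LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG)
//SYSREC01 DD DSN=DSN8UNLD.SYSREC01,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG)
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG)
//SYSIN    DD *
 LOCK TABLE DSN8910.PROJ IN SHARE MODE;
 LOCK TABLE DSN8910.EMP IN SHARE MODE;
 SELECT * FROM DSN8910.PROJ;
 SELECT * FROM DSN8910.EMP
   WHERE WORKDEPT LIKE 'D%'
   ORDER BY EMPNO;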
| Example of using DSNTIAUL to unload LOB data: This example uses the sample
| LOB table with the following structure:
| CREATE TABLE DSN8910.EMP_PHOTO_RESUME
| ( EMPNO CHAR(06) NOT NULL,
| EMP_ROWID ROWID NOT NULL GENERATED ALWAYS,
| PSEG_PHOTO BLOB(500K),
| BMP_PHOTO BLOB(100K),
| RESUME CLOB(5K),
| PRIMARY KEY (EMPNO))
| IN DSN8D91L.DSN8S91B
| CCSID EBCDIC;
| The following call to DSNTIAUL unloads the sample LOB table. The parameters
| for DSNTIAUL indicate the following options:
| v The input data set (SYSIN) contains SQL.
| v DSNTIAUL is to retrieve 2 rows per fetch.
| v DSNTIAUL places the LOB data in data sets with a high-level qualifier of
| DSN8UNLD.
| //UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
| //SYSTSPRT DD SYSOUT=*
| //SYSTSIN DD *
| DSN SYSTEM(DSN)
| RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) -
| PARMS(’SQL,2,LOBFILE(DSN8UNLD)’) -
| LIB(’DSN910.RUNLIB.LOAD’)
| //SYSPRINT DD SYSOUT=*
| //SYSUDUMP DD SYSOUT=*
| //SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
| // UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
| // VOL=SER=SCR03,RECFM=FB
| //SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
| // UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
| // VOL=SER=SCR03,RECFM=FB
| //SYSIN DD *
| SELECT * FROM DSN8910.EMP_PHOTO_RESUME;
| Given that the sample LOB table has 4 rows of data, DSNTIAUL produces the
| following output:
| v Data for columns EMPNO and EMP_ROWID are placed in the data set that is
| allocated according to the SYSREC00 DD statement. The data set name is
| DSN8UNLD.SYSREC00
| v A generated LOAD statement is placed in the data set that is allocated according
| to the SYSPUNCH DD statement. The data set name is DSN8UNLD.SYSPUNCH
| v The following data sets are dynamically created to store LOB data:
| – DSN8UNLD.Q0000000.C0000002.R0000000
| – DSN8UNLD.Q0000000.C0000002.R0000001
| – DSN8UNLD.Q0000000.C0000002.R0000002
| – DSN8UNLD.Q0000000.C0000002.R0000003
| – DSN8UNLD.Q0000000.C0000003.R0000000
| – DSN8UNLD.Q0000000.C0000003.R0000001
| – DSN8UNLD.Q0000000.C0000003.R0000002
| – DSN8UNLD.Q0000000.C0000003.R0000003
| – DSN8UNLD.Q0000000.C0000004.R0000000
| – DSN8UNLD.Q0000000.C0000004.R0000001
| – DSN8UNLD.Q0000000.C0000004.R0000002
| – DSN8UNLD.Q0000000.C0000004.R0000003
| For example, DSN8UNLD.Q0000000.C0000004.R0000001 means that the data set
| contains data that is unloaded from the second row (R0000001) and the fifth
| column (C0000004) of the result set for the first query (Q0000000).
Running DSNTIAD
This section contains information that you need when you run DSNTIAD,
including parameters, data sets, return codes, and invocation examples.
DSNTIAD parameters:
RC0
If you specify this parameter, DSNTIAD ends with return code 0, even if the
program encounters SQL errors. If you do not specify RC0, DSNTIAD ends
with a return code that reflects the severity of the errors that occur. Without
RC0, DSNTIAD terminates if more than 10 SQL errors occur during a single
execution.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any special character except one of those listed in
Table 177. SQLTERM(;) is the default.
Table 177. Invalid special characters for the SQL terminator
Name Character Hexadecimal representation
blank X'40'
comma , X'6B'
double quotation mark " X'7F'
left parenthesis ( X'4D'
right parenthesis ) X'5D'
single quotation mark ' X'7D'
underscore _ X'6D'
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that
the character # is the statement terminator. Then a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
DSNTEP2 and DSNTEP4 parameters:
ALIGN(MID)
Specifies that DSNTEP2 or DSNTEP4 output should be centered.
ALIGN(MID) is the default.
ALIGN(LHS)
Specifies that the DSNTEP2 or DSNTEP4 output should be left-justified.
NOMIXED or MIXED
Specifies whether the DSNTEP2 or DSNTEP4 input contains any DBCS characters.
NOMIXED
Specifies that the DSNTEP2 or DSNTEP4 input contains no DBCS
characters. NOMIXED is the default.
MIXED
Specifies that the DSNTEP2 or DSNTEP4 input contains some DBCS
characters.
| PREPWARN
| Specifies that DSNTEP2 or DSNTEP4 is to display the PREPARE
| SQLWARNING message and set the return code to 4 when an SQLWARNING
| is encountered at PREPARE.
SQLTERM(termchar)
Specifies the character that you use to end each SQL statement. You can use
any character except one of those that are listed in Table 177 on page 912.
SQLTERM(;) is the default.
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that
the character # is the statement terminator. Then a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
If you want to change the SQL terminator within a series of SQL statements,
you can use the --#SET TERMINATOR control statement.
Example: Suppose that you have an existing set of SQL statements to which
you want to add a CREATE TRIGGER statement that has embedded
semicolons. You can use the default SQLTERM value, which is a semicolon, for
all of the existing SQL statements. Before you execute the CREATE TRIGGER
statement, include the --#SET TERMINATOR # control statement to change the
SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
See the following discussion of the SYSIN data set for more information about
the --#SET control statement.
TOLWARN
Specify NO (the default) or YES to indicate whether DSNTEP2 or DSNTEP4
continues to process SQL SELECT statements after receiving an SQL warning:
NO If a warning occurs when DSNTEP2 or DSNTEP4 executes an OPEN or
FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 stops
processing the SELECT statement. If SQLCODE +445 or SQLCODE
+595 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a
SELECT statement, DSNTEP2 or DSNTEP4 continues to process the
SELECT statement. If SQLCODE +802 occurs when DSNTEP2 or
DSNTEP4 executes a FETCH for a SELECT statement, DSNTEP2 or
DSNTEP4 continues to process the SELECT statement if the
TOLARTHWRN control statement is set to YES.
YES If a warning occurs when DSNTEP2 or DSNTEP4 executes an OPEN or
FETCH for a SELECT statement, DSNTEP2 or DSNTEP4 continues to
process the SELECT statement.
TERMINATOR
The SQL statement terminator. value is any single
character other than one of those that are listed in Table 177 on
page 912. The default is the value of the SQLTERM parameter.
ROWS_FETCH
The number of rows that are to be fetched from the result
table. value is a numeric literal between -1 and the number of
rows in the result table. -1 means that all rows are to be
fetched. The default is -1.
ROWS_OUT
The number of fetched rows that are to be sent to the output
data set. value is a numeric literal between -1 and the number
of fetched rows. -1 means that all fetched rows are to be sent to
the output data set. The default is -1.
MULT_FETCH
This option is valid only for DSNTEP4. Use MULT_FETCH to
specify the number of rows that are to be fetched at one time
from the result table. The default fetch amount for DSNTEP4 is
100 rows, but you can specify from 1 to 32676 rows.
TOLWARN
Indicates whether DSNTEP2 and DSNTEP4 continue to process
an SQL SELECT after an SQL warning is returned. value is
either NO (the default) or YES.
| PREPWARN
| Indicates how DSNTEP2 and DSNTEP4 are to handle a
| PREPARE SQLWARNING message.
| NO
| Indicates that DSNTEP2 and DSNTEP4 do not display
| the PREPARE SQLWARNING message and do not set
| the return code to 4 when an SQLWARNING is
| encountered at PREPARE. The default is NO.
| YES
| Indicates that DSNTEP2 and DSNTEP4 display the
| PREPARE SQLWARNING message and set the return
| code to 4 when an SQLWARNING is encountered at
| PREPARE.
SYSPRINT Output data set. DSNTEP2 and DSNTEP4 write informational and
error messages in this data set. DSNTEP2 and DSNTEP4 write
output records of no more than 133 bytes.
Figure 167. DSNTEP2 invocation with the ALIGN(LHS) and MIXED parameters
Figure 168. DSNTEP4 invocation with the ALIGN(MID) and MIXED parameters and using the
MULT_FETCH control option
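A DSNTEP4 invocation of this kind might look like the following sketch; the JCL
details, the format of the parameter string, and the sample table name are
assumptions:
//TEP4     EXEC PGM=IKJEFT01,DYNAMNBR=20
//* Run DSNTEP4 with illustrative parameters and a MULT_FETCH control option
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(DSN)
 RUN PROGRAM(DSNTEP4) PLAN(DSNTP491) -
    LIB('DSN910.RUNLIB.LOAD') -
    PARMS('/ALIGN(MID) MIXED')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD *
 --#SET MULT_FETCH 500
 SELECT * FROM DSN8910.EMP;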
In a data sharing environment, each member has its own interval for writing
real-time statistics.
For complete descriptions of the contents of real-time statistics tables, see “DB2
catalog tables”in DB2 SQL Reference.
The table below shows how running LOAD affects the SYSINDEXSPACESTATS
statistics for an index space or physical index partition.
Table 181. Changed SYSINDEXSPACESTATS values during LOAD
Column name Settings for LOAD REPLACE after BUILD phase
TOTALENTRIES Number of index entries added1
NLEVELS Actual value
NACTIVE Actual value
SPACE Actual value
EXTENTS Actual value
LOADRLASTTIME Current timestamp
REORGINSERTS 0
REORGDELETES 0
REORGAPPENDINSERT 0
REORGPSEUDODELETES 0
REORGMASSDELETE 0
REORGLEAFNEAR 0
REORGLEAFFAR 0
REORGNUMLEVELS 0
STATSLASTTIME Current timestamp2
STATSINSERTS 02
STATSDELETES 02
STATSMASSDELETE 02
COPYLASTTIME Current timestamp3
COPYUPDATEDPAGES 03
COPYCHANGES 03
COPYUPDATELRSN Null3
COPYUPDATETIME Null3
Notes:
1. Under certain conditions, such as a utility restart, the LOAD utility might not have an
accurate count of loaded records. In those cases, DB2 sets this value to null.
2. DB2 sets this value only if the LOAD invocation includes the STATISTICS option.
3. DB2 sets this value only if the LOAD invocation includes the COPYDDN option.
The table below shows how running REORG affects the SYSINDEXSPACESTATS
statistics for an index space or physical index partition.
Table 183. Changed SYSINDEXSPACESTATS values during REORG
Column name Settings for REORG SHRLEVEL NONE after RELOAD phase Settings for REORG SHRLEVEL REFERENCE or CHANGE after SWITCH phase
TOTALENTRIES Number of index entries added1 For SHRLEVEL REFERENCE: Number of added index entries during BUILD phase
For a logical index partition, DB2 does not reset the nonpartitioned index when it
does a REORG on a partition. Therefore, DB2 does not reset the statistics for the
index. The REORG counters and REORGLASTTIME are relative to the last time the
entire nonpartitioned index is reorganized. In addition, the REORG counters might
be low because, due to the methodology, some index entries are changed during
REORG of a partition.
The table below shows how running RUNSTATS UPDATE ALL on a table space or
table space partition affects the SYSTABLESPACESTATS statistics.
Table 185. Changed SYSTABLESPACESTATS values during RUNSTATS UPDATE ALL
Column name During UTILINIT phase After RUNSTATS phase
STATSLASTTIME Current timestamp1 Timestamp of the start of
RUNSTATS phase
STATSINSERTS Actual value1 Actual value2
STATSDELETES Actual value1 Actual value2
STATSUPDATES Actual value1 Actual value2
STATSMASSDELETE Actual value1 Actual value2
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
The table below shows how running RUNSTATS UPDATE ALL on an index affects
the SYSINDEXSPACESTATS statistics.
Table 186. Changed SYSINDEXSPACESTATS values during RUNSTATS UPDATE ALL
Column name During UTILINIT phase After RUNSTATS phase
STATSLASTTIME Current timestamp1 Timestamp of the start of
RUNSTATS phase
STATSINSERTS Actual value1 Actual value2
STATSDELETES Actual value1 Actual value2
STATSMASSDELETE Actual value1 Actual value2
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
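For example, a RUNSTATS invocation of the kind described above might look like
the following sketch (the object names are illustrative):
RUNSTATS TABLESPACE DSN8D91A.DSN8S91E
  TABLE(ALL) INDEX(ALL)
  SHRLEVEL REFERENCE
  UPDATE ALL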
The table below shows how running COPY on a table space or table space
partition affects the SYSTABLESPACESTATS statistics.
The table below shows how running COPY on an index affects the
SYSINDEXSPACESTATS statistics.
Table 188. Changed SYSINDEXSPACESTATS values during COPY
Column name During UTILINIT phase After COPY phase
COPYLASTTIME Current timestamp1 Timestamp of the start of
COPY phase
COPYUPDATEDPAGES Actual value1 Actual value2
COPYCHANGES Actual value1 Actual value2
COPYUPDATELRSN Actual value1 Actual value3
COPYUPDATETIME Actual value1 Actual value3
Note:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
If a row still exists in the real-time statistics tables for a dropped table space or
index, and if you create a new object with the same DBID and PSID as the
dropped object, DB2 reinitializes the row before it updates any values in that row.
UPDATE: When you perform an UPDATE, DB2 increments the update counters.
INSERT: When you perform an INSERT, DB2 increments the insert counters. DB2
keeps separate counters for clustered and unclustered INSERTs.
DELETE: When you perform a DELETE, DB2 increments the delete counters.
ROLLBACK: When you roll back INSERT, UPDATE, or DELETE operations, DB2
increments the counters. Notice that for INSERT and DELETE, the counter for the
inverse operation is incremented. For example, if two INSERT statements are rolled
back, the delete counter is incremented by 2.
If an update to a partitioning key does not cause rows to move to a new partition,
the counts are accumulated as expected.
Mass DELETE: Performing a mass delete operation on a table space does not cause
DB2 to reset the counter columns in the real-time statistics tables. After a mass
delete operation, the value in a counter column includes the count from a time
prior to the mass delete operation, as well as the count after the mass delete
operation.
DB2 does locking based on the lock size of the DSNRTSDB.DSNRTSTS table space.
DB2 uses cursor stability isolation and CURRENTDATA(YES) when it reads the
statistics tables.
At the beginning of a RUNSTATS job, all data sharing members externalize their
statistics to the real-time statistics tables and reset their in-memory statistics. If all
members cannot externalize their statistics, DB2 sets STATSLASTTIME to null. An
error in gathering and externalizing statistics does not prevent RUNSTATS from
running.
Utilities that reset page sets to empty can invalidate the in-memory statistics of
other DB2 members. The member that resets a page set notifies the other DB2
members that a page set has been reset to empty, and the in-memory statistics are
invalidated. If the notify process fails, the utility that resets the page set does not
fail. DB2 sets the appropriate timestamp (REORGLASTTIME, STATSLASTTIME, or
COPYLASTTIME) to null in the row for the empty page set to indicate that the
statistics for that page set are unknown.
Statistics accuracy
In general, the real-time statistics are accurate values. However, several factors can
affect the accuracy of the statistics:
v Certain utility restart scenarios
v Certain utility operations that leave indexes in a database restrictive state, such
as RECOVER-pending (RECP)
Always consider the database restrictive state of objects before accepting a utility
recommendation that is based on real-time statistics.
v A DB2 subsystem failure
v A notify failure in a data sharing environment
If you think that some statistics values might be inaccurate, you can correct the
statistics by running REORG, RUNSTATS, or COPY on the objects for which DB2
generated the statistics.
All characters in all records are in the same CCSID. If EBCDIC or ASCII data
contains DBCS characters, the data must be in an appropriate mixed CCSID. If the
data is Unicode, it must be in CCSID 1208.
Figure 169 describes the format of delimited files that can be loaded into or
unloaded from tables by using the LOAD and UNLOAD utilities.
Character string delimiter ::= Character specified by CHARDEL option; the default
value is a double quotation mark (")
Restrictions
For delimiter restrictions, see “Loading delimited files” on page 261 or “Unloading
delimited files” on page 708.
Notes:
1. Field specifications of INTEGER or SMALLINT are treated as INTEGER EXTERNAL.
2. Field specifications of DECIMAL, DECIMAL PACKED, or DECIMAL ZONED are treated
as DECIMAL EXTERNAL.
3. Field specifications of FLOAT, REAL, or DOUBLE are treated as FLOAT EXTERNAL.
4. EBCDIC graphic data must be enclosed in shift-out and shift-in characters.
The following rows show the default delimiters, with a double quotation mark (")
as the character string delimiter and a comma as the column delimiter:
"Smith, Bob",4973,15.46
"Jones, Bill",12345,16.34
"Williams, Sam",452,193.78
The following rows show the same data with a semicolon as the column delimiter
and no character string delimiter:
Smith, Bob;4973;15.46
Jones, Bill;12345;16.34
Williams, Sam;452;193.78
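For example, a LOAD statement that reads the semicolon-delimited rows shown
above into a three-column table might look like the following sketch; the table
name is a hypothetical example:
LOAD DATA INDDN SYSREC
  FORMAT DELIMITED COLDEL ';'
  INTO TABLE MYUSER.SALESREP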
| If you are new to DB2 for z/OS, Introduction to DB2 for z/OS provides a
| comprehensive introduction to DB2 Version 9.1 for z/OS. Topics included in this
| book explain the basic concepts that are associated with relational database
| management systems in general, and with DB2 for z/OS in particular.
The most rewarding task associated with a database management system is asking
questions of it and getting answers, the task called end use. Other tasks are also
necessary—defining the parameters of the system, putting the data in place, and so
on. The tasks that are associated with DB2 are grouped into the following major
categories.
Installation: If you are involved with DB2 only to install the system, DB2
Installation Guide might be all you need.
If you will be using data sharing capabilities you also need DB2 Data Sharing:
Planning and Administration, which describes installation considerations for data
sharing.
End use: End users issue SQL statements to retrieve data. They can also insert,
update, or delete data, with SQL statements. They might need an introduction to
SQL, detailed instructions for using SPUFI, and an alphabetized reference to the
types of SQL statements. This information is found in DB2 Application Programming
and SQL Guide, and DB2 SQL Reference.
End users can also issue SQL statements through the DB2 Query Management
Facility (QMF) or some other program, and the library for that licensed program
might provide all the instruction or reference material they need. For a list of the
titles in the DB2 QMF library, see the bibliography at the end of this book.
Application programming: Some users access DB2 without knowing it, using
programs that contain SQL statements. DB2 application programmers write those
programs. Because they write SQL statements, they need the same resources that
end users do.
The material needed for writing a host program containing SQL is in DB2
Application Programming and SQL Guide.
The material needed for writing applications that use JDBC and SQLJ to access
DB2 servers is in DB2 Application Programming Guide and Reference for Java. The
material needed for writing applications that use DB2 CLI or ODBC to access DB2
| servers is in DB2 ODBC Guide and Reference. The material needed for working with
| XML data in DB2 is in DB2 XML Guide. For handling errors, see DB2 Messages and
DB2 Codes.
If you will be working in a distributed environment, you will need DB2 Reference
for Remote DRDA Requesters and Servers.
If you will be using the RACF access control module for DB2 authorization
checking, you will need DB2 RACF Access Control Module Guide.
If you are involved with DB2 only to design the database, or plan operational
procedures, you need DB2 Administration Guide. If you also want to carry out your
own plans by creating DB2 objects, granting privileges, running utility jobs, and so
on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create, alter,
and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands
If you will be using data sharing, you need DB2 Data Sharing: Planning and
Administration, which describes how to plan for and implement data sharing.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION ″AS IS″ WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
Company, product, or service names identified in the DB2 Version 9.1 for z/OS
information may be trademarks or service marks of International Business
Machines Corporation or other companies. Information about the trademarks of
IBM Corporation in the United States, other countries, or both is located at
http://www.ibm.com/legal/copytrade.shtml.
Glossary
abend See abnormal end of task.
abend reason code
A 4-byte hexadecimal code that uniquely identifies a problem with DB2.
abnormal end of task (abend)
Termination of a task, job, or subsystem because of an error condition that
recovery facilities cannot resolve during execution.
access method services
| The facility that is used to define, alter, delete, print, and reproduce VSAM
| key-sequenced data sets.
access path
The path that is used to locate data that is specified in SQL statements. An
access path can be indexed or sequential.
active log
The portion of the DB2 log to which log records are written as they are
generated. The active log always contains the most recent log records. See
also archive log.
address space
| A range of virtual storage pages that is identified by a number (ASID) and
| a collection of segment and page tables that map the virtual pages to real
| pages of the computer’s memory.
address space connection
The result of connecting an allied address space to DB2. See also allied
address space and task control block.
address space identifier (ASID)
A unique system-assigned identifier for an address space.
| AFTER trigger
| A trigger that is specified to be activated after a defined trigger event (an
| insert, update, or delete operation on the table that is specified in a trigger
| definition). Contrast with BEFORE trigger and INSTEAD OF trigger.
agent In DB2, the structure that associates all processes that are involved in a
DB2 unit of work. See also allied agent and system agent.
aggregate function
An operation that derives its result by using values from one or more
rows. Contrast with scalar function.
| alias An alternative name that can be used in SQL statements to refer to a table
| or view in the same or a remote DB2 subsystem. An alias can be qualified
| with a schema qualifier and can thereby be referenced by other users.
| Contrast with synonym.
allied address space
An area of storage that is external to DB2 and that is connected to DB2. An
allied address space can request DB2 services. See also address space.
allied agent
An agent that represents work requests that originate in allied address
spaces. See also system agent.
| BEFORE trigger
| A trigger that is specified to be activated before a defined trigger event (an
| insert, an update, or a delete operation on the table that is specified in a
| trigger definition). Contrast with AFTER trigger and INSTEAD OF trigger.
binary large object (BLOB)
| A binary string data type that contains a sequence of bytes that can range
| in size from 0 bytes to 2 GB, less 1 byte. This string does not have an
| associated code page and character set. BLOBs can contain, for example,
| image, audio, or video data. In general, BLOB values are used whenever a
| binary string might exceed the limits of the VARBINARY type.
binary string
| A sequence of bytes that is not associated with a CCSID. Binary string data
| type can be further classified as BINARY, VARBINARY, or BLOB.
| bind A process by which a usable control structure with SQL statements is
| generated; the structure is often called an access plan, an application plan,
| or a package. During this bind process, access paths to the data are
| selected, and some authorization checking is performed. See also automatic
| bind.
bit data
| v Data with character type CHAR or VARCHAR that is defined with the
| FOR BIT DATA clause. Note that using BINARY or VARBINARY rather
| than FOR BIT DATA is highly recommended.
| v A form of character data. Binary data is generally more highly
| recommended than character-for-bit data.
BLOB See binary large object.
block fetch
| A capability in which DB2 can retrieve, or fetch, a large set of rows
| together. Using block fetch can significantly reduce the number of
| messages that are being sent across the network. Block fetch applies only
| to non-rowset cursors that do not update data.
bootstrap data set (BSDS)
A VSAM data set that contains name and status information for DB2 and
RBA range specifications for all active and archive log data sets. The BSDS
also contains passwords for the DB2 directory and catalog, and lists of
conditional restart and checkpoint records.
BSAM
See basic sequential access method.
BSDS See bootstrap data set.
buffer pool
| An area of memory into which data pages are read, modified, and held
| during processing.
built-in data type
| A data type that IBM supplies. Among the built-in data types for DB2 for
| z/OS are string, numeric, XML, ROWID, and datetime. Contrast with
| distinct type.
built-in function
| A function that is generated by DB2 and that is in the SYSIBM schema.
central processor complex (CPC)
A physical collection of hardware that consists of main storage, one or
more central processors, timers, and channels.
central processor (CP)
The part of the computer that contains the sequencing and processing
facilities for instruction execution, initial program load, and other machine
operations.
CFRM See coupling facility resource management.
CFRM policy
The allocation rules for a coupling facility structure that are declared by a
z/OS administrator.
character conversion
The process of changing characters from one encoding scheme to another.
Character Data Representation Architecture (CDRA)
An architecture that is used to achieve consistent representation,
processing, and interchange of string data.
character large object (CLOB)
| A character string data type that contains a sequence of bytes that
| represent characters (single-byte, multibyte, or both) that can range in size
| from 0 bytes to 2 GB, less 1 byte. In general, CLOB values are used
| whenever a character string might exceed the limits of the VARCHAR
| type.
character set
A defined set of characters.
character string
| A sequence of bytes that represent bit data, single-byte characters, or a
| mixture of single-byte and multibyte characters. Character data can be
| further classified as CHARACTER, VARCHAR, or CLOB.
check constraint
A user-defined constraint that specifies the values that specific columns of
a base table can contain.
check integrity
The condition that exists when each row in a table conforms to the check
constraints that are defined on that table.
check pending
A state of a table space or partition that prevents its use by some utilities
and by some SQL statements because of rows that violate referential
constraints, check constraints, or both.
checkpoint
A point at which DB2 records status information on the DB2 log; the
recovery process uses this information if DB2 abnormally terminates.
child lock
For explicit hierarchical locking, a lock that is held on either a table, page,
row, or a large object (LOB). Each child lock has a parent lock. See also
parent lock.
CI See control interval.
CICS Represents (in this information): CICS Transaction Server for z/OS:
Customer Information Control System Transaction Server for z/OS.
C++ object
v A region of storage. An object is created when a variable is defined or a
new function is invoked.
v An instance of a class.
coded character set
A set of unambiguous rules that establish a character set and the
one-to-one relationships between the characters of the set and their coded
representations.
coded character set identifier (CCSID)
A 16-bit number that uniquely identifies a coded representation of graphic
characters. It designates an encoding scheme identifier and one or more
pairs that consist of a character set identifier and an associated code page
identifier.
code page
A set of assignments of characters to code points. Within a code page, each
code point has only one specific meaning. In EBCDIC, for example, the
character A is assigned code point X’C1’, and character B is assigned code
point X’C2’.
code point
In CDRA, a unique bit pattern that represents a character in a code page.
code unit
The fundamental binary width in a computer architecture that is used for
representing character data, such as 7 bits, 8 bits, 16 bits, or 32 bits.
Depending on the character encoding form that is used, each code point in
a coded character set can be represented by one or more code units.
coexistence
During migration, the period of time in which two releases exist in the
same data sharing group.
cold start
A process by which DB2 restarts without processing any log records.
Contrast with warm start.
collection
A group of packages that have the same qualifier.
column
The vertical component of a table. A column has a name and a particular
data type (for example, character, decimal, or integer).
column function
See aggregate function.
″come from″ checking
An LU 6.2 security option that defines a list of authorization IDs that are
allowed to connect to DB2 from a partner LU.
command
A DB2 operator command or a DSN subcommand. A command is distinct
from an SQL statement.
command prefix
A 1- to 8-character command identifier. The command prefix distinguishes
the command as belonging to an application or subsystem rather than to
z/OS.
connection
In SNA, the existence of a communication path between two partner LUs
that allows information to be exchanged (for example, two DB2 subsystems
that are connected and communicating by way of a conversation).
connection context
In SQLJ, a Java object that represents a connection to a data source.
connection declaration clause
In SQLJ, a statement that declares a connection to a data source.
connection handle
The data object containing information that is associated with a connection
that DB2 ODBC manages. This includes general status information,
transaction status, and diagnostic information.
connection ID
An identifier that is supplied by the attachment facility and that is
associated with a specific address space connection.
consistency token
A timestamp that is used to generate the version identifier for an
application. See also version.
constant
A language element that specifies an unchanging value. Constants are
classified as string constants or numeric constants. Contrast with variable.
constraint
A rule that limits the values that can be inserted, deleted, or updated in a
table. See referential constraint, check constraint, and unique constraint.
context
An application’s logical connection to the data source and associated DB2
ODBC connection information that allows the application to direct its
operations to a data source. A DB2 ODBC context represents a DB2 thread.
contracting conversion
A process that occurs when the length of a converted string is smaller than
that of the source string. For example, this process occurs when an
EBCDIC mixed-data string that contains DBCS characters is converted to
ASCII mixed data; the converted string is shorter because the shift codes
are removed.
control interval (CI)
v A unit of information that VSAM transfers between virtual and auxiliary
storage.
v In a key-sequenced data set or file, the set of records that an entry in the
sequence-set index record points to.
conversation
Communication, which is based on LU 6.2 or Advanced
Program-to-Program Communication (APPC), between an application and
a remote transaction program over an SNA logical unit-to-logical unit
(LU-LU) session that allows communication while processing a transaction.
coordinator
The system component that coordinates the commit or rollback of a unit of
work that includes work that is done on one or more other systems.
coprocessor
See SQL statement coprocessor.
created temporary tables is stored in the DB2 catalog and can be shared
across application processes. Contrast with declared temporary table. See
also temporary table.
cross-system coupling facility (XCF)
A component of z/OS that provides functions to support cooperation
between authorized programs that run within a Sysplex.
cross-system extended services (XES)
A set of z/OS services that allow multiple instances of an application or
subsystem, running on different systems in a Sysplex environment, to
implement high-performance, high-availability data sharing by using a
coupling facility.
CS See cursor stability.
CSA See common service area.
CT See cursor table.
current data
Data within a host structure that is current with (identical to) the data
within the base table.
current status rebuild
The second phase of restart processing during which the status of the
subsystem is reconstructed from information on the log.
cursor A control structure that an application program uses to point to a single
row or multiple rows within some ordered set of rows of a result table. A
cursor can be used to retrieve, update, or delete rows from a result table.
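As a sketch only (the EMP table, its columns, and the host variables are hypothetical), an application might declare and use a cursor as follows:
   EXEC SQL DECLARE C1 CURSOR FOR
     SELECT EMPNO, SALARY
       FROM EMP
       WHERE WORKDEPT = 'D11'
       FOR UPDATE OF SALARY;

   EXEC SQL OPEN C1;
   EXEC SQL FETCH C1 INTO :EMPNO, :SALARY;        -- position on one row
   EXEC SQL UPDATE EMP SET SALARY = SALARY * 1.05
     WHERE CURRENT OF C1;                         -- positioned update
   EXEC SQL CLOSE C1;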
cursor sensitivity
The degree to which database updates are visible to the subsequent
FETCH statements in a cursor.
cursor stability (CS)
The isolation level that provides maximum concurrency without the ability
to read uncommitted data. With cursor stability, a unit of work holds locks
only on its uncommitted changes and on the current row of each of its
cursors. See also read stability, repeatable read, and uncommitted read.
cursor table (CT)
| The internal representation of a cursor.
cycle A set of tables that can be ordered so that each table is a descendent of the
one before it, and the first table is a descendent of the last table. A
self-referencing table is a cycle with a single member. See also referential
cycle.
database
A collection of tables, or a collection of table spaces and index spaces.
database access thread (DBAT)
A thread that accesses data at the local subsystem on behalf of a remote
subsystem.
database administrator (DBA)
An individual who is responsible for designing, developing, operating,
safeguarding, maintaining, and using a database.
data source
A local or remote relational or non-relational data manager that is capable
of supporting data access via an ODBC driver that supports the ODBC
APIs. In the case of DB2 for z/OS, the data sources are always relational
database managers.
data type
An attribute of columns, constants, variables, parameters, special registers,
and the results of functions and expressions.
data warehouse
A system that provides critical business information to an organization.
The data warehouse system cleanses the data for accuracy and currency,
and then presents the data to decision makers so that they can interpret
and use it effectively and efficiently.
DBA See database administrator.
DBAT See database access thread.
DB2 catalog
A collection of tables that are maintained by DB2 and contain descriptions
of DB2 objects, such as tables, views, and indexes.
DBCLOB
See double-byte character large object.
DB2 command
An instruction to the DB2 subsystem that a user enters to start or stop
DB2, to display information on current users, to start or stop databases, to
display information on the status of databases, and so on.
DBCS See double-byte character set.
DBD See database descriptor.
DB2I See DB2 Interactive.
DBID See database identifier.
DB2 Interactive (DB2I)
An interactive service within DB2 that facilitates the execution of SQL
statements, DB2 (operator) commands, and programmer commands, and
the invocation of utilities.
DBMS
See database management system.
DBRM
See database request module.
DB2 thread
| The database manager structure that describes an application’s connection,
| traces its progress, processes resource functions, and delimits its
| accessibility to the database manager resources and services. Most DB2 for
| z/OS functions execute under a thread structure.
DCLGEN
See declarations generator.
DDF See distributed data facility.
deadlock
Unresolvable contention for the use of a resource, such as a table or an
index.
dependent row
A row that contains a foreign key that matches the value of a primary key
in the parent row.
dependent table
A table that is a dependent in at least one referential constraint.
descendent
An object that is a dependent of an object or is the dependent of a
descendent of an object.
descendent row
A row that is dependent on another row, or a row that is a descendent of a
dependent row.
descendent table
A table that is a dependent of another table, or a table that is a descendent
of a dependent table.
deterministic function
A user-defined function whose result is dependent on the values of the
input arguments. That is, successive invocations with the same input
values produce the same answer. Sometimes referred to as a not-variant
function. Contrast with nondeterministic function (sometimes called a
variant function).
dimension
A data category such as time, products, or markets. The elements of a
dimension are referred to as members. See also dimension table.
dimension table
The representation of a dimension in a star schema. Each row in a
dimension table represents all of the attributes for a particular member of
the dimension. See also dimension, star schema, and star join.
directory
The DB2 system database that contains internal objects such as database
descriptors and skeleton cursor tables.
disk A direct-access storage device that records data magnetically.
distinct type
A user-defined data type that is represented as an existing type (its source
type), but is considered to be a separate and incompatible type for
semantic purposes.
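A minimal sketch, with invented names, of how a distinct type is defined and then used as a column data type:
   CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2);

   CREATE TABLE INVOICE
     (INVNO  INTEGER NOT NULL,
      AMOUNT US_DOLLAR);
Because US_DOLLAR is treated as a type that is separate from DECIMAL, comparing AMOUNT with a plain decimal value generally requires a cast, for example by using the generated cast function US_DOLLAR(100.00).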
distributed data
Data that resides on a DBMS other than the local system.
distributed data facility (DDF)
A set of DB2 components through which DB2 communicates with another
relational database management system.
Distributed Relational Database Architecture (DRDA)
A connection protocol for distributed relational database processing that is
used by IBM relational database products. DRDA includes protocols for
communication between an application and a remote relational database
management system, and for communication between relational database
management systems. See also DRDA access.
DNS See domain name server.
dynamic SQL
| SQL statements that are prepared and executed at run time. In dynamic
| SQL, the SQL statement is contained as a character string in a host variable
| or as a constant, and it is not precompiled.
EA-enabled table space
A table space or index space that is enabled for extended addressability
and that contains individual partitions (or pieces, for LOB table spaces)
that are greater than 4 GB.
EB See exabyte.
EBCDIC
Extended binary coded decimal interchange code. An encoding scheme
that is used to represent character data in the z/OS, VM, VSE, and iSeries
environments. Contrast with ASCII and Unicode.
embedded SQL
SQL statements that are coded within an application program. See static
SQL.
| enabling-new-function mode (ENFM)
| A transitional stage of the version-to-version migration process during
| which the DB2 subsystem or data sharing group is preparing to use the
| new functions of the new version. When in enabling-new-function mode, a
| DB2 data sharing group cannot coexist with members that are still at the
| prior version level. Fallback to a prior version is not supported, and new
| functions of the new version are not available for use in
| enabling-new-function mode. Contrast with compatibility mode,
| compatibility mode*, enabling-new-function mode*, and new-function
| mode.
| enabling-new-function mode* (ENFM*)
| A transitional stage of the version-to-version migration process that applies
| to a DB2 subsystem or data sharing group that was in new-function mode
| (NFM) at one time. When in enabling-new-function mode*, a DB2
| subsystem or data sharing group is preparing to use the new functions of
| the new version but cannot yet use them. A data sharing group that is in
| enabling-new-function mode* cannot coexist with members that are still at
| the prior version level. Fallback to a prior version is not supported.
| Contrast with compatibility mode, compatibility mode*,
| enabling-new-function mode, and new-function mode.
enclave
In Language Environment, an independent collection of routines, one of
which is designated as the main routine. An enclave is similar to a
program or run unit. See also WLM enclave.
encoding scheme
A set of rules to represent character data (ASCII, EBCDIC, or Unicode).
| ENFM See enabling-new-function mode.
ENFM*
| See enabling-new-function mode*.
| entity A person, object, or concept about which information is stored. In a
| relational database, entities are represented as tables. A database includes
| information about the entities in an organization or business, and their
| relationships to each other.
explicit hierarchical locking
Locking that is used to make the parent-child relationship between
resources known to IRLM. This kind of locking avoids global locking
overhead when no inter-DB2 interest exists on a resource.
explicit privilege
| A privilege that has a name and is held as the result of an SQL GRANT
| statement and revoked as the result of an SQL REVOKE statement. For
| example, the SELECT privilege.
exposed name
A correlation name or a table or view name for which a correlation name is
not specified.
expression
An operand or a collection of operators and operands that yields a single
value.
Extended Recovery Facility (XRF)
A facility that minimizes the effect of failures in z/OS, VTAM, the host
processor, or high-availability applications during sessions between
high-availability applications and designated terminals. This facility
provides an alternative subsystem to take over sessions from the failing
subsystem.
Extensible Markup Language (XML)
A standard metalanguage for defining markup languages that is a subset
of Standardized General Markup Language (SGML).
external function
| A function that has its functional logic implemented in a programming
| language application that resides outside the database, in the file system of
| the database server. The association of the function with the external code
| application is specified by the EXTERNAL clause in the CREATE
| FUNCTION statement. External functions can be classified as external
| scalar functions and external table functions. Contrast with sourced
| function, built-in function, and SQL function.
external procedure
| A procedure that has its procedural logic implemented in an external
| programming language application. The association of the procedure with
| the external application is specified by a CREATE PROCEDURE statement
| with a LANGUAGE clause that has a value other than SQL and an
| EXTERNAL clause that implicitly or explicitly specifies the name of the
| external application. Contrast with external SQL procedure and native SQL
| procedure.
external routine
A user-defined function or stored procedure that is based on code that is
written in an external programming language.
| external SQL procedure
| An SQL procedure that is processed using a generated C program that is a
| representation of the procedure. When an external SQL procedure is called,
| the C program representation of the procedure is executed in a stored
| procedures address space. Contrast with external procedure and native
| SQL procedure.
same descriptions, as the primary key of the parent table. Each foreign key
value must either match a parent key value in the related parent table or
be null.
forest An ordered set of subtrees of XML nodes.
forward log recovery
The third phase of restart processing during which DB2 processes the log
in a forward direction to apply all REDO log records.
free space
The total amount of unused space in a page; that is, the space that is not
used to store records or control information is free space.
full outer join
The result of a join operation that includes the matched rows of both tables
that are being joined and preserves the unmatched rows of both tables. See
also join, equijoin, inner join, left outer join, outer join, and right outer join.
| fullselect
| A subselect, a fullselect in parentheses, or a number of both that are
| combined by set operators. Fullselect specifies a result table. If a set
| operator is not used, the result of the fullselect is the result of the specified
| subselect or fullselect.
fully escaped mapping
A mapping from an SQL identifier to an XML name when the SQL
identifier is a column name.
function
A mapping, which is embodied as a program (the function body) that is
invocable by means of zero or more input values (arguments) to a single
value (the result). See also aggregate function and scalar function.
Functions can be user-defined, built-in, or generated by DB2. (See also
built-in function, cast function, external function, sourced function, SQL
function, and user-defined function.)
function definer
The authorization ID of the owner of the schema of the function that is
specified in the CREATE FUNCTION statement.
function package
A package that results from binding the DBRM for a function program.
function package owner
The authorization ID of the user who binds the function program’s DBRM
into a function package.
function signature
The logical concatenation of a fully qualified function name with the data
types of all of its parameters.
GB Gigabyte. A value of 1 073 741 824 bytes.
GBP See group buffer pool.
GBP-dependent
The status of a page set or page set partition that is dependent on the
group buffer pool. Either read/write interest is active among DB2
subsystems for this page set, or the page set has changed pages in the
group buffer pool that have not yet been cast out to disk.
handle
In DB2 ODBC, a variable that refers to a data structure and associated
resources. See also statement handle, connection handle, and environment
handle.
help panel
A screen of information that presents tutorial text to assist a user at the
workstation or terminal.
heuristic damage
The inconsistency in data between one or more participants that results
when a heuristic decision to resolve an indoubt LUW at one or more
participants differs from the decision that is recorded at the coordinator.
heuristic decision
A decision that forces indoubt resolution at a participant by means other
than automatic resynchronization between coordinator and participant.
| histogram statistics
| A way of summarizing data distribution. This technique divides up the
| range of possible values in a data set into intervals, such that each interval
| contains approximately the same percentage of the values. A set of
| statistics is collected for each interval.
hole A row of the result table that cannot be accessed because of a delete or an
update that has been performed on the row. See also delete hole and
update hole.
home address space
The area of storage that z/OS currently recognizes as dispatched.
host The set of programs and resources that are available on a given TCP/IP
instance.
host expression
A Java variable or expression that is referenced by SQL clauses in an SQLJ
application program.
host identifier
A name that is declared in the host program.
host language
A programming language in which you can embed SQL statements.
host program
An application program that is written in a host language and that
contains embedded SQL statements.
host structure
In an application program, a structure that is referenced by embedded SQL
statements.
host variable
In an application program written in a host language, an application
variable that is referenced by embedded SQL statements.
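A brief sketch (the table, columns, and host variable names are invented) of host variables in an embedded SELECT statement; the names prefixed with a colon are declared in the host language:
   EXEC SQL
     SELECT LASTNAME, SALARY
       INTO :HVLNAME, :HVSALARY      -- host variables receive the values
       FROM EMP
       WHERE EMPNO = :HVEMPNO;       -- host variable supplies the key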
host variable array
An array of elements, each of which corresponds to a value for a column.
The dimension of the array determines the maximum number of rows for
which the array can be used.
IBM System z9 Integrated Processor (zIIP)
| A specialized processor that can be used for some DB2 functions.
index partition
A VSAM data set that is contained within a partitioning index space.
index space
A page set that is used to store the entries of one index.
indicator column
A 4-byte value that is stored in a base table in place of a LOB column.
indicator variable
A variable that is used to represent the null value in an application
program. If the value for the selected column is null, a negative value is
placed in the indicator variable.
indoubt
A status of a unit of recovery. If DB2 fails after it has finished its phase 1
commit processing and before it has started phase 2, only the commit
coordinator knows if an individual unit of recovery is to be committed or
rolled back. At restart, if DB2 lacks the information it needs to make this
decision, the status of the unit of recovery is indoubt until DB2 obtains this
information from the coordinator. More than one unit of recovery can be
indoubt at restart.
indoubt resolution
The process of resolving the status of an indoubt logical unit of work to
either the committed or the rollback state.
inflight
A status of a unit of recovery. If DB2 fails before its unit of recovery
completes phase 1 of the commit process, it merely backs out the updates
of its unit of recovery at restart. These units of recovery are termed inflight.
inheritance
The passing downstream of class resources or attributes from a parent class
in the class hierarchy to a child class.
initialization file
For DB2 ODBC applications, a file containing values that can be set to
adjust the performance of the database manager.
inline copy
A copy that is produced by the LOAD or REORG utility. The data set that
the inline copy produces is logically equivalent to a full image copy that is
produced by running the COPY utility with read-only access (SHRLEVEL
REFERENCE).
inner join
The result of a join operation that includes only the matched rows of both
tables that are being joined. See also join, equijoin, full outer join, left outer
join, outer join, and right outer join.
inoperative package
A package that cannot be used because one or more user-defined functions
or procedures that the package depends on were dropped. Such a package
must be explicitly rebound. Contrast with invalid package.
insensitive cursor
A cursor that is not sensitive to inserts, updates, or deletes that are made
to the underlying rows of a result table after the result table has been
materialized.
operations of other units of work. See also cursor stability, read stability,
repeatable read, and uncommitted read.
ISPF See Interactive System Productivity Facility.
iterator
In SQLJ, an object that contains the result set of a query. An iterator is
equivalent to a cursor in other host languages.
iterator declaration clause
In SQLJ, a statement that generates an iterator declaration class. An iterator
is an object of an iterator declaration class.
JAR See Java Archive.
Java Archive (JAR)
A file format that is used for aggregating many files into a single file.
JDBC A Sun Microsystems database application programming interface (API) for
Java that allows programs to access database management systems by
using callable SQL.
join A relational operation that allows retrieval of data from two or more tables
based on matching column values. See also equijoin, full outer join, inner
join, left outer join, outer join, and right outer join.
KB Kilobyte. A value of 1024 bytes.
Kerberos
A network authentication protocol that is designed to provide strong
authentication for client/server applications by using secret-key
cryptography.
Kerberos ticket
A transparent application mechanism that transmits the identity of an
initiating principal to its target. A simple ticket contains the principal’s
identity, a session key, a timestamp, and other information, which is sealed
using the target’s secret key.
| key A column, an ordered collection of columns, or an expression that is
| identified in the description of a table, index, or referential constraint. The
| same column or expression can be part of more than one key.
key-sequenced data set (KSDS)
A VSAM file or data set whose records are loaded in key sequence and
controlled by an index.
KSDS See key-sequenced data set.
large object (LOB)
A sequence of bytes representing bit data, single-byte characters,
double-byte characters, or a mixture of single- and double-byte characters.
A LOB can be up to 2 GB minus 1 byte in length. See also binary large
object, character large object, and double-byte character large object.
last agent optimization
An optimized commit flow for either presumed-nothing or presumed-abort
protocols in which the last agent, or final participant, becomes the commit
coordinator. This flow saves at least one message.
latch A DB2 mechanism for controlling concurrent events or the use of system
resources.
LCID See log control interval definition.
local lock
A lock that provides intra-DB2 concurrency control, but not inter-DB2
concurrency control; that is, its scope is a single DB2.
local subsystem
The unique relational DBMS to which the user or application program is
directly connected (in the case of DB2, by one of the DB2 attachment
facilities).
location
The unique name of a database server. An application uses the location
name to access a DB2 database server. A database alias can be used to
override the location name when accessing a remote server.
location alias
Another name by which a database server identifies itself in the network.
Applications can use this name to access a DB2 database server.
lock A means of controlling concurrent events or access to data. DB2 locking is
performed by the IRLM.
lock duration
The interval over which a DB2 lock is held.
lock escalation
The promotion of a lock from a row, page, or LOB lock to a table space
lock because the number of page locks that are concurrently held on a
given resource exceeds a preset limit.
locking
The process by which the integrity of data is ensured. Locking prevents
concurrent users from accessing inconsistent data. See also claim, drain,
and latch.
lock mode
A representation for the type of access that concurrently running programs
can have to a resource that a DB2 lock is holding.
lock object
The resource that is controlled by a DB2 lock.
lock promotion
The process of changing the size or mode of a DB2 lock to a higher, more
restrictive level.
lock size
The amount of data that is controlled by a DB2 lock on table data; the
value can be a row, a page, a LOB, a partition, a table, or a table space.
lock structure
A coupling facility data structure that is composed of a series of lock
entries to support shared and exclusive locking for logical resources.
log A collection of records that describe the events that occur during DB2
execution and that indicate their sequence. The information thus recorded
is used for recovery in the event of a failure during DB2 execution.
log control interval definition
A suffix of the physical log record that tells how record segments are
placed in the physical control interval.
logical claim
A claim on a logical partition of a nonpartitioning index.
LU name
Logical unit name, which is the name by which VTAM refers to a node in
a network.
LUW See logical unit of work.
LUWID
See logical unit of work identifier.
mapping table
A table that the REORG utility uses to map the associations of the RIDs of
data records in the original copy and in the shadow copy. This table is
created by the user.
mass delete
The deletion of all rows of a table.
materialize
v The process of putting rows from a view or nested table expression into
a work file for additional processing by a query.
v The placement of a LOB value into contiguous storage. Because LOB
values can be very large, DB2 avoids materializing LOB data until doing
so becomes absolutely necessary.
materialized query table
A table that is used to contain information that is derived and can be
summarized from one or more source tables. Contrast with base table.
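A hedged sketch, with invented table names, of a system-maintained materialized query table that summarizes a source table; the clauses shown here are abbreviated and the exact options can vary:
   CREATE TABLE SALES_SUMMARY (REGION, TOTAL_AMOUNT) AS
     (SELECT REGION, SUM(AMOUNT)
        FROM SALES
        GROUP BY REGION)
     DATA INITIALLY DEFERRED
     REFRESH DEFERRED
     MAINTAINED BY SYSTEM;

   REFRESH TABLE SALES_SUMMARY;   -- populate or refresh the summary data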
MB Megabyte (1 048 576 bytes).
MBCS See multibyte character set.
member name
The z/OS XCF identifier for a particular DB2 subsystem in a data sharing
group.
menu A displayed list of available functions for selection by the operator. A
menu is sometimes called a menu panel.
metalanguage
A language that is used to create other specialized languages.
migration
The process of converting a subsystem with a previous release of DB2 to
an updated or current release. In this process, you can acquire the
functions of the updated or current release without losing the data that
you created on the previous release.
mixed data string
A character string that can contain both single-byte and double-byte
characters.
mode name
A VTAM name for the collection of physical and logical characteristics and
attributes of a session.
modify locks
An L-lock or P-lock with a MODIFY attribute. A list of these active locks is
kept at all times in the coupling facility lock structure. If the requesting
DB2 subsystem fails, that DB2 subsystem’s modify locks are converted to
retained locks.
nonleaf page
A page that contains keys and page numbers of other pages in the index
(either leaf or nonleaf pages). Nonleaf pages never point to actual data.
Contrast with leaf page.
nonpartitioned index
An index that is not physically partitioned. Both partitioning indexes and
secondary indexes can be nonpartitioned.
nonpartitioned secondary index (NPSI)
An index on a partitioned table space that is not the partitioning index and
is not partitioned. Contrast with data-partitioned secondary index.
nonpartitioning index
See secondary index.
nonscrollable cursor
A cursor that can be moved only in a forward direction. Nonscrollable
cursors are sometimes called forward-only cursors or serial cursors.
normalization
A key step in the task of building a logical relational database design.
Normalization helps you avoid redundancies and inconsistencies in your
data. An entity is normalized if it meets a set of constraints for a particular
normal form (first normal form, second normal form, and so on). Contrast
with denormalization.
not-variant function
See deterministic function.
NPSI See nonpartitioned secondary index.
NUL The null character (’\0’), which is represented by the value X’00’. In C, this
character denotes the end of a string.
null A special value that indicates the absence of information.
null terminator
| In C, the value that indicates the end of a string. For EBCDIC, ASCII, and
| Unicode UTF-8 strings, the null terminator is a single-byte value (X’00’).
| For Unicode UTF-16 or UCS-2 (wide) strings, the null terminator is a
| double-byte value (X’0000’).
ODBC
See Open Database Connectivity.
ODBC driver
A dynamically-linked library (DLL) that implements ODBC function calls
and interacts with a data source.
| OLAP See online analytical processing.
| online analytical processing (OLAP)
| The process of collecting data from one or many sources; transforming and
| analyzing the consolidated data quickly and interactively; and examining
| the results across different dimensions of the data by looking for patterns,
| trends, and exceptions within complex relationships of that data.
Open Database Connectivity (ODBC)
A Microsoft database application programming interface (API) for C that
allows access to database management systems by using callable SQL.
ODBC does not require the use of an SQL preprocessor. In addition, ODBC
provides an architecture that lets users add modules called database drivers,
parallel complex
A cluster of machines that work together to handle multiple transactions
and applications.
parallel group
A set of consecutive operations that execute in parallel and that have the
same number of parallel tasks.
parallel I/O processing
A form of I/O processing in which DB2 initiates multiple concurrent
requests for a single user query and performs I/O processing concurrently
(in parallel) on multiple data partitions.
parallelism assistant
In Sysplex query parallelism, a DB2 subsystem that helps to process parts
of a parallel query that originates on another DB2 subsystem in the data
sharing group.
parallelism coordinator
In Sysplex query parallelism, the DB2 subsystem from which the parallel
query originates.
Parallel Sysplex
A set of z/OS systems that communicate and cooperate with each other
through certain multisystem hardware components and software services
to process customer workloads.
parallel task
The execution unit that is dynamically created to process a query in
parallel. A parallel task is implemented by a z/OS service request block.
parameter marker
| A question mark (?) that appears in a statement string of a dynamic SQL
| statement. The question mark can appear where a variable could appear if
| the statement string were a static SQL statement.
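For illustration (the statement text, table, and host variable names are hypothetical), a dynamic SQL statement string with two parameter markers might be prepared and executed like this:
   -- statement string assembled at run time:
   --   UPDATE EMP SET SALARY = ? WHERE EMPNO = ?
   EXEC SQL PREPARE STMT1 FROM :STMTSTR;
   EXEC SQL EXECUTE STMT1 USING :NEWSAL, :EMPNO;   -- values replace the markers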
parameter-name
| An SQL identifier that designates a parameter in a routine that is written
| by a user. Parameter names are required for SQL procedures and SQL
| functions, and they are used in the body of the routine to refer to the
| values of the parameters. Parameter names are optional for external
| routines.
parent key
A primary key or unique key in the parent table of a referential constraint.
The values of a parent key determine the valid values of the foreign key in
the referential constraint.
parent lock
For explicit hierarchical locking, a lock that is held on a resource that
might have child locks that are lower in the hierarchy. A parent lock is
usually the table space lock or the partition intent lock. See also child lock.
parent row
A row whose primary key value is the foreign key value of a dependent
row.
parent table
A table whose primary key is referenced by the foreign key of a dependent
table.
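As a sketch only (all names invented; a DEPT parent table with primary key DEPTNO is assumed), a dependent table with a foreign key might be defined as follows:
   CREATE TABLE PROJECT
     (PROJNO CHAR(6) NOT NULL,
      DEPTNO CHAR(3) NOT NULL,
      PRIMARY KEY (PROJNO),
      FOREIGN KEY (DEPTNO) REFERENCES DEPT (DEPTNO)   -- referential constraint
        ON DELETE RESTRICT);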
piece A data set of a nonpartitioned page set.
plan See application plan.
plan allocation
The process of allocating DB2 resources to a plan in preparation for
execution.
plan member
The bound copy of a DBRM that is identified in the member clause.
plan name
The name of an application plan.
P-lock See physical lock.
point of consistency
A time when all recoverable data that an application accesses is consistent
with other data. The term point of consistency is synonymous with sync
point or commit point.
policy See CFRM policy.
postponed abort UR
A unit of recovery that was inflight or in-abort, was interrupted by system
failure or cancellation, and did not complete backout during restart.
precision
In SQL, the total number of digits in a decimal number (called the size in
the C language). In the C language, the number of digits to the right of the
decimal point (called the scale in SQL). The DB2 information uses the SQL
terms.
precompilation
A processing of application programs containing SQL statements that takes
place before compilation. SQL statements are replaced with statements that
are recognized by the host language compiler. Output from this
precompilation includes source code that can be submitted to the compiler
and the database request module (DBRM) that is input to the bind process.
predicate
An element of a search condition that expresses or implies a comparison
operation.
prefix A code at the beginning of a message or record.
preformat
| The process of preparing a VSAM linear data set for DB2 use, by writing
| specific data patterns.
prepare
The first phase of a two-phase commit process in which all participants are
requested to prepare for commit.
prepared SQL statement
A named object that is the executable form of an SQL statement that has
been processed by the PREPARE statement.
primary authorization ID
The authorization ID that is used to identify the application process to
DB2.
primary group buffer pool
For a duplexed group buffer pool, the structure that is used to maintain
query block
The part of a query that is represented by one of the FROM clauses. Each
FROM clause can have multiple query blocks, depending on DB2
processing of the query.
query CP parallelism
Parallel execution of a single query, which is accomplished by using
multiple tasks. See also Sysplex query parallelism.
query I/O parallelism
Parallel access of data, which is accomplished by triggering multiple I/O
requests within a single query.
queued sequential access method (QSAM)
An extended version of the basic sequential access method (BSAM). When
this method is used, a queue of data blocks is formed. Input data blocks
await processing, and output data blocks await transfer to auxiliary storage
or to an output device.
quiesce point
A point at which data is consistent as a result of running the DB2
QUIESCE utility.
RACF Resource Access Control Facility. A component of the z/OS Security Server.
| range-partitioned table space
| A type of universal table space that is based on partitioning ranges and
| that contains a single table. Contrast with partition-by-growth table space.
| See also universal table space.
RBA See relative byte address.
RCT See resource control table.
| RDO See resource definition online.
read stability (RS)
An isolation level that is similar to repeatable read but does not completely
isolate an application process from all other concurrently executing
application processes. See also cursor stability, repeatable read, and
uncommitted read.
rebind
The creation of a new application plan for an application program that has
been bound previously. If, for example, you have added an index for a
table that your application accesses, you must rebind the application in
order to take advantage of that index.
rebuild
The process of reallocating a coupling facility structure. For the shared
communications area (SCA) and lock structure, the structure is
repopulated; for the group buffer pool, changed pages are usually cast out
to disk, and the new structure is populated only with changed pages that
were not successfully cast out.
record The storage representation of a row or other data.
record identifier (RID)
A unique identifier that DB2 uses to identify a row of data in a table.
Compare with row identifier.
| descendent of itself. The tables that are involved in a referential cycle are
| ordered so that each table is a descendent of the one before it, and the first
| table is a descendent of the last table.
referential integrity
The state of a database in which all values of all foreign keys are valid.
Maintaining referential integrity requires the enforcement of referential
constraints on all operations that change the data in a table on which the
referential constraints are defined.
referential structure
A set of tables and relationships that includes at least one table and, for
every table in the set, all the relationships in which that table participates
and all the tables to which it is related.
refresh age
The time duration between the current time and the time during which a
materialized query table was last refreshed.
registry
See registry database.
registry database
A database of security information about principals, groups, organizations,
accounts, and security policies.
relational database
A database that can be perceived as a set of tables and manipulated in
accordance with the relational model of data.
relational database management system (RDBMS)
A collection of hardware and software that organizes and provides access
to a relational database.
| relational schema
| See SQL schema.
relationship
A defined connection between the rows of a table or the rows of two
tables. A relationship is the internal representation of a referential
constraint.
relative byte address (RBA)
The offset of a data record or control interval from the beginning of the
storage space that is allocated to the data set or file to which it belongs.
remigration
The process of returning to a current release of DB2 following a fallback to
a previous release. This procedure constitutes another migration process.
remote
Any object that is maintained by a remote DB2 subsystem (that is, by a
DB2 subsystem other than the local one). A remote view, for example, is a
view that is maintained by a remote DB2 subsystem. Contrast with local.
remote subsystem
Any relational DBMS, except the local subsystem, with which the user or
application can communicate. The subsystem need not be remote in any
physical sense, and might even operate on the same processor under the
same z/OS system.
reoptimization
The DB2 process of reconsidering the access path of an SQL statement at
resource definition online (RDO)
| The recommended method of defining resources to CICS by creating
| resource definitions interactively, or by using a utility, and then storing
| them in the CICS definition data set. In earlier releases of CICS, resources
| were defined by using the resource control table (RCT), which is no longer
| supported.
resource limit facility (RLF)
A portion of DB2 code that prevents dynamic manipulative SQL statements
from exceeding specified time limits. The resource limit facility is
sometimes called the governor.
resource limit specification table (RLST)
A site-defined table that specifies the limits to be enforced by the resource
limit facility.
resource manager
v A function that is responsible for managing a particular resource and
that guarantees the consistency of all updates made to recoverable
resources within a logical unit of work. The resource that is being
managed can be physical (for example, disk or main storage) or logical
(for example, a particular type of system service).
v A participant, in the execution of a two-phase commit, that has
recoverable resources that could have been modified. The resource
manager has access to a recovery log so that it can commit or roll back
the effects of the logical unit of work to the recoverable resources.
restart pending (RESTP)
A restrictive state of a page set or partition that indicates that restart
(backout) work needs to be performed on the object.
RESTP
See restart pending.
result set
The set of rows that a stored procedure returns to a client application.
result set locator
A 4-byte value that DB2 uses to uniquely identify a query result set that a
stored procedure returns.
result table
The set of rows that are specified by a SELECT statement.
retained lock
A MODIFY lock that a DB2 subsystem was holding at the time of a
subsystem failure. The lock is retained in the coupling facility lock
structure across a DB2 for z/OS failure.
RID See record identifier.
RID pool
See record identifier pool.
right outer join
The result of a join operation that includes the matched rows of both tables
that are being joined and preserves the unmatched rows of the second join
operand. See also join, equijoin, full outer join, inner join, left outer join,
and outer join.
RLF See resource limit facility.
scalar function
An SQL operation that produces a single value from another value and is
expressed as a function name, followed by a list of arguments that are
enclosed in parentheses.
scale In SQL, the number of digits to the right of the decimal point (called the
precision in the C language). The DB2 information uses the SQL definition.
schema
The organization or structure of a database.
A collection of, and a way of qualifying, database objects such as tables,
views, routines, indexes or triggers that define a database. A database
schema provides a logical classification of database objects.
scrollability
The ability to use a cursor to fetch in either a forward or backward
direction. The FETCH statement supports multiple fetch orientations to
indicate the new position of the cursor. See also fetch orientation.
scrollable cursor
A cursor that can be moved in both a forward and a backward direction.
search condition
A criterion for selecting rows from a table. A search condition consists of
one or more predicates.
secondary authorization ID
An authorization ID that has been associated with a primary authorization
ID by an authorization exit routine.
secondary group buffer pool
For a duplexed group buffer pool, the structure that is used to back up
changed pages that are written to the primary group buffer pool. No page
registration or cross-invalidation occurs using the secondary group buffer
pool. The z/OS equivalent is new structure.
secondary index
A nonpartitioning index that is useful for enforcing a uniqueness
constraint, for clustering data, or for providing access paths to data for
queries. A secondary index can be partitioned or nonpartitioned. See also
data-partitioned secondary index (DPSI) and nonpartitioned secondary
index (NPSI).
section
The segment of a plan or package that contains the executable structures
for a single SQL statement. For most SQL statements, one section in the
plan exists for each SQL statement in the source program. However, for
cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE
statements reference the same section because they each refer to the
SELECT statement that is named in the DECLARE CURSOR statement.
SQL statements such as COMMIT, ROLLBACK, and some SET statements
do not use a section.
| security label
| A classification of users’ access to objects or data rows in a multilevel
| security environment.
segment
A group of pages that holds rows of a single table. See also segmented
table space.
share lock
A lock that prevents concurrently executing application processes from
changing data, but not from reading data. Contrast with exclusive lock.
shift-in character
A special control character (X’0F’) that is used in EBCDIC systems to
denote that the subsequent bytes represent SBCS characters. See also
shift-out character.
shift-out character
A special control character (X’0E’) that is used in EBCDIC systems to
denote that the subsequent bytes, up to the next shift-in control character,
represent DBCS characters. See also shift-in character.
sign-on
A request that is made on behalf of an individual CICS or IMS application
process by an attachment facility to enable DB2 to verify that it is
authorized to use DB2 resources.
simple page set
A nonpartitioned page set. A simple page set initially consists of a single
data set (page set piece). If and when that data set is extended to 2 GB,
another data set is created, and so on, up to a total of 32 data sets. DB2
considers the data sets to be a single contiguous linear address space
containing a maximum of 64 GB. Data is stored in the next available
location within this address space without regard to any partitioning
scheme.
simple table space
A table space that is neither partitioned nor segmented. Creation of simple
table spaces is not supported in DB2 Version 9.1 for z/OS. Contrast with
partitioned table space, segmented table space, and universal table space.
single-byte character set (SBCS)
A set of characters in which each character is represented by a single byte.
Contrast with double-byte character set or multibyte character set.
single-precision floating point number
A 32-bit approximate representation of a real number.
SMP/E
See System Modification Program/Extended.
SNA See Systems Network Architecture.
SNA network
The part of a network that conforms to the formats and protocols of
Systems Network Architecture (SNA).
socket A callable TCP/IP programming interface that TCP/IP network
applications use to communicate with remote TCP/IP partners.
sourced function
| A function that is implemented by another built-in or user-defined function
| that is already known to the database manager. This function can be a
| scalar function or an aggregate function; it returns a single value from a set
| of values (for example, MAX or AVG). Contrast with built-in function,
| external function, and SQL function.
source program
A set of host language statements and SQL statements that is processed by
an SQL precompiler.
SQL path
An ordered list of schema names that are used in the resolution of
unqualified references to user-defined functions, distinct types, and stored
procedures. In dynamic SQL, the SQL path is found in the CURRENT
PATH special register. In static SQL, it is defined in the PATH bind option.
SQL procedure
| A user-written program that can be invoked with the SQL CALL statement.
| An SQL procedure is written in the SQL procedural language. Two types of
| SQL procedures are supported: external SQL procedures and native SQL
| procedures. See also external procedure and native SQL procedure.
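A minimal sketch of an SQL procedure (all names are hypothetical; the clauses that are required depend on whether the procedure is external or native):
   CREATE PROCEDURE UPDATE_SALARY
     (IN P_EMPNO CHAR(6), IN P_PCT DECIMAL(5,2))
     LANGUAGE SQL
     BEGIN
       UPDATE EMP
         SET SALARY = SALARY * (1 + P_PCT / 100)
         WHERE EMPNO = P_EMPNO;
     END
The procedure would then be invoked with a statement such as CALL UPDATE_SALARY('000010', 5.00).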
SQL processing conversation
Any conversation that requires access of DB2 data, either through an
application or by dynamic query requests.
SQL Processor Using File Input (SPUFI)
A facility of the TSO attachment subcomponent that enables the DB2I user
to execute SQL statements without embedding them in an application
program.
SQL return code
Either SQLCODE or SQLSTATE.
SQL routine
A user-defined function or stored procedure that is based on code that is
written in SQL.
| SQL schema
| A collection of database objects such as tables, views, indexes, functions,
| distinct types, schemas, or triggers that defines a database. An SQL schema
| provides a logical classification of database objects.
SQL statement coprocessor
An alternative to the DB2 precompiler that lets the user process SQL
statements at compile time. The user invokes an SQL statement coprocessor
by specifying a compiler option.
SQL string delimiter
A symbol that is used to enclose an SQL string constant. The SQL string
delimiter is the apostrophe (’), except in COBOL applications, where the
user assigns the symbol, which is either an apostrophe or a double
quotation mark (").
SRB See service request block.
stand-alone
An attribute of a program that means that it is capable of executing
separately from DB2, without using DB2 services.
star join
A method of joining a dimension column of a fact table to the key column
of the corresponding dimension table. See also join, dimension, and star
schema.
star schema
The combination of a fact table (which contains most of the data) and a
number of dimension tables. See also star join, dimension, and dimension
table.
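For illustration (the fact and dimension table names are invented), a query against a star schema joins the fact table to its dimension tables:
   SELECT T.SALES_YEAR, P.PRODUCT_NAME, SUM(S.AMOUNT) AS TOTAL
     FROM SALES S, TIME_DIM T, PRODUCT_DIM P    -- SALES is the fact table
     WHERE S.TIME_ID    = T.TIME_ID             -- join to dimension TIME_DIM
       AND S.PRODUCT_ID = P.PRODUCT_ID          -- join to dimension PRODUCT_DIM
     GROUP BY T.SALES_YEAR, P.PRODUCT_NAME;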
statement handle
In DB2 ODBC, the data object that contains information about an SQL
subcomponent
A group of closely related DB2 modules that work together to provide a
general function.
subject table
The table for which a trigger is created. When the defined triggering event
occurs on this table, the trigger is activated.
subquery
A SELECT statement within the WHERE or HAVING clause of another
SQL statement; a nested SQL statement.
subselect
| That form of a query that includes only a SELECT clause, FROM clause,
| and optionally a WHERE clause, GROUP BY clause, HAVING clause,
| ORDER BY clause, or FETCH FIRST clause.
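A brief, hypothetical example of a subquery (table and column names invented); the nested SELECT in the WHERE clause is evaluated for the outer statement:
   SELECT EMPNO, LASTNAME
     FROM EMP
     WHERE SALARY > (SELECT AVG(SALARY) FROM EMP);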
substitution character
A unique character that is substituted during character conversion for any
characters in the source program that do not have a match in the target
coding representation.
subsystem
A distinct instance of a relational database management system (RDBMS).
surrogate pair
A coded representation for a single character that consists of a sequence of
two 16-bit code units, in which the first value of the pair is a
high-surrogate code unit in the range U+D800 through U+DBFF, and the
second value is a low-surrogate code unit in the range U+DC00 through
U+DFFF. Surrogate pairs provide an extension mechanism for encoding
917 476 characters without requiring the use of 32-bit characters.
SVC dump
A dump that is issued when a z/OS or a DB2 functional recovery routine
detects an error.
sync point
See commit point.
syncpoint tree
The tree of recovery managers and resource managers that are involved in
a logical unit of work, starting with the recovery manager, that make the
final commit decision.
synonym
| In SQL, an alternative name for a table or view. Synonyms can be used to
| refer only to objects at the subsystem in which the synonym is defined. A
| synonym cannot be qualified and can therefore not be used by other users.
| Contrast with alias.
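For example (names invented), a synonym gives its definer an unqualified alternative name for a table at the local subsystem:
   CREATE SYNONYM EMP FOR PAYROLL.EMP;
   SELECT * FROM EMP;    -- resolves to PAYROLL.EMP for the synonym's owner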
Sysplex
See Parallel Sysplex.
Sysplex query parallelism
Parallel execution of a single query that is accomplished by using multiple
tasks on more than one DB2 subsystem. See also query CP parallelism.
system administrator
The person at a computer installation who designs, controls, and manages
the use of the computer system.
TCP/IP
A network communication protocol that computer systems use to exchange
information across telecommunication links.
TCP/IP port
A 2-byte value that identifies an end user or a TCP/IP network application
within a TCP/IP host.
template
A DB2 utilities output data set descriptor that is used for dynamic
allocation. A template is defined by the TEMPLATE utility control
statement.
temporary table
A table that holds temporary data. Temporary tables are useful for holding
or sorting intermediate results from queries that contain a large number of
rows. The two types of temporary table, which are created by different
SQL statements, are the created temporary table and the declared
temporary table. Contrast with result table. See also created temporary
table and declared temporary table.
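A hedged sketch (names invented) that contrasts the two kinds of temporary table:
   -- created temporary table: the definition is stored in the DB2 catalog
   CREATE GLOBAL TEMPORARY TABLE WORKTAB1
     (EMPNO CHAR(6), SALARY DECIMAL(9,2));

   -- declared temporary table: defined by the application at run time
   DECLARE GLOBAL TEMPORARY TABLE SESSION.WORKTAB2
     (EMPNO CHAR(6), SALARY DECIMAL(9,2))
     ON COMMIT PRESERVE ROWS;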
thread See DB2 thread.
threadsafe
A characteristic of code that allows multithreading both by providing
private storage areas for each thread, and by properly serializing shared
(global) storage areas.
three-part name
The full name of a table, view, or alias. It consists of a location name, a
schema name, and an object name, separated by a period.
time A three-part value that designates a time of day in hours, minutes, and
seconds.
timeout
Abnormal termination of either the DB2 subsystem or of an application
because of the unavailability of resources. Installation specifications are set
to determine both the amount of time DB2 is to wait for IRLM services
after starting, and the amount of time IRLM is to wait if a resource that an
application requests is unavailable. If either of these time specifications is
exceeded, a timeout is declared.
Time-Sharing Option (TSO)
An option in z/OS that provides interactive time sharing from remote
terminals.
timestamp
A seven-part value that consists of a date and time. The timestamp is
expressed in years, months, days, hours, minutes, seconds, and
microseconds.
trace A DB2 facility that provides the ability to monitor and collect DB2
monitoring, auditing, performance, accounting, statistics, and serviceability
(global) data.
| transaction
| An atomic series of SQL statements that make up a logical unit of work.
| All of the data modifications made during a transaction are either
| committed together as a unit or rolled back as a unit.
triggered SQL statements
The set of SQL statements that is executed when a trigger is activated and
its triggered action condition evaluates to true. Triggered SQL statements
are also called the trigger body.
trigger granularity
In SQL, a characteristic of a trigger, which determines whether the trigger
is activated:
v Only once for the triggering SQL statement
v Once for each row that the SQL statement modifies
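For illustration (all names invented), a trigger that is activated once for each modified row; FOR EACH STATEMENT would instead activate the trigger once for the triggering SQL statement:
   CREATE TRIGGER BIGSAL
     AFTER UPDATE OF SALARY ON EMP
     REFERENCING NEW AS N
     FOR EACH ROW MODE DB2SQL
     WHEN (N.SALARY > 100000)
       INSERT INTO SALARY_AUDIT
         VALUES (N.EMPNO, N.SALARY, CURRENT TIMESTAMP);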
triggering event
| The specified operation in a trigger definition that causes the activation of
| that trigger. The triggering event consists of a triggering operation
| (insert, update, or delete) and a subject table or view on which the
| operation is performed.
triggering SQL operation
| The SQL operation that causes a trigger to be activated when performed on
| the subject table or view.
trigger package
A package that is created when a CREATE TRIGGER statement is
executed. The package is executed when the trigger is activated.
| trust attribute
| An attribute on which to establish trust. A trusted relationship is
| established based on one or more trust attributes.
| trusted connection
| A database connection whose attributes match the attributes of a unique
| trusted context defined at the DB2 database server.
| trusted connection reuse
| The ability to switch the current user ID on a trusted connection to a
| different user ID.
| trusted context
| A database security object that enables the establishment of a trusted
| relationship between a DB2 database management system and an external
| entity.
| trusted context default role
| A role associated with a trusted context. The privileges granted to the
| trusted context default role can be acquired only when a trusted
| connection based on the trusted context is established or reused.
| trusted context user
| A user ID to which switching the current user ID on a trusted connection
| is permitted.
| trusted context user-specific role
| A role that is associated with a specific trusted context user. It overrides
| the trusted context default role if the current user ID on the trusted
| connection matches the ID of the specific trusted context user.
| trusted relationship
| A privileged relationship between two entities such as a middleware server
| and a database server. This relationship allows for a unique set of
| interactions between the two entities that would be impossible otherwise.
TSO See Time-Sharing Option.
| partitioned table space, segmented table space, partition-by-growth table
| space, and range-partitioned table space.
unlock
The act of releasing an object or system resource that was previously
locked and returning it to general availability within DB2.
untyped parameter marker
A parameter marker that is specified without its target data type. It has the
form of a single question mark (?).
updatability
The ability of a cursor to perform positioned updates and deletes. The
updatability of a cursor can be influenced by the SELECT statement and
the cursor sensitivity option that is specified on the DECLARE CURSOR
statement.
update hole
The location on which a cursor is positioned when a row in a result table
is fetched again and the new values no longer satisfy the search condition.
See also delete hole.
update trigger
A trigger that is defined with the triggering SQL operation update.
UR See uncommitted read.
user-defined data type (UDT)
See distinct type.
user-defined function (UDF)
A function that is defined to DB2 by using the CREATE FUNCTION
statement and that can be referenced thereafter in SQL statements. A
user-defined function can be an external function, a sourced function, or an
SQL function. Contrast with built-in function.
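A minimal sketch of an SQL scalar user-defined function (the function, table, and column names are invented; the exact clauses that are required can vary):
   CREATE FUNCTION C_TO_F (TEMP_C DOUBLE)
     RETURNS DOUBLE
     LANGUAGE SQL
     RETURN TEMP_C * 9.0 / 5.0 + 32.0;

   SELECT CITY, C_TO_F(AVG_TEMP_C)   -- referenced like a built-in function
     FROM WEATHER;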
user view
In logical data modeling, a model or representation of critical information
that the business requires.
UTF-8 Unicode Transformation Format, 8-bit encoding form, which is designed
for ease of use with existing ASCII-based systems. The CCSID value for
data in UTF-8 format is 1208. DB2 for z/OS supports UTF-8 in mixed data
fields.
UTF-16
Unicode Transformation Format, 16-bit encoding form, which is designed
to provide code values for over a million characters and a superset of
UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 for z/OS
supports UTF-16 in graphic data fields.
value The smallest unit of data that is manipulated in SQL.
variable
A data element that specifies a value that can be changed. A COBOL
elementary data item is an example of a host variable. Contrast with
constant.
variant function
See nondeterministic function.
| blocks and tasks) in multiple address spaces, allowing them to be reported
| on and managed by WLM as part of a single work request.
write to operator (WTO)
An optional user-coded service that allows a message to be written to the
system console operator informing the operator of errors and unusual
system conditions that might need to be corrected (in z/OS).
WTO See write to operator.
WTOR
Write to operator (WTO) with reply.
XCF See cross-system coupling facility.
XES See cross-system extended services.
XML See Extensible Markup Language.
XML attribute
A name-value pair within a tagged XML element that modifies certain
features of the element.
XML column
A column of a table that stores XML values and is defined using the data
type XML. The XML values that are stored in XML columns are internal
representations of well-formed XML documents.
XML data type
A data type for XML values.
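For illustration only (the table and column names are hypothetical), the
following statement defines a table with one relational column and one XML
column:
   CREATE TABLE MYSCHEMA.PURCHASE_ORDERS
     (ORDER_ID   INTEGER NOT NULL,   -- relational column
      ORDER_DOC  XML);               -- XML column; stores well-formed XML documents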
XML element
A logical structure in an XML document that is delimited by a start and an
end tag. Anything between the start tag and the end tag is the content of
the element.
XML index
An index on an XML column that provides efficient access to nodes within
an XML document by providing index keys that are based on XML
patterns.
XML lock
A column-level lock for XML data. The operation of XML locks is similar
to the operation of LOB locks.
XML node
The smallest unit of valid, complete structure in a document. For example,
a node can represent an element, an attribute, or a text string.
XML node ID index
An implicitly created index on an XML table that provides efficient access
to XML documents and navigation among multiple XML data rows in the
same document.
XML pattern
A slash-separated list of element names, an optional attribute name (at the
end), or kind tests that describes a path within an XML document in an
XML column. The pattern is a restrictive form of path expression that
selects nodes that match the specifications. XML patterns are specified to
create indexes on XML columns in a database.
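As a minimal sketch (the index, table, and column names are hypothetical,
and MYSCHEMA.PURCHASE_ORDERS is assumed to have an XML column named
ORDER_DOC), the following statement creates an XML index whose keys are
generated from an XML pattern that ends in an attribute name:
   CREATE INDEX MYSCHEMA.IX_ORDER_ID
     ON MYSCHEMA.PURCHASE_ORDERS (ORDER_DOC)
     GENERATE KEY USING XMLPATTERN '/purchaseOrder/@orderID'   -- XML pattern
     AS SQL VARCHAR(20);                                       -- key data type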
XML publishing function
A function that returns an XML value from SQL values. An XML
publishing function is also known as an XML constructor.
Information resources for DB2 for z/OS and related products
Many information resources are available to help you use DB2 for z/OS and many
related products. A large amount of technical information about IBM products is
now available online in information centers or on library Web sites.
Disclaimer: Any Web addresses that are included here are accurate at the time this
information is being published. However, Web addresses sometimes
change. If you visit a Web address that is listed here but that is no
longer valid, you can try to find the current Web address for the
product information that you are looking for at either of the following
sites:
v http://www.ibm.com/support/publications/us/library/index.shtml, which
lists the IBM information centers that are available for various IBM
products
v http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi,
which is the IBM Publications Center, where you can download online PDF
books or order printed books for various IBM products
The primary place to find and use information about DB2 for z/OS is the
Information Management Software for z/OS Solutions Information Center
(http://publib.boulder.ibm.com/infocenter/imzic), which also contains information
about IMS, QMF, and many DB2 and IMS Tools products. The majority of the DB2
for z/OS information in this information center is also available in the books that
are identified in the following table. You can access these books at the DB2 for
z/OS library Web site (http://www.ibm.com/software/data/db2/zos/library.html)
or at the IBM Publications Center
(http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi).
Table 190. DB2 Version 9.1 for z/OS book titles
Title                                                     Publication   Available in    Available   Available in    Available in
                                                          number        information     in PDF      BookManager®    printed book
                                                                        center                      format
DB2 Version 9.1 for z/OS Administration Guide             SC18-9840     X               X           X               X
DB2 Version 9.1 for z/OS Application Programming &        SC18-9841     X               X           X               X
  SQL Guide
DB2 Version 9.1 for z/OS Application Programming Guide    SC18-9842     X               X           X               X
  and Reference for Java
DB2 Version 9.1 for z/OS Codes                            GC18-9843     X               X           X               X
DB2 Version 9.1 for z/OS Command Reference                SC18-9844     X               X           X               X
DB2 Version 9.1 for z/OS Data Sharing: Planning and       SC18-9845     X               X           X               X
  Administration
DB2 Version 9.1 for z/OS Diagnosis Guide and Reference    LY37-3218                     X           X               X
  (see note 1)
Notes:
1. DB2 Version 9.1 for z/OS Diagnosis Guide and Reference is available in PDF
and BookManager formats on the DB2 Version 9.1 for z/OS Licensed
Collection kit, LK3T-7195. You can order this Licensed Collection kit on
the IBM Publications Center site
(http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi).
This book is also available in online format in DB2 data set
DSN910.SDSNIVPD(DSNDR).
2. DB2 Version 9.1 for z/OS Reference Summary will be available in 2007.
In the following table, related product names are listed in alphabetic order, and the
associated Web addresses of product information centers or library Web pages are
indicated.
These resources include information about the following products and others:
v DB2 Administration Tool
v DB2 Automation Tool
v DB2 DataPropagator (also known as WebSphere® Replication Server for z/OS)
v DB2 Log Analysis Tool
v DB2 Object Restore Tool
v DB2 Query Management Facility
v DB2 SQL Performance Analyzer
v Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS (includes Buffer Pool
Analyzer and Performance Monitor)
DB2 Universal Database™ for iSeries™   Information center: http://www.ibm.com/systems/i/infocenter/
Debug Tool for z/OS Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
Enterprise COBOL for z/OS   Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
Enterprise PL/I for z/OS Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
IMS Information center: http://publib.boulder.ibm.com/infocenter/imzic
Table 191. Related product information resource locations (continued)
Related product Information resources
IMS Tools One of the following locations:
v Information center: http://publib.boulder.ibm.com/infocenter/imzic
v Library Web site: http://www.ibm.com/software/data/db2imstools/library.html
These resources have information about the following products and others:
v IMS Batch Terminal Simulator for z/OS
v IMS Connect
v IMS HALDB Conversion and Maintenance Aid
v IMS High Performance Utility products
v IMS DataPropagator
v IMS Online Reorganization Facility
v IMS Performance Analyzer
PL/I Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
This resource includes information about the following z/OS elements and components:
v Character Data Representation Architecture
v Device Support Facilities
v DFSORT
v Fortran
v High Level Assembler
v NetView®
v SMP/E for z/OS
v SNA
v TCP/IP
v TotalStorage® Enterprise Storage Server®
v VTAM
v z/OS C/C++
v z/OS Communications Server
v z/OS DCE
v z/OS DFSMS
v z/OS DFSMS Access Method Services
v z/OS DFSMSdss
v z/OS DFSMShsm
v z/OS DFSMSdfp™
v z/OS ICSF
v z/OS ISPF
v z/OS JES3
v z/OS Language Environment®
v z/OS Managed System Infrastructure
v z/OS MVS
v z/OS MVS JCL
v z/OS Parallel Sysplex®
v z/OS RMF™
v z/OS Security Server
v z/OS UNIX System Services
z/OS XL C/C++ http://www.ibm.com/software/awdtools/czos/library/
The following information resources from IBM are not necessarily specific to a
single product:
v The DB2 for z/OS Information Roadmap; available at:
http://www.ibm.com/software/data/db2/zos/roadmap.html
v DB2 Redbooks™ and Redbooks about related products; available at:
http://www.ibm.com/redbooks
v IBM Educational resources:
– Information about IBM educational offerings is available on the Web at:
http://www.ibm.com/software/sw-training/
– A collection of glossaries of IBM terms in multiple languages is available
on the IBM Terminology Web site at:
http://www.ibm.com/ibm/terminology/index.html
v National Language Support information; available at the IBM Publications
Center at:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi
v SQL Reference for Cross-Platform Development; available at the following
developerWorks® site:
http://www.ibm.com/developerworks/db2/library/techarticle/0206sqlref/0206sqlref.html
The following information resources are not published by IBM but can be useful to
users of DB2 for z/OS and related products:
v Database design topics:
– DB2 for z/OS and OS/390® Development for Performance Volume I, by Gabrielle
Wiorkowski, Gabrielle & Associates, ISBN 0-96684-605-2
– DB2 for z/OS and OS/390 Development for Performance Volume II, by Gabrielle
Wiorkowski, Gabrielle & Associates, ISBN 0-96684-606-0
– Handbook of Relational Database Design, by C. Fleming and B. Von Halle,
Addison Wesley, ISBN 0-20111-434-8
v Distributed Relational Database Architecture™ (DRDA) specifications;
http://www.opengroup.org
v Domain Name System: DNS and BIND, Third Edition, Paul Albitz and Cricket
Liu, O’Reilly, ISBN 0-59600-158-4
v Microsoft Open Database Connectivity (ODBC) information;
http://msdn.microsoft.com/library/
v Unicode information; http://www.unicode.org
CHECK LOB utility (continued) compatibility
option descriptions 105 CATENFM utility 55
output 103 CATMAINT utility 60
restarting 111 CHECK DATA utility 81
syntax diagram 104 CHECK INDEX utility 98
terminating 110 CHECK LOB utility 111
CHECK-pending (CHKP) status COPY utility 142
resetting COPYTOCOPY utility 168
for a LOB table space 110 declared temporary table 4
CHECK-pending (CHKP) status DEFINE NO objects 4
CHECK DATA utility 61, 78 DIAGNOSE utility 178
description 897 EXEC SQL utility 183
indoubt referential integrity 288 LISTDEF utility 200
resetting 897 LOAD utility 285
for a table space 288 MERGECOPY utility 314
CHECK, option of DSN1COPY utility 786 MODIFY RECOVERY utility 324
CHECKPAGE, option of COPY utility 122 MODIFY STATISTICS utility 334
checkpoint queue OPTIONS utility 341
printing contents 753 QUIESCE utility 351
updating 742 REBUILD INDEX utility 373
CHECKPT, option of DSNJU003 utility 742 RECOVER utility 415
CHKP REORG INDEX utility 443
See CHECK-pending (CHKP) status REORG TABLESPACE utility 514
CHKPTRBA, option of DSNJU003 utility 738 REPAIR utility 554
CLOB REPORT utility 569
option of LOAD utility 247 RESTORE SYSTEM utility 590
option of UNLOAD utility 692 RUNSTATS utility 613
CLOBF STOSPACE utility 638
option of LOAD utility for CHAR 237, 239 TEMPLATE utility 658
CLONE UNLOAD utility 713
option of CHECK DATA utility 64 utilities access description 36
option of CHECK INDEX utility 87 COMPLETE, option of CATENFM utility 54
option of COPY utility 117 compression
option of DIAGNOSE utility 176 data, UNLOAD utility description 712
option of MERGECOPY utility 309 estimating disk savings 775
option of MODIFY RECOVERY utility 320 compression dictionary, building 501
option of QUIESCE utility 347 concurrency
option of REBUILD INDEX 360 BACKUP SYSTEM utility 50
option of RECOVER utility 388 utilities access description 36
option of REORG INDEX utility 424 utility jobs 37
option of REORG TABLESPACE utility 459 with real-time statistics 930
CLONED, option of LISTDEF utility 192 concurrent copies
CLUSTERING column of SYSINDEXES catalog table, use by COPYTOCOPY utility restriction 155
RUNSTATS 622 invoking 123
CLUSTERRATIOF column, SYSINDEXES catalog table 622 making 135
CMON, option of CATENFM utility 54 CONCURRENT, option of COPY utility 123, 135
COLCARDF column conditional restart control record
SYSCOLUMNS catalog table 620 creating 736, 747
cold start reading 764
example, creating a conditional restart control record 747 sample 764
specifying for conditional restart 734 status printed by print log map utility 753
COLDEL connection-name, naming convention xii
option of LOAD utility 220 CONSTANT, option of UNLOAD utility 691
option of UNLOAD utility 671 constraint violations, checking 61
COLGROUP, option of RUNSTATS utility 598 CONTINUE, option of RECOVER utility 395
COLGROUPCOLNO column, SYSCOLDIST catalog table 621 CONTINUEIF, option of LOAD utility 224
COLUMN continuous operation, recovering an error range 395
option of LOAD STATISTICS 214 control interval
option of RUNSTATS utility 598 LOAD REPLACE, effect of 256, 291
COLVALUE column, SYSCOLDIST catalog table 621 RECOVER utility, effect of 416
COMMAND, option of DSN1SDMP utility 841 REORG TABLESPACE, effect of 520
commit point control statement
DSNU command 28 See utility control statement
REPAIR utility LOCATE statement 540 CONTROL, option of DSNU CLIST command 26
restarting after out-of-space condition 41 conversion of data, LOAD utility 273
comparison operators 472 CONVERT, option of CATENFM utility 54
COPYTOCOPY utility (continued) DATA ONLY, option of BACKUP SYSTEM utility 47
generation data groups, defining 165 data set
input copy, determining which to use 165 name format in ICF catalog 121
instructions 161 name limitations 656
JCL parameters 162 data sets
lists, copying 158 BACKUP SYSTEM utility 49
making copies 163 CATENFM utility 54
multiple statements, using 164 CATMAINT utility 59
objects, copying from tape 166 change log inventory utility (DSNJU003) 743
option descriptions 158 CHECK DATA utility 73
output 155 CHECK INDEX utility 90
output data sets CHECK LOB utility 107, 110
size 162 concatenating 19, 21
specifying 160 COPY utility 125
partitions, copying 158 copying partition-by-growth table spaces 134
restarting 167 copying table space in separate jobs 133
restrictions 155 COPYTOCOPY utility 162
syntax diagram 156 definitions, changing during REORG 500
SYSIBM.SYSCOPY records, updating 164 DIAGNOSE utility 178
tape mounts, retaining 165 disposition
terminating 167 defaults for dynamically allocated data sets 649
using TEMPLATE 164 defaults for dynamically allocated data sets on
correlation ID, naming convention xii RESTART 650
COUNT disposition, controlling 20
option of LOAD STATISTICS 215 DSNJCNVB utility 725
option of REBUILD INDEX utility 362 for copies, naming 130
option of REORG INDEX utility 430 input, using 19
option of RUNSTATS utility 598 LOAD utility 250
COUNT option MERGECOPY utility 311
option of RUNSTATS utility 600, 605 MODIFY RECOVERY utility 322
CREATE option of DSNJU003 utility 736 MODIFY STATISTICS utility 332
CRESTART, option of DSNJU003 utility 736 naming convention xiii
cross loader function 181 output, using 19
CSRONLY, option of DSNJU003 utility 739 QUIESCE utility 348
CURRENT DATE, incrementing and decrementing value 698 REBUILD INDEX utility 364, 366
CURRENT option of REPORT utility 565 RECOVER utility 391
current restart, description 39 recovering, partition 394
CURRENTCOPYONLY option of RECOVER utility 385 REORG INDEX utility 433
cursor REORG TABLESPACE utility 486, 493
naming convention xiii REPAIR utility 549
CYL, option of TEMPLATE statement 652 REPORT utility 566
RESTORE SYSTEM utility 589
RUNSTATS utility 608
D security 20
space parameter, changing 439
data
space parameter, changing during REORG 500
adding 259
specifying 19
compressing 266
STOSPACE utility 636
converting 267
UNLOAD utility 699
converting with LOAD utility 273
data sharing
deleting 259
backing up group 45
DATA
real-time statistics 929
option of CHECK DATA utility 64
restoring data 589
option of LOAD utility 210
running online utilities 37
option of REPAIR DUMP 546
data type, specifying with LOAD utility 237
option of REPAIR REPLACE 544
data-only backup
option of REPAIR VERIFY 543
example 50
option of UNLOAD utility 665
explanation 47
data compression
database
dictionary
limits 851
building 266, 501
naming convention xiii
number of records needed 266
DATABASE
using again 266
option of LISTDEF utility 190
LOAD utility
option of REPAIR utility 547
description 266
DATACLAS, option of TEMPLATE statement 648
KEEPDICTIONARY option 216, 266
DATAONLY, option of DSN1LOGP utility 810
REORG TABLESPACE utility, KEEPDICTIONARY
DataRefresher 267
option 476
DIAGNOSE utility (continued) DSN1CHKR utility (continued)
examples (continued) dump format, printing 768
forcing a dump 179 environment 769
forcing an abend 179 examples
service level, finding 179 table space 772
suspending utility execution 180 temporary data set 770
TYPE 179 formatting table space pages on output 768
WAIT 180 hash value, specifying for DBID 768
forcing an abend 178 option descriptions 767
instructions 178 output 773
option descriptions 175 pointers, following 768
restarting 178 restrictions 770
syntax diagram 174 running 769
terminating 178 syntax diagram 767
WAIT statement SYSPRINT DD name 769
description 176 SYSUT1 DD name 769
syntax diagram 174 valid table spaces 770
DIAGNOSE, option of REPAIR utility 547 DSN1COMP utility
DIR, option of TEMPLATE statement 652 authorization required 778
directory control statement 778
integrity, verifying 767 data set size, specifying 776
MERGECOPY utility, restrictions 314 data sets required 779
order of recovering objects 397 DD statements
disability xvi SYSPRINT 779
discard data set, specifying DD statement for LOAD SYSUT1 779
utility 223 description 775
DISCARD, option of REORG TABLESPACE utility 482 environment 778
DISCARDDN examples
option of LOAD PART 232 free space 782
option of LOAD utility 223 FREEPAGE 782
option of REORG TABLESPACE utility 480 full image copy 781
DISCARDS, option of LOAD utility 223 FULLCOPY 781
DISCDSN, option of DSNU CLIST command 27 LARGE 782
DISP, option of TEMPLATE statement 649 NUMPARTS 782
DISPLAY DATABASE command, displaying range of pages in PCTFREE 782
error 395 REORG 782
DISPLAY Utility command ROWLIMIT 782
using with BACKUP SYSTEM for data sharing group 49 free pages, specifying 777
DISPLAY UTILITY command free space
description 35 including in compression calculations 780
using with RESTORE SYSTEM utility on a data sharing specifying 777
group 589 FREEPAGE 780
DISPLAY, option of DIAGNOSE utility 175 full image copy as input, specifying 777
displaying status of DB2 utilities 35 identical data rows 781
disposition, data sets, controlling 20 LARGE data sets, specifying 776
DL/I, loading data 267 maximum number of rows to evaluate, specifying 777
DOUBLE, option of UNLOAD utility 690 message DSN1941 783
DRAIN option descriptions 776
option of REORG INDEX utility 427 output
option of REORG TABLESPACE utility 466 example 783
DRAIN_WAIT interpreting 783
option of CHECK DATA utility 64 sample 780, 783
option of CHECK INDEX utility 88 page size of input data set, specifying 776
option of CHECK LOB utility 105 partitions, specifying number 776
option of REBUILD INDEX utility 360 PCTFREE 780
option of REORG INDEX utility 426 prerequisite actions 778
option of REORG TABLESPACE utility 465 recommendations 779
DROP, option of REPAIR utility 547 REORG 780
DSN, option of TEMPLATE statement 644 running 778, 780
DSN1CHKR utility savings comparable to REORG 777
anchor point, mapping 768 savings estimate 780
authorization 769 syntax diagram 775
concurrent copy, compatibility 770 DSN1COPY utility
control statement 769 additional volumes, for SYSUT2 796
data sets needed 769 altering a table before running 799
description 767 authorization required 792
DSN1COPY utility, running before 770 checking validity of input 786
DSN1PRNT utility (continued) DSNDB01.SYSCOPYs
examples copying restrictions 129
printing a data set in hexadecimal format 835 DSNDB01.SYSUTILX
printing a nonpartitioning index 836 copying restrictions 129
printing a partitioned data set 836 recovery information 569
printing a single page of an image copy 836 DSNDB06.SYSCOPY
filtering pages by value 832 recovery information 569
formatting output 833 DSNJCNVB utility
full image copy, specifying 829 authorization required 725
incremental copy, specifying 829 control statement 725
inline copy, specifying 829 data sets used 725
LARGE data set, specifying 829 description 725
LOB table space, specifying 829 dual BSDSs, converting 726
number of partitions, specifying 831 environment 725
option descriptions 828 example 726
output 836 output 726
page size, determining 835 prerequisite actions 725
page size, specifying 829 running 726
piece size, specifying 830 SYSPRINT DD name 726
processing encrypted data 835 SYSUT1 DD name 725
recommendations 835 SYSUT2 DD name 726
running 834 DSNJLOGF utility
syntax diagram 828 control statement 727
SYSUT1 data set, printing on SYSPRINT data set 831 data sets required 727
DSN1SDMP utility description 727
action, specifying 840, 841 environment 727
authorization required 842 example 727
buffers, assigning 843 output 728
control statement 842 SYSPRINT DD name 727
DD statements SYSUT1 DD name 727
SDMPIN 843 DSNJU003 (change log inventory) utility
SDMPPRNT 843 See change log inventory utility
SDMPTRAC 843 DSNJU003 utility
SYSABEND 843 active logs
SYSTSIN 843 adding 745
description 837 changing 745
dump, generating 844 deleting 745
environment 842 enlarging 745
examples recording 745
abend 845, 846 altering references 748
dump 847 archive logs
second trace 848 adding 746
skeleton JCL 845 changing 746
instructions 843 deleting 746
option descriptions 838 authorization required 743
output 848 BSDS timestamp field, updating 744
required data sets 843 comment, in SYSIN records 744
running 842 control statement 743
selection criteria, specifying 838 data sets
syntax diagram 837 cataloging 734
trace destination 838 declaring 732
traces data sets needed 743
modifying 844 DELETE statement 748
stopping 844 description 729
DSN8G810, updating space information 638 environment 743
DSN8S81E table space, finding information about space examples
utilization 638 adding a communication record to BSDS 750
DSNACCOR stored procedure adding a communication record with an alias to
description 873 BSDS 750
example call 886 adding active log 745
option descriptions 874 adding archive log 746
output 890 adding archive log data set 750
syntax diagram 874 alias ports 750
DSNAME, option of DSNJU003 utility 733 changing high-level qualifier 749
DSNDB01.DBD01 creating conditional restart control record 750
copying restrictions 129 deleting a data set 750
recovery information 569 deleting active log 745
encrypted data EXEC SQL utility (continued)
running DSN1PRNT on 835 option descriptions 182
running REORG TABLESPACE on 484 output 181
running REPAIR on 549 restarting 183
running UNLOAD on 699 syntax diagram 182
running utilities on 5 terminating 183
encryption EXEC statement
DSN1PRNT utility effect on 835 built by CLIST 30
REORG TABLESPACE utility effect on 484 description 34
REPAIR utility effect on 549 executing
UNLOAD utility effect on 699 utilities, creating JCL 34
utilities effect on 5 utilities, DB2I 21
END FCINCREMENTAL utilities, JCL procedure (DSNUPROC) 31
explanation 47 exit procedure, LOAD utility 279
END FCINCREMENTAL, option of BACKUP SYSTEM EXPDL, option of TEMPLATE statement 648
utility 47 EXTENTS column
END, option of DIAGNOSE utility 177 SYSINDEXPART catalog table, use by RUNSTATS 627
ENDLRSN, option of DSNJU003 utility 735 SYSTABLEPART catalog table 625
ENDRBA, option of DSNJU003 utility 734 extracted key, calculating, LOAD utility 253
ENDTIME, option of DSNJU003 utility 736, 737
ENFMON, option of CATENFM utility 54
ENFORCE, option of LOAD utility 222, 265
ERRDDN
F
fallback recovery considerations 498
option of CHECK DATA utility 68
FARINDREF column of SYSTABLEPART catalog table, use by
option of LOAD utility 222
RUNSTATS 625
error data set
FAROFFPOSF column of SYSINDEXPART catalog table
CHECK DATA utility 68, 74
catalog query to retrieve value for 494
error range recovery 395
description 628
ERROR RANGE, option of RECOVER utility 388
field procedure, LOAD utility 279
error, calculating, LOAD utility 254
filter data set, determining size 126
ESA data compression, estimating disk savings 775
FILTER, option of DSN1LOGP utility 814
ESCAPE clause 475, 696
FILTER, option of DSN1SDMP utility 841
ESTABLISH FCINCREMENTAL
FILTERDDN, option of COPY utility 123
explanation 47
FIRSTKEYCARDF column, SYSINDEXES catalog table 622
ESTABLISH FCINCREMENTAL, option of BACKUP SYSTEM
FLOAT
utility 47
option of LOAD utility 221
EVENT, option of OPTIONS statement 339
option of UNLOAD utility 672, 690
exception table
FLOAT EXTERNAL, option of LOAD utility 246
columns 71
FLOAT, option of LOAD utility 245
creating 71
FOR EXCEPTION, option of CHECK DATA utility 66
definition 74
FOR, option of DSN1SDMP utility 841
example 72
FOR2, option of DSN1SDMP utility 841
with LOB columns 72
force
EXCEPTIONS
example 51
option of CHECK DATA utility 68
FORCE, option of BACKUP SYSTEM utility 47
option of CHECK LOB utility 106
FORCEROLLUP
exceptions, specifying the maximum number
option of LOAD STATISTICS 216
CHECK DATA utility 68
option of REBUILD INDEX utility 363
CHECK LOB utility 106
option of REORG INDEX utility 431
EXCLUDE
option of REORG TABLESPACE utility 480
option of LISTDEF 194
option of RUNSTATS utility 602, 607
EXCLUDE, option of LISTDEF utility 187
foreign key, calculating, LOAD utility 254
EXEC SQL utility
FORMAT
authorization 181
option of DSN1CHKR utility 768
compatibility 183
option of DSN1PRNT utility 833
cursors 182
option of LOAD utility 219
declare cursor statement
FORMAT SQL/DS, option of LOAD utility 219
description 182
FORMAT UNLOAD, option of LOAD utility 219
syntax diagram 182
FORWARD, option of DSNJU003 utility 738
description 181
free space
dynamic SQL statements 182
REORG INDEX utility 445
examples
FREEPAGE, option of DSN1COMP utility 777
creating a table 183
FREESPACE column of SYSLOBSTATS catalog table 630
declaring a cursor 183
FREQUENCYF column, SYSCOLDIST catalog table 621
inserting rows into a table 183
FREQVAL
using a mapping table 529
option of LOAD STATISTICS 215
execution phase 181
option of REBUILD INDEX utility 362
index (continued) INTEGER EXTERNAL
rebuilding in parallel 368 option of LOAD utility 243
rebuilt, recoverability 372 option of UNLOAD utility 686
version numbers, recycling 291 Interactive System Productivity Facility (ISPF) 21
INDEX INTO TABLE, option of LOAD utility 226
option of CHECK INDEX utility 86 invalid LOB 79
option of COPY utility 118 invalid SQL terminator characters 912
option of COPYTOCOPY utility 158 invalidated plans and packages
option of LISTDEF utility 191 identifying 60
option of MODIFY STATISTICS utility 331 ISPF (Interactive System Productivity Facility), utilities
option of REORG INDEX utility 424 panels 21
option of REPAIR utility ITEMERROR, option of OPTIONS statement 339
LEVELID statement 536
LOCATE statement 542
SET statement 538
option of REPORT utility 564
J
JCL (job control language)
option of RUNSTATS utility 599, 604
COPYTOCOPY utility 164
INDEX ALL, option of REPORT utility 564
DSNUPROC utility 24
INDEX NONE, option of REPORT utility 564
JCL PARM statement 338
index partitions, rebuilding 367
JES3 environment, making copies 314
index space
JES3DD, option of TEMPLATE statement 654
recovering 355
job control language
index space status, resetting 550
See JCL (job control language)
INDEX
job control language (JCL)
option of RECOVER utility 383
See JCL (job control language)
option of REORG TABLESPACE utility 478
JOB statement, built by CLIST 29
indexes
copying 134
INDEXSPACE
option of COPY utility 118 K
option of COPYTOCOPY utility 158 KEEPDICTIONARY
option of LISTDEF utility 190 option of LOAD PART 231
option of MODIFY STATISTICS utility 331 option of LOAD utility 216, 266
option of REBUILD INDEX utility 358 option of REORG TABLESPACE utility 266, 476
option of RECOVER utility 383 key
option of REORG INDEX utility 424 calculating, LOAD utility 253
option of REPAIR utility foreign, LOAD operation 264
SET statement 538 length
option of REPAIR utility for LEVELID statement 536 maximum 853
option of REPORT utility 564 primary, LOAD operation 264, 265
INDEXSPACES, option of LISTDEF utility 188 KEY
indoubt state 738 option of OPTIONS utility 340
INDSN, option of DSNU CLIST command 26 option of REPAIR utility on LOCATE statement 541
inflight state 738 KEYCARD
informational COPY-pending (ICOPY) status option of LOAD STATISTICS 215
COPY utility 119 option of REBUILD INDEX utility 362
description 899 option of REORG INDEX utility 430
resetting 113, 138, 899 option of RUNSTATS utility 599, 604
informational referential constraints, LOAD utility 205
INLCOPY
option of DSN1COPY utility 787
option of DSN1PRNT utility 829
L
labeled-duration expression 472
inline COPY
LARGE
base table space 282
option of DSN1COMP utility 776
copying 164
option of DSN1COPY utility 788
creating with LOAD utility 269
option of DSN1PRNT utility 829
creating with REORG TABLESPACE utility 503
large partitioned table spaces, RUNSTATS utility 615
inline statistics
LAST
collecting during LOAD 282
option of MODIFY RECOVERY utility 321
using in place of RUNSTATS 612
LEAFDIST column of SYSINDEXPART catalog table 629
input fields, specifying 274
LEAFDISTLIMIT, option of REORG INDEX utility 429
INPUT, option of CATENFM utility 54
LEAFFAR column of SYSINDEXPART catalog table 628
INSTANCE, option of DIAGNOSE utility 177, 178
LEAFLIM, option of DSN1COMP utility 777
INTEGER
LEAFNEAR column of SYSINDEXPART catalog table 628
option of LOAD utility 243
LEAST, option of RUNSTATS utility 599, 605
option of UNLOAD utility 686
LENGTH, option of REPAIR utility 545
level identifier, resetting 535
LOAD utility (continued) LOAD utility (continued)
delimited files 261 foreign keys
delimiters 261 calculating 254
description 205 invalid values 264
DFSORT data sets, device type 223 format, specifying 219
discard data set free space 278
declaring 223 identity columns 258
maximum number of records 223 improving parallel processing 270
discarded rows, inline statistics 282 improving performance 271
duplicate keys, effects 255 informational referential constraints 205
dynamic SQL 268 inline copy 282
effect on real-time statistics 921 inline COPY 269
ENFORCE NO inline statistics, collecting 282
actions to take 289 input data set, specifying 210
consequences 265 input data, preparing 250
enforcing constraints 222 input fields, specifying 274
error work data set, specifying 223 instructions 254
error, calculating 254 into-table spec 226
examples KEEPDICTIONARY option 266
CHECK DATA 289 keys
CHECK DATA after LOAD RESUME 290 calculating 253
concatenating records 295 estimating number 271
CONTINUEIF 295 LOAD INTO TABLE options 229
COPYDDN 299 loading data from DL/I 267
CURSOR 304 LOB column 279
data 293, 294, 303 LOG, using on LOB table space 280
declared cursors 304 LOG, using on XML table space 281
default values, loading 296 logging 217
DEFAULTIF 296 map, calculating 254
DELIMITED 294 multilevel security restriction on REPLACE option 205
delimited files 294 multiple tables, loading 226
ENFORCE CONSTRAINTS 296 null values, setting criteria for 248
ENFORCE NO 297 option descriptions 210, 229
field positions, specifying 292 ordering records 255
inline copies, creating 299 output 205
KEEPDICTIONARY 266 parallel index build
loading 305 data sets used 276
loading by partition 259 sort subtasks 276, 277
LOBs 305 sort work file, estimating size 277
null values, loading 296 partitions
NULLIF 296 copying 287
parallel index build 298 loading 230, 259
PART 293 performance recommendations 269
partition parallelism 303, 304 preprocessing 250
POSITION 292 primary key
referential constraints 296, 297 duplicate values 264
REPLACE 293 missing values 265
replace table in single-table table space 256 REBUILD-pending status 278
replace tables in multi-table table space 257 resetting 288
replacing data in a given partition 293 RECOVER-pending status 278
selected records, loading 293 recovering failed job 290
SORTKEYS 298 recycling version numbers 291
STATISTICS 300 referential constraints 264
statistics, collecting 300 REORG-pending status
Unicode input, loading 302 loading data in 278
UNICODE option 302 REPLACE option 256
EXEC SQL statements 268 replacing data 212
exit procedure 279 restarting 283
extracted keys, calculating number 253 restrictive states, compatibility 256
failed job, recovering 290 RESUME YES SHRLEVEL CHANGE, without logging 281
field length reusing data sets 217
defaults 236 row selection criteria 233
determining 235 ROWID columns 258, 279
field names, specifying 235 skipping fields 230
field position, specifying 236 sort work file, specifying 218
field specifications 234 SORTKEYS NO 250
statistics, gathering 214
M MESSAGE, option of DIAGNOSE utility 177
MGMTCLAS, option of TEMPLATE statement 648
MAP missing LOB 78
option of DSN1CHKR utility 768 MIXED
option of REPAIR utility 546 option of LOAD utility 247
map, calculating, LOAD utility 254 option of LOAD utility for CHAR 237
MAPDDN, option of LOAD utility 223 MIXED, option of LOAD utility for VARCHAR 239
MAPPINGTABLE, option of REORG TABLESPACE MODELDCB, option of TEMPLATE statement 648
utility 465 MODIFY RECOVERY utility
MAXERR, option of UNLOAD utility 672 age criteria 320
MAXPRIME, option of TEMPLATE statement 652 authorization 317
MAXRO compatibility 324
option of REBUILD INDEX utility 359 copies, deleting 323
option of REORG INDEX utility 427 data sets needed 322
option of REORG TABLESPACE utility 466 date criteria 320
MAXROWS, option of DSN1COMP utility 777 DBD, reclaiming space 323
MB, option of TEMPLATE statement 652 description 317
media failure, resolving 110 examples
member name, naming convention xiii AGE 325
MEMBER option of DSNJU004 utility 754 CLONE 326
MERGECOPY utility DATE 325
authorization 307 DELETE 325, 327
compatibility 314 deleting all SYSCOPY records 326
COPY utility, when to use 313 deleting SYSCOPY records by age 325
data sets needed 311 deleting SYSCOPY records by date 325
DBD01 309, 314 DSNUM 326
description 307 partitions 326
different types, merging restrictions 313 RETAIN 327
directory table spaces 314 GDG limit 321
examples instructions 322
merged full image copy 316 lists, using 319
merged incremental copy 315 log limit 321
NEWCOPY NO 315 option descriptions 319
NEWCOPY YES 316 partitions, processing 319
TEMPLATE 315 phases of execution 318
full image copy, merging with increment image recent records 321
copies 310 records, deleting 320
individual data sets 313 records, retaining 320
instructions 312 RECOVER-pending status, restriction 322
JES3 environment, making copies 314 recovery index rows, deleting 323
lists, using 308 REORG after adding column, improving performance 324
LOG information, deleting 313 restarting 324
LOG RBA inconsistencies, avoiding 313 syntax diagram 319
NEWCOPY option 312 SYSCOPY records, viewing 322
online copies, merging 312, 313 SYSCOPY, deleting rows 322
option descriptions 308 SYSLGRNX, deleting rows 322
output 307 SYSOBDS entries, deleting 323
output data set terminating 324
local, specifying 310, 311 version numbers, recycling 325
remote, specifying 310, 311 MODIFY STATISTICS utility
partitions, merging copies 309 authorization 329
phases of execution 307 compatibility 334
restarting 314 data sets needed 332
restrictions 307 description 329
syntax diagram 308 examples
SYSCOPY 309, 314 ACCESSPATH 334
SYSUTILX 309, 314 AGE 334
temporary data set, specifying 309, 311 DATE 334
terminating 314 deleting access path records by date 334
type of copy, specifying 312 deleting history records by age 334
work data set, specifying 311 deleting index statistics 335
message deleting space statistics records by age 334
DSNU command 30 SPACE 334
MERGECOPY utility 309 instructions 333
MODIFY RECOVERY utility 317 lists, using 330
QUIESCE utility 345 option descriptions 330
RECOVER utility 379 output 329
REORG INDEX utility 419
OPTIONS utility (continued) partition-by-growth table space
PREVIEW with TEMPLATE 338 loading 260
restarting 341 partition-by-growth table spaces, rebuilding 367
syntax diagram 337 partition-by-growth table spaces, reorganizing 508
TEMPLATE definition library, specifying 339 partition, copying 129
terminating 341 partitioned table space
order of recovering objects 397 loading 259
ORGRATIO column of SYSLOBSTATS catalog table 630 replacing a partition 259
orphan LOB 78 unloading 700
out-of-synch LOB 78 partitioned table spaces, reorganizing 508
OUTDDN, option of REPAIR utility 548 partitions
owner, creator, and schema concatenating copies with UNLOAD utility 702
renaming 59 rebalancing with REORG 501
ownership of objects PARTLEVEL, option of LISTDEF utility 191
changing from an authorization ID to a role 59 PASSWORD, option of DSNJU003 utility 742
pattern-matching characters, LISTDEF 196
patterns
P advanced information 476, 697
PCTFREE, option of DSN1COMP utility 777
page
PCTPRIME, option of TEMPLATE statement 652
checking 122
PCTROWCOMP column, SYSTABLES catalog table 620
damaged, repairing 550
pending status, resetting 895
recovering 395
PERCACTIVE column of SYSTABLEPART catalog table, use by
size, relationship to number of pages 832
RUNSTATS 626
PAGE
PERCDROP column of SYSTABLEPART catalog table, use by
option of DSN1CHKR utility 769
RUNSTATS 626
option of DSN1LOGP utility 811
performance
option of RECOVER utility 384
affected by
option of REPAIR utility on LOCATE statement 540
I/O activity 494
PAGE option
table space organization 495
RECOVER utility 395
COPY utility 138
REPAIR utility 542
LOAD utility, improving 269
page set REBUILD-pending (PSRBD) status
monitoring with the STOSPACE utility 637
description 372, 899
RECOVER utility 409
resetting 372, 899
REORG INDEX utility, improving 440
PAGES, option of REPAIR utility 546
REORG TABLESPACE utility, improving 503
PAGESAVE column of SYSTABLEPART catalog table, use by
RUNSTATS utility 612
RUNSTATS 626
phase restart, description 39
PAGESIZE
phases of execution
option of DSN1COMP utility 776
BACKUP SYSTEM utility 45
option of DSN1COPY utility 787
CHECK DATA utility 62
option of DSN1PRNT utility 829
CHECK INDEX utility 85
panel
CHECK LOB utility 103
Control Statement Data Set Names 24
COPY utility 114
Data Set Names 23
COPYTOCOPY utility 156
DB2 Utilities 21, 22
description 36
PARALLEL
EXEC SQL utility 181
option of COPY utility 113, 122
LISTDEF utility 185
option of RECOVER utility 385
LOAD utility 205
parallel index build 368
MERGECOPY utility 307
parsing rules, utility control statements 18, 723
MODIFY RECOVERY utility 318
PART
MODIFY STATISTICS utility 329
option of CHECK DATA utility 64
OPTIONS utility 337
option of CHECK INDEX utility 87
QUIESCE utility 345
option of LOAD utility 230, 259
REBUILD INDEX utility 355
option of QUIESCE utility 347
RECOVER utility 380
option of REBUILD INDEX utility 358
REORG INDEX utility 420
option of REORG INDEX utility 424
REORG TABLESPACE utility 451
option of REORG TABLESPACE utility 460
REPAIR utility 533
option of REPAIR utility
REPORT utility 561
LOCATE INDEX and LOCATE INDEXSPACE
RESTORE SYSTEM utility 585
statements 542
RUNSTATS utility 594
LOCATE TABLESPACE statement 540
STOSPACE utility 635
SET TABLESPACE and SETINDEX options 538
TEMPLATE utility 641
option of REPAIR utility for LEVELID 536
UNLOAD utility 663
option of RUNSTATS utility 597, 599, 604
utilities
option of UNLOAD utility 666, 700
CATENFM 53
real-time statistics tables (continued) records, loaded, ordering 255
effect of SQL operations 928 RECOVER utility
effect of updating partitioning keys 929 authorization 379
recovering 930 catalog and directory objects 397
setting up 919 catalog table spaces, recovering 401
setting update interval 919 CHECK-pending status, resetting 407
starting 919 compatibility 415
REAL, option of UNLOAD utility 690 compressed data, recovering 408
REBALANCE, option of REORG TABLESPACE utility 460 concurrent copies, improving recovery performance 385
rebinding, recommended after LOAD 282 damaged media, avoiding 413
REBUILD INDEX utility data sets needed 391
access, specifying 366 description 379
authorization 355 DFSMShsm data sets 410
building indexes in parallel 368 effects 416
catalog indexes 372 error range, recovering 395
compatibility 373 examples
control statement, creating 366 CLONE 418
data sets needed 364, 366 concurrent copies 417
description 355 CURRENTCOPYONLY 417
DRAIN_WAIT, when to use 367 different tape devices 418
dynamic DFSORT and SORTDATA allocation, DSNUM 393, 416
overriding 371 error range 395
effect on real-time statistics 925 index image copy, recovering to 417
effects 375 last image copy, recovering to 416
examples LIST 418
all indexes in a table space, rebuilding 377 list of objects, recovering in parallel 418
CLONE 378 list of objects, recovering to point in time 418
index partitions, rebuilding 376 LRSN, recovering to 417
inline statistics 377 multiple table spaces 393
multiple partitions, rebuilding 376 PARALLEL 418
partitions, rebuilding all 376 partition, recovering 416
restrictive states, condition 377 partitions 393
SHRLEVEL CHANGE 378 point-in-time recovery 417
single index, rebuilding 376 RESTOREBEFORE 418
index partitions 367 single table space 393
instructions 366 table space, recovering 416
option descriptions 357 TAPEUNITS 418
partition-by-growth table spaces 367 TOLASTCOPY 416
performance recommendations 367 TOLASTFULLCOPY 417
phases of execution 355 TOLOGPOINT 417
prerequisites 364 TORBA 393
REBUILD-pending status, resetting 372 fallback 412
recoverability of rebuilt index 372 hierarchy of dependencies 399
recycling version numbers 375 incremental image copies 394
restarting 373 input data sets 392
several indexes instructions 392
performance 367 JES3 environment 411
SHRLEVEL CHANGE lists of objects 393
log processing 366 lists, using 383
when to use 367 LOB data 402
SHRLEVEL option 366 LOGAPPLY phase, optimizing 410
slow log processing, operator actions 366 mixed volume IDs 413
sort subtasks for parallel build 371 non-DB2 data sets 396
sort subtasks for parallel index build, determining option descriptions 383
number 371 order of recovery 399
sort work file size 371 output 379
syntax diagram 356 pages, recovering 384, 395
terminating 373 parallel recovery 385, 394
work data sets, calculating size 365 partitions, recovering 384, 394
REBUILD-pending (RBDP) status performance recommendations 409
description 372, 899 phases of execution 380
resetting 412, 899 point-in-time recovery
REBUILD-pending (RBDP) status performing 401
set by LOAD utility 288 planning for 400
REBUILD-pending star (RBDP*) status, resetting 372 point-time-recovery
REBUILD, option of REPAIR utility 548 planning for 406
RECDS, option of DSNU CLIST command 27 RBA, recovering to 384
REORG INDEX utility (continued) REORG TABLESPACE utility (continued)
time-out condition, actions for 428 examples (continued)
unloading data, action after 429 rebalancing partitions 501
version numbers, recycling 445 RETRY 526
versions, effect on 445 RETRY_DELAY 526
waiting time when draining for SQL 426 sample REORG output for conditional REORG 525
REORG TABLESPACE utility sample REORG output for draining table space 529
access, specifying 463, 495 sample REORG output that shows if REORG limits
actions after running 518 have been met 524
authorization 450 SCOPE PENDING 532
building indexes in parallel 505 SHRLEVEL CHANGE 522
catalog and directory sort input data set, specifying 521
considerations 482 statistics, updating 523
determining when to reorganize 499 table space, reorganizing 521
limitations for reorganizing 499 unload data set, specifying 521
phases for reorganizing 500 failed job, recovering 511
reorganizing 498 fallback recovery considerations 498
compatibility indexes, building in parallel 505
with all other utilities 514 inline copy 503
with CHECK-pending status 486 instructions 493
with REBUILD-pending status 485 interrupting temporarily 500
with RECOVER-pending status 485 lists, using 459
with REORG-pending status 486 LOB table space
compression dictionary reorganizing 510
building 501 restriction 452
not building new 476 log processing, specifying max time 466
control statement, creating 493 logging, specifying 461
CURRENT DATE option long logs, action taken 467
decrementing 473 LONGLOG action, specifying interval 467
incrementing 473 mapping table
data set example 483
copy, specifying 462 preventing concurrent use 483
discard, specifying 480 specifying name 465
shadow, determining name 491 using 483
unload 497 multilevel security restrictions 450
data sets option descriptions 459
shadow 492, 493 output 449, 518
unload 488 partition-by-growth table spaces, reorganizing 508
unload, specifying name 480 partitioned table spaces, reorganizing 508
work 489 partitions in parallel 503
data sets needed 486, 493 partitions, REORG-pending status considerations 502
deadline for SWITCH phase, specifying 464 performance recommendations
description 449 after adding column 324
DFSORT messages, specifying destination 490 general 503
drain behavior, specifying 466 phases of execution
DRAIN_WAIT, when to use 504 BUILD phase 451
DSNDB07 database, restriction 449 LOG phase 451
dynamic DFSORT and SORTDATA allocation, RELOAD phase, description 451
overriding 501 RELOAD phase, error 508
effects 519 SORT phase 451
error in RELOAD phase 508 SORTBLD phase 451
examples SWITCH phase 451, 452
CLONE 532 UNLOAD phase 451
conditional reorganization 524 UTILINIT phase 451
DEADLINE 522 UTILTERM phase 451
deadline for SWITCH phase, specifying 522 preformatting pages 481
determining whether to reorganize 523 processing encrypted data 484
discarding records 530, 531 REBALANCE
DRAIN_WAIT 526 restrictions 484
draining table space 526 rebalancing partitions 501
LONGLOG 522 reclaiming space from dropped tables 498
mapping table, using 529 records, discarding 482
maximum processing time, specifying 522 recycling version numbers 519
parallel index build 521 region size recommendation 483
partition, reorganizing 521 RELOAD phase
range of partitions, reorganizing 523 counting records loaded 509
read-write access, allowing 522 RELOAD phase, encountering an error in 508
REPAIR utility (continued) resetting (continued)
syntax diagram 542 pending status (continued)
VERIFY statement, using with REPLACE and group buffer pool RECOVER-pending (GRECP) 899
DELETE 552 informational COPY-pending (ICOPY) 138, 899
version information page set REBUILD-pending (PSRBD) 899
updating on the same system 536 REBUILD-pending (RBDP) 372
version information, updating when moving to another REBUILD-pending (RBDP), for the RECOVER
system 552 utility 412
warning 548 REBUILD-pending (RBDP), summary 899
REPLACE RECOVER-pending (RECP), for the RECOVER
option of LOAD PART 231 utility 412
option of LOAD utility 212 RECOVER-pending (RECP), summary 900
statement of REPAIR utility REORG-pending (REORP) 901
description 543 restart-pending 903
used in LOCATE block 539 refresh status, REFRESH-pending (REFP) 901
replacing data in a table space 256 warning status, auxiliary warning (AUXW) 896
REPORT RESPORT, option of DSNJU003 utility 739
option of LOAD STATISTICS 215 restart
option of REBUILD INDEX utility 361 conditional control record
option of REORG INDEX utility 430 reading 764
option of REORG TABLESPACE utility 479 sample 764
option of RUNSTATS utility 601, 606 restart-pending (RESTP) status
REPORT utility description 903
authorization 561 resetting 903
catalog and directory 569 RESTART, option of DSNU CLIST command 27
compatibility 569 restarting
control statement, creating 567 performing first two phases only 739
data sets needed 566 problems
description 561 cannot restart REPAIR 553
examples cannot restart REPORT 569
recovery information for index 582 utilities
recovery information for partition 580, 583 BACKUP SYSTEM 50
recovery information for table space 577 CATMAINT 60
referential relationships 579 CHECK DATA 80
SHOWDSNS 584 CHECK INDEX 98
TABLESPACESET 579 CHECK LOB 111
instructions 567 COPY 140
option descriptions 563 COPYTOCOPY 167
output 561 COPYTOCOPY utility 167
phases of execution 561 creating your own JCL 41
RECOVERY current restart 39
output 570 data set name 42
sample output 567, 571 data sharing 37
recovery information, reporting 563 DIAGNOSE 178
restarting 569 EXEC SQL 183
syntax diagram 562 EXEC statement 34
table space recovery information 567 JCL, updating 40
TABLESPACESET LISTDEF 200
output 569 LISTS 42
sample output 569 LOAD 283
terminating 569 MERGECOPY 314
REPORTONLY methods of restart 40
option of COPY utility 121 MODIFY RECOVERY utility 324
option of REORG INDEX utility 429 MODIFY STATISTICS 333
option of REORG TABLESPACE utility 468 OPTIONS 341
REPORTONLY, option of COPY utility 136 out-of-space condition 41
RESET phase restart 39
option of DSN1COPY utility 791 QUIESCE 351
option of REPAIR utility 543 REBUILD INDEX 373
resetting RECOVER 414
DBETE status REORG INDEX 441, 442
DBET error 898 REORG TABLESPACE 511
pending status RESTORE SYSTEM 589
advisory 895 RUNSTATS 613
auxiliary CHECK-pending (ACHKP) 895 STATISTICS keyword 42
CHECK-pending (CHKP) 897 STOSPACE 638
COPY-pending 898 TEMPLATE 658
RUNSTATS utility (continued) shadow data sets (continued)
LOB table space, space statistics 613 CHECK INDEX utility 91
option descriptions CHECK LOB utility 108
options for RUNSTATS INDEX 604 defining
options for RUNSTATS TABLESPACE 596 REORG INDEX utility 436
output 593, 615 REORG TABLESPACE utility 492
partitioned table space, updating statistics 611 estimating size, REORG INDEX utility 436
performance recommendations 612 shift-in character, LOAD utility 234
phases of execution 594 shift-out character, LOAD utility 234
preparation 608 shortcut keys
reporting information 601, 606 keyboard xvi
restarting 613 SHRLEVEL
sample of columns, gathering statistics 598 option of CHECK DATA utility 64
SAMPLE option 598 option of CHECK INDEX utility 87
sort work data sets, specifying number 602, 607 option of CHECK LOB utility 105
space columns updated 624 option of COPY utility
table space partitions, gathering statistics 597 CHANGE 124, 132
TABLESPACE option 593 REFERENCE 124, 132
TABLESPACE syntax diagram 595 option of LOAD utility 212
terminating 613 option of REBUILD INDEX utility 359
updating catalog information 601, 606 option of REORG INDEX utility 425
work data sets option of REORG TABLESPACE utility 463
using for frequency statistics 612 option of REPAIR utility on LOCATE statement 541
option of RUNSTATS utility 600, 606
option of UNLOAD utility 672
S SHRLEVEL CHANGE
option of REPAIR utility on LOCATE statement 541
SAMPLE
SIZE, option of DSNUPROC utility 32
option of LOAD STATISTICS 214
SKIP, option of OPTIONS statement 339
option of REORG TABLESPACE utility 478
SMALLINT
option of RUNSTATS utility 598
option of LOAD utility 243
option of UNLOAD utility 680
option of UNLOAD utility 686
scanning rules, utility control statements 18, 723
SORTDATA, option of REORG TABLESPACE utility 461
SCOPE
SORTDEVT
option of CHECK DATA utility 65, 76
option of CHECK DATA utility 69
option of COPY utility
option of CHECK INDEX utility 89
ALL 124
option of CHECK LOB utility 106
PENDING 124
option of LOAD utility 223
option of REBUILD INDEX utility 360
option of REBUILD INDEX 361
option of REORG TABLESPACE utility 460
option of REORG INDEX 430
SCOPE PENDING, CHECK DATA after LOAD utility 290
option of REORG TABLESPACE utility 481
SECPORT, option of DSNJU003 utility 739
option of RUNSTATS utility 602, 606
SECQTYI column
SORTKEYS
SYSINDEXPART catalog table 630
option of LOAD utility 218, 271
SYSTABLEPART catalog table, use by RUNSTATS 626
SORTNUM
security
option of CHECK DATA utility 69
multilevel with row-level granularity
option of CHECK INDEX utility 89
authorization restrictions for online utilities 20
option of CHECK LOB utility 106
authorization restrictions for stand-alone utilities 724
option of LOAD utility 224
security, data sets 20
option of REBUILD INDEX 361
SEGMENT, option of DSN1COPY utility 787
option of REORG INDEX 430
segmented table spaces, reorganizing 509
option of REORG TABLESPACE utility 481
SELECT statement
option of RUNSTATS utility 602, 607
list
SORTOUT
maximum number of elements 853
data set of LOAD utility, estimating size 252
SYSIBM.SYSTABLESPACE, example 638
space
select-statement, option of EXEC SQL utility 182
DBD, reclaiming 323
SELECT, option of DSN1SDMP utility 838
unused, finding for nonsegmented table space 494
SELECT2, option of DSN1SDMP utility 842
SPACE
semicolon
option of MODIFY STATISTICS utility 331
embedded 912
option of REORG TABLESPACE utility 480
SET INDEX statement of REPAIR utility 537
option of TEMPLATE utility 651
SET INDEXSPACE statement of REPAIR utility 537
SPACE column
SET TABLESPACE statement of REPAIR utility 537
analyzing values 638
setting SQL terminator
SYSTABLEPART catalog table, use by RUNSTATS 626
DSNTIAD 912
SPACE column of SYSINDEXPART catalog table 629
shadow data sets
space statistics 624
CHECK DATA utility 74
syntax diagram (continued)
DSNUPROC JCL procedure 31
T
DSNUTILS stored procedure 862, 871 table
EXEC SQL utility 182 dropping, reclaiming space 498
how to read xiv exception, creating 71
LISTDEF utility 186 multiple, loading 226
LOAD utility 208 replacing 256
MERGECOPY utility 308 replacing data 256
MODIFY RECOVERY utility 319 TABLE
MODIFY STATISTICS utility 330 option of LISTDEF utility 191
OPTIONS statement 337 option of LOAD STATISTICS 214
print log map utility 753 option of REORG TABLESPACE utility 477
QUIESCE utility 346 option of RUNSTATS utility 597
REBUILD INDEX utility 356 table name, naming convention xiv
RECOVER utility 381 table space
REORG INDEX utility 421 assessing status with RUNSTATS 611
REORG TABLESPACE utility 453 checking 61
REPAIR utility 534 checking multiple 77
REPORT utility 562 determining when to reorganize 438, 494
RESTORE SYSTEM utility 586 LOAD LOG 292
RUNSTATS INDEX 603 merging copies 307
RUNSTATS TABLESPACE 595 mixed volume IDs, copying 139
STOSPACE utility 635 naming convention xiv
TEMPLATE statement 642 nonsegmented, finding unused space 494
UNLOAD utility 664 partitioned, updating statistics 611
SYSCOPY REORG LOG 520
catalog table, information from REPORT utility 567 reorganizing
directory table space, MERGECOPY restrictions 309, 314 determining when to reorganize 494
option of DSN1LOGP utility 810 using SORTDATA option of REORG utility 495
SYSCOPY, deleting rows 322 utilization 438
SYSDISC data set segmented
LOAD utility, estimating size 252 copying 133
SYSERR data set LOAD utility 256
LOAD utility, estimating size 252 status, resetting 550
SYSIBM.SYSCOPY table spaces
ICBACKUP column 130 LOAD on NOT LOGGED, effect of 291, 520
ICUNIT column 130 TABLESPACE
SYSIN DD statement, built by CLIST 30 option of CHECK DATA utility 64
SYSLGRNX directory table, information from REPORT option of CHECK INDEX utility 87
utility 567 option of CHECK LOB utility 105
SYSLGRNX, deleting rows 322 option of COPY utility 117
SYSMAP data set option of COPYTOCOPY utility 158
estimating size 252 option of LISTDEF utility 190
SYSOBDS entries, deleting 323 option of MERGECOPY utility 309
SYSPITR, option of DSNJU003 utility 736 option of MODIFY RECOVERY utility 319
SYSPITRT, option of DSNJU003 utility 737 option of MODIFY STATISTICS utility 330
SYSPRINT DD statement, built by CLIST 30 option of QUIESCE utility 346
SYSTABLESPACESTATS option of REBUILD INDEX utility 358
contents 919 option of RECOVER utility 383
SYSTEM option of REORG TABLESPACE utility 459
option of DSNU CLIST command 28 option of REPAIR utility
option of DSNUPROC utility 32 general description 535
system data sets, renaming 749 on LOCATE TABLESPACE statement 540
system monitoring on SET TABLESPACE and SET INDEX statements 537
index organization 437 option of REPORT utility 563
table space organization 438, 494 option of RUNSTATS utility 597, 604
system point in time, creating 736 option of UNLOAD utility 666
system-level TABLESPACES, option of LISTDEF utility 188
backup 765 TABLESPACESET
system, limits 851 option of QUIESCE utility 347
SYSTEMPAGES, option of COPY utility 123 option of REPORT utility 566
SYSUT1 data set for LOAD utility, estimating size 252 TAPEUNITS
SYSUTILX directory table space option of COPY utility 122
MERGECOPY restrictions 309, 314 option of RECOVER utility 386
order of recovering 397 TEMPLATE library 655
TEMPLATE library, specifying 340
TEMPLATE utility
authorization 641
TOLOGPOINT, option of RECOVER utility 385
TORBA option of RECOVER utility 384
TOSEQNO, option of RECOVER utility 388
TOVOLUME, option of RECOVER utility 388
TRACEID, option of DIAGNOSE utility 177, 178
TRK, option of TEMPLATE statement 652
TRTCH, option of TEMPLATE statement 654
TRUNCATE
   option of LOAD utility
      BINARY data type 243
      CHAR data type 238, 274
      GRAPHIC data type 241, 274
      GRAPHIC EXTERNAL data type 242
      VARBINARY data type 244
      VARCHAR data type 240, 274
      VARGRAPHIC data type 242, 274
TYPE
   option of DIAGNOSE utility 175
   option of DSN1LOGP utility 812
U
UID
   option of DSNU command 28
   option of DSNUPROC utility 32
UNCNT, option of TEMPLATE statement 649
UNICODE
   option of LOAD utility 221
   option of UNLOAD utility 669
UNIT
   option of DSNJU003 utility 734
   option of DSNU CLIST command 29
   option of TEMPLATE statement 647
unit of recovery
   in-abort 738
   inflight 738
unit of work
   See also unit of recovery
   in-commit 738
   indoubt, conditional restart 738
UNLDDN
   option of REORG TABLESPACE utility 480
   option of UNLOAD utility 668
UNLOAD
   option of REORG INDEX utility 429
   option of REORG TABLESPACE utility 468
UNLOAD utility
   64-bit floating point notation, specifying 690
   access, specifying 672
   ASCII format, specifying 669
   authorization required 663
   binary floating-point number format, specifying 690
   blanks in VARBINARY fields, removing 687
   blanks in VARCHAR fields, removing 683
   blanks in VARGRAPHIC fields, removing 685
   BLOB data type, specifying 692
   BLOB strings, truncating 692
   CCSID format, specifying 669
   CHAR data type, specifying 682
   character string representation of date, specifying 690
   character string representation of time, specifying 690
   character strings, truncating 682
   CLOB data type, specifying 692
   CLOB strings, truncating 692
   compatibility 713
   compressed data 712
   constant field, specifying 691
   converting data types 703
   copies, concatenating 702
   data sets used 699
   data type compatibility 704
   data, identifying 665
   DBCLOB format, specifying 693
   DBCS string, truncating 693
   DD name of unload data set, specifying 668
   DD statement for image copy, specifying 667
   decimal format, specifying 688
   decimal point character, specifying for delimited formats 672
   delimited files 708
   delimited format, specifying 671
   delimiters
      column 671
      string 671
   description 663
   EBCDIC format, specifying 669
   examples
      CLONE 719
      delimited file format 718
      FROMCOPY option 715
      HEADER option 716
      LOBs 719
      partitioned table space 716
      SAMPLE option 716
      specifying a header 716
      unloading a sample of rows 716
      unloading all columns 714
      unloading data from an image copy 715
      unloading data in parallel 716
      unloading from two tables 716
      unloading LOBs 719
      unloading multiple table spaces 717
      unloading specific columns 715
      unloading specified partitions 717, 719
      using a field specification list 715
      using LISTDEF 717, 719
      using TEMPLATE 716
   field position, specifying 681
   field specification errors, interpreting 712
   field specifications 674
   floating-point data, specifying format 672
   FROM TABLE clause 701
      compatibility with LIST 675
      parentheses 674
   FROM TABLE option descriptions 678
   FROM TABLE syntax diagram 675
   graphic type, specifying 684
   graphic type, truncating 684
   header field, specifying 679
   image copies, unloading 702
   image copy, specifying 666
   instructions 700
   integer format, specifying 686
   labeled duration expression 697
   lists, specifying 667
   LOAD statements, generating 711
   LOAD statements, specifying data set for 668
   maximum errors allowed, specifying 672
   maximum number of rows to unload, specifying 680
   multilevel security restrictions 663
   multiple tables, unloading 674
   option descriptions 665
   output 663
VARCHAR (continued)
   option of LOAD utility 238
   option of UNLOAD utility 683
VARGRAPHIC
   data type, loading 255
   option of LOAD utility 242
   option of UNLOAD utility 685
varying-length rows, relocated to other pages, finding number of 494
VERIFY, statement of REPAIR utility 539, 542
version information
   updating when moving to another system 552
version number management 325
   LOAD utility 291
   REBUILD INDEX utility 375
   REORG INDEX utility 445
   REORG TABLESPACE utility 519
version numbers, recycling 325
VERSION, option of REPAIR utility on LOCATE statement 541
VERSIONS, option of REPAIR utility 536
versions, REORG TABLESPACE effect on 519
violation messages 77
violations
   correcting 77
   finding 77
virtual storage access method (VSAM)
   See VSAM (virtual storage access method)
VOLCNT, option of TEMPLATE statement 648
VOLUME, option of DSNU CLIST command 29
VOLUMES, option of TEMPLATE statement 648
VSAM (Virtual Storage Access Method)
   used by REORG TABLESPACE 489
   used by STOSPACE 637
VSAMCAT, option of DSNJU003 utility 739
W
WAIT, option of DIAGNOSE utility 176
WARNING, option of OPTIONS statement 339
WHEN
   option of LOAD utility 233
   option of REORG TABLESPACE utility 471
   option of UNLOAD utility 694
WHITESPACE
   option of LOAD utility 248
work data sets
   CHECK DATA utility 68, 74
   CHECK INDEX utility 91
   LOAD utility 252
WORKDDN
   option of CHECK DATA utility 68
   option of CHECK INDEX utility 89
   option of CHECK LOB utility 110
   option of LOAD utility 218
   option of MERGECOPY utility 309
   option of REBUILD INDEX utility 366
   option of REORG INDEX utility 431
   option of REORG TABLESPACE utility 493
WRITE, option of QUIESCE utility 347
X
XML
   option of LISTDEF utility 193
   option of LOAD utility 247
XML column
   loading 281
XML columns
   loading 260
XML data
   loading 260
XML table space
   copying 134
   LOAD LOG 281
XMLERROR, option of CHECK DATA utility 66