db2 Fundamentals Aix PDF
Education
academy.avnet.com
CL213G
Srinivas Kondaveeti
8/24/15
skondaveeti@massmutual.com
8/27/15
TP-051240
Virtual Eastern
TR-214053
99999
V7.0.1
Front cover
Student Notebook
ERC 11.0
Trademarks
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AIX, AIX 5L, Command Center, DB2, DB2 Connect, DRDA, Express, i5/OS, InfoSphere, Optim, Power, Power Systems, pureScale, pureXML, System i, System z, Tivoli, WebSphere, z/OS
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of
Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Unit 1. Overview of DB2 10 on Linux, UNIX and Windows . . . . . . . . . . . . . . . . . . . . 1-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
DB2 family product platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
DB2: The scalable database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Examples of features and functions by DB2 LUW Editions . . . . . . . . . . . . . . . . . . . 1-8
DB2 server connectivity options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
DB2 Connect V10.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Preparing to install DB2 database servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
DB2 software Installation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-18
Student exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
Unit 2. Command Line Processor (CLP) and DB2 GUI Tool usage . . . . . . . . . . . . . 2-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
CLP Command Line Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
CLP syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Online reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Using the CLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
CLP command options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Modify CLP options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
Input file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Input file: Operating system commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
QUIT/TERMINATE/CONNECT RESET differences . . . . . . . . . . . . . . . . . . . . . . . 2-14
CLPPlus command line processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
CLPPlus features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17
DB2 GUI Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19
Data Studio - Database Connection profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
Data Studio - Selection of Database tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
Data Studio - Setting Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23
Data Studio - Selecting tasks for an object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24
Data Studio - Create or Alter object properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-25
Data Studio - Review, Edit, Save or Schedule generated DDL statements . . . . . 2-26
Data Studio - working with generated change plans . . . . . . . . . . . . . . . . . . . . . . . 2-27
Data Studio - Running SQL Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-28
Data Studio - Visual Explain for SQL queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
1-2-3, AIX, AIX 5L, Approach, CICS, ClearCase, DB2, DB2 Connect, Distributed Relational Database Architecture, DRDA, Express, Informix, InfoSphere, iSeries, Lotus, LotusScript, MQSeries, Optim, OS/390, OS/400, pSeries, S/390, Symphony, Tivoli, WebSphere, z/OS, zSeries
Course description
DB2 10 for LUW: Basic Administration for AIX
Duration: 4 days
Purpose
This course teaches you to perform basic database administrative
tasks using DB2 10.1 for Linux, UNIX, and Windows. These tasks
include creating and populating databases and implementing a logical
design to support recovery requirements. The access strategies
selected by the DB2 Optimizer will be examined using the DB2 Explain
tools. Various diagnostic methods will be presented, including using
the db2diag.log file messages to direct your investigation of problems,
as well as using the db2pd commands.
Audience
System administrators, database administrators, and technical
personnel involved in planning, implementing, and maintaining DB2
databases.
Prerequisites
Before taking this course you should be able to:
- Use basic OS functions such as utilities, file permissions,
  hierarchical file system, commands, and editor
- State the functions of the Structured Query Language (SQL), and
  be able to construct DDL, DML, and authorization statements
- Discuss basic relational database concepts and objects such as
  tables, indexes, views, and joins
These skills can be developed by taking:
- OS Training:
  - AIX 5L Basics
  - Linux Basics and Administration
  - Windows Systems Administration
  - Or by having equivalent HP-UX or Solaris administration
    experience
- DB2 SQL Workshop
Copyright IBM Corp. 1999, 2012
DB2 Fundamentals
Objectives
After completing this course, you should be able to:
Administer a DB2 database system using commands and GUI
tools
Compare DMS, SMS and Automatic storage management for table
space storage
Implement a given logical database design using DB2 to support
integrity and concurrency requirements
List and describe the components of DB2
Define a DB2 recovery strategy and perform the tasks necessary to
support the strategy
Use autonomic features of DB2
Examine Explain output to determine the access strategy chosen by
the Optimizer
Investigate current application activity that might indicate
performance problems using SQL statements
Implement DB2 security
Contents
Overview of DB2 on Linux, UNIX and Windows
Command Line Processor (CLP) and GUI usage
The DB2 Database Manager Instance
Creating databases and data placement
Creating database objects
Moving data
Backup and recovery
Database Maintenance, Monitoring and Problem Determination
Locking and concurrency
Security
Agenda
The planned agenda follows. Here are the considerations:
The first five units must be taught in the order specified:
Overview of DB2
Command Line Processor (CLP) and GUI usage
The DB2 Database Manager Instance
Creating databases and data placement
Creating database objects
Moving data
Backup and recovery
Database Maintenance, Monitoring and Problem Determination
Locking and concurrency
Security
Day 1
Welcome
Unit 1: Overview of DB2 on Linux, UNIX and Windows
Unit 2: Command Line Processor (CLP) and GUI usage
Unit 3: The DB2 Database Manager Instance
Exercise 1: Create a New DB2 Instance
Unit 4: Creating databases and data placement
Exercise 2: Creating databases and data placement
Day 2
Unit 5: Creating database objects
Exercise 3: Create objects
Unit 6: Moving data
Exercise 4: Moving data
Day 3
Unit 7: Backup and recovery
Exercise 5: Backup and recovery
Unit 8: Database Maintenance, Monitoring and Problem Determination
Exercise 6: Using DB2 Tools for Performance
Day 4
Unit 9: Locking and concurrency
Exercise 7: Investigating DB2 Locking
Unit 10: Security
Exercise 8: Database Security
References
IBM DB2 Database for Linux, UNIX, and Windows Information Center
(10.1)
Unit objectives
After completing this unit, you should be able to:
List some of the features provided by the different editions of
DB2 for Linux, UNIX and Windows
Compare the software options for DB2 client systems
List some of the pre-installation planning considerations for
DB2 servers
Explore DB2 installation methods
Notes:
These are the objectives for this unit.
Figure: DB2 family product platforms (DB2 for Linux, UNIX, and Windows; DB2 for z/OS; DB2 for i).
Notes:
DB2 database software offers industry leading performance, scale, and reliability on your
choice of platform from Linux, Unix and Windows to z/OS.
DB2 for Linux, UNIX, and Windows
The DB2 LUW product provides industry-leading performance for mixed workloads on
distributed systems, offering unparalleled efficiencies for staffing and storage.
DB2 for z/OS
The DB2 for z/OS database software is the gold standard for reliability, availability, and
scalability. Optimized for SOA, CRM and data warehousing.
DB2 for i (formerly known as DB2 for i5/OS) is an advanced, 64-bit Relational Database
Management System (RDBMS) that leverages the high performance, virtualization, and
energy efficiency features of IBM's Power Systems. A member of IBM's leading edge
family of DB2 products, DB2 for i supports a broad range of applications and development
environments at a low cost of ownership due to its unique autonomic computing
(self-managing) features.
Figure: DB2, the scalable database.
- DB2 pureScale feature: continuous availability and high scalability; AIX and Linux based clusters
- IBM InfoSphere Warehouse
- DB2 Express: DB2 data server, entry-level pricing, small and medium business
- DB2 Express-C: free, entry-level edition of the DB2 data server for the developer and partner community
Notes:
The graphic shows the scalability of the DB2 database server. DB2 is capable of supporting
hardware platforms from uniprocessor laptops to massively parallel systems with hundreds
of nodes and many processors per node. In between, it can support SMP machines or
clusters of SMP machines. This provides both extensive and granular growth.
There are multiple DB2 database product editions, each with a unique combination of
features and functionality.
DB2 Advanced Enterprise Server Edition and DB2 Enterprise Server Edition
Ideal for high-performing, robust, on-demand enterprise solutions.
DB2 Enterprise Server Edition is designed to meet the data server needs of mid-size to
large-size businesses. It can be deployed on Linux, UNIX, or Windows servers of any
size, from one processor to hundreds of processors, and from physical to virtual
servers. DB2 Enterprise Server Edition is an ideal foundation for building on demand
enterprise-wide solutions such as high-performing 24 x 7 available high-volume
capabilities of DB2 for Linux, UNIX, and Windows such as pureXML. Solutions
developed using DB2 Express-C can be seamlessly deployed using more scalable DB2
editions without modifications to the application code.
DB2 Express-C can be used for development and deployment at no charge, and can
also be distributed with third-party solutions without any royalties to IBM. It can be
installed on physical or virtual systems with any amount of CPU and RAM, and is
optimized to utilize up to a maximum of two processor cores and 2 GB of memory.
DB2 Database Partitioning Feature (DPF) is no longer included in or available for any DB2
database editions. It is included in all IBM InfoSphere Warehouse product editions.
InfoSphere Warehouse, V10.1 includes DB2, V10.1.
IBM DB2 pureScale Feature
In a competitive, ever-changing global business environment, you cannot afford to let your
IT infrastructure slow you down. This reality demands IT systems that provide capacity as
needed, exceptional levels of availability, and transparency toward your existing
applications.
When workloads grow, does your distributed database system require you to change your
applications or change how data is distributed? If so, your system does not scale
transparently. Even simple application changes incur time and cost penalties and can pose
risks to system availability. The stakes are always high: Every second lost in system
availability can have a direct bearing on customer retention, compliance with service level
agreements, and your bottom line.
The IBM DB2 pureScale Feature might help reduce the risk and cost associated with
growing your distributed database solution by providing extreme capacity and application
transparency. Designed for continuous availability, a level of high availability capable of exceeding
even the strictest industry standard, this feature tolerates both planned maintenance and
component failure with ease.
With the DB2 pureScale Feature, scaling your database solution is simple. Multiple
database servers, known as members, process incoming database requests; these
members operate in a clustered system and share data. You can transparently add more
members to scale out to meet even the most demanding business needs. There are no
application changes to make, data to redistribute, or performance tuning to do.
To deliver on a design capable of exceptional levels of database availability, the DB2
pureScale Feature builds on familiar and proven design features from DB2 for z/OS
database software. By also integrating several advanced hardware and software
technologies, the DB2 pureScale Feature supports the strictest requirements for high fault
tolerance and can sustain processing of database requests even under extreme
circumstances.
In DB2 Version 10, you can install the IBM DB2 pureScale Feature while installing DB2
Enterprise Server Edition, DB2 Workgroup Server Edition, and DB2 Advanced Enterprise
Server Edition.
The DB2 pureScale Feature is supported only on AIX and Linux x86_64 operating
systems.
You cannot install a DB2 product with the DB2 pureScale Feature in the same path as an
existing DB2 Enterprise Server Edition, DB2 Workgroup Server Edition, or DB2 Advanced
Enterprise Server Edition installation. Conversely, you cannot install DB2 Enterprise Server
Edition, DB2 Workgroup Server Edition, or DB2 Advanced Enterprise Server Edition in the
same path as an existing installation of a DB2 product with the DB2 pureScale Feature.
Table: examples of DB2 LUW features and functions by edition (Express, Workgroup, Enterprise, and Advanced Enterprise), covering the Storage Optimization feature, the pureScale cluster feature, High Availability Disaster Recovery, Multi-Temperature Storage, Range Partitioned Tables, DB2 Workload Management, and Temporal Tables.
Notes:
The visual lists a few of the DB2 LUW product features and functions and maps them to the
different DB2 LUW editions. The DB2 Information Center can be used to review a more
complete list.
The DB2 Storage Optimization feature includes data row compression and other
compression types to help maximize the use of existing storage. DB2 10.1 added adaptive
compression capabilities to the existing compression for data, index and temporary data
provided in previous releases. This is a chargeable feature for Enterprise Edition and is
included in Advanced Enterprise Edition.
The DB2 pureScale feature can only be used with Workgroup, Enterprise and Advanced
Enterprise editions.
The Multi-temperature storage support added with DB2 10.1 is only available with
Enterprise and Advanced Enterprise editions.
All DB2 LUW editions support the definition and use of the temporal tables and time travel
query functions added in DB2 10.1.
IBM Data Studio is available for all editions of DB2 at no extra charge.
The Data Studio 3.1.1 release includes enhancements throughout the product as well as
the added support for DB2 V10.1 for Linux, UNIX, and Windows databases which include
the following features:
- Adaptive compression for table rows
- Special registers for temporal tables in server profiles
- Time-based data management with temporal tables
- Data management using multi-temperature storage
- Data security with row and column access control (RCAC)
These DB2 V10.1 features are fully supported by Data Studio, making it easier for you
to take advantage of them.
Figure: DB2 server connectivity options, with IBM data server clients and drivers connecting to DB2 servers on Linux, UNIX, and Windows.
Notes:
There are several types of IBM data server clients and drivers available. Each provides a
particular type of support.
The IBM data server client and driver types are as follows:
- IBM Data Server Driver Package
- IBM Data Server Driver for JDBC and SQLJ
- IBM Data Server Driver for ODBC and CLI
- IBM Data Server Runtime Client
- IBM Data Server Client
Each IBM data server client and driver provides a particular type of support:
For Java applications only, use IBM Data Server Driver for JDBC and SQLJ.
- For applications using ODBC or CLI only, use IBM Data Server Driver for ODBC and
CLI. (Also referred to as cli driver.)
- For applications using ODBC, CLI, .NET, OLE DB, PHP, Ruby, JDBC, or SQLJ, use
IBM Data Server Driver Package.
- For applications using DB2CI, use IBM Data Server Client.
- If you need DB2 Command Line Processor Plus (CLPPlus) support, use IBM Data
Server Driver Package.
- To have command line processor (CLP) support and basic client support for running
and deploying applications, use IBM Data Server Runtime Client. Alternatively use
CLPPlus, which is a component of the recommended IBM Data Server Driver
Package.
- To have support for database administration, and application development using an
application programming interface (API), such as ODBC, CLI, .NET, or JDBC, use
IBM Data Server Client.
IBM Data Server Client
IBM Data Server Client includes all the functionality of IBM Data Server Runtime Client,
plus functionality for database administration, application development, and client/server
configuration.
Figure: DB2 Connect V10.1 connectivity. Client-based applications and application servers on UNIX client systems use the DB2 Connect function to connect directly, or through a DB2 Connect gateway server, to DB2 data sources such as DB2 for z/OS on IBM System z and IBM System i.
Notes:
IBM DB2 Connect V10.1 provides fast and robust connectivity to IBM DB2 databases
deployed on either IBM System z or IBM System i. DB2 Connect has a number of DB2
Connect editions to best meet your company's needs. With most editions, DB2 Connect
client components can be used to directly connect applications running on Linux (including
Linux on System z), UNIX, and Windows to DB2 servers on System z or IBM i. All editions,
except the DB2 Connect Personal Edition, also include an optional DB2 Connect server
component that can be used as a gateway to concentrate and manage connections from
multiple desktop clients and applications to DB2 databases on IBM System z or IBM
System i servers.
IBM DB2 Connect 10.1 enables client applications to create, access, update, control, and
manage DB2 databases on host systems using:
- Structured Query Language (SQL)
- DB2 Application Programming Interfaces (APIs )
- Open Database Connectivity (ODBC)
Preparing to install DB2 database servers
Notes:
Before installing DB2 database server, ensure that the necessary prerequisites are met,
such as disk, memory, and paging space requirements. There are also additional
prerequisites that depend on your operating system.
You can also install multiple DB2 copies on the same computer. For Windows systems,
there is a difference between installing one or multiple DB2 copies. Each DB2 copy can be
at the same or different code levels. A DB2 copy is a group of DB2 products that are
installed at the same location. For Linux and UNIX systems, each DB2 copy can be at the
same or different code levels. Root installation of DB2 products can be installed to an
installation path of your choice.
The DB2 Information Center provides detailed pre-installation planning steps for each of
the supported operating system types.
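As a sketch of how installed copies can be checked on Linux and UNIX, the db2ls command lists DB2 installations; the install path shown here is an example location, not a required one:

```shell
# List all DB2 copies installed on this system, with their
# installation paths, version levels, and fix pack numbers
db2ls

# Query the products and features installed in one specific copy
db2ls -q -b /opt/IBM/db2/V10.1
```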
Installation Method              Windows    UNIX / Linux
DB2 Setup wizard                 Yes        Yes
Response file installation       Yes        Yes
db2_install command              No         Yes
Notes:
The following list describes DB2 installation methods.
DB2 Setup wizard
The DB2 Setup wizard is a GUI installer available on Linux, UNIX, and Windows
operating systems. The DB2 Setup wizard provides an easy-to-use interface for
installing DB2 database products and for performing initial setup and configuration
tasks.
The DB2 Setup wizard can also create DB2 instances and response files that can be
used to duplicate this installation on other machines.
On Linux and UNIX operating systems, an X server is required to display the DB2 Setup
wizard.
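As an illustration, the wizard is started from the extracted product image; the response file name below is a hypothetical example of one saved from an earlier installation:

```shell
# Start the GUI installer (requires a usable X display on Linux/UNIX)
./db2setup

# Run an unattended installation driven by a response file
./db2setup -r /tmp/db2ese.rsp
```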
Note
For non-root installations on Linux and UNIX operating systems, only one DB2 instance
can exist. The DB2 Setup wizard automatically creates the non-root instance.
You can export a client or server profile with the db2cfexp command to save your client
or server configuration. Import the profile by using the db2cfimp command. A client or
server profile exported with the db2cfexp command can also be imported during a
response file installation by using the CLIENT_IMPORT_PROFILE keyword.
You should export the client or server profile after performing the installation and
cataloging any data sources.
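A minimal sketch of this export/import sequence follows; the profile file name and location are examples only:

```shell
# On the configured client: export the configuration profile.
# TEMPLATE produces a profile suitable for configuring other clients.
db2cfexp /tmp/client.prf TEMPLATE

# On the target client: import the saved profile
db2cfimp /tmp/client.prf
```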
Customizing the sample response files that are provided for each DB2 database
product
An alternative to using the response file generator or the DB2 Setup wizard to create a
response file is to manually modify a sample response file. Sample response files are
provided on the DB2 database product DVD. The sample response files provide details
about all the valid keywords for each product.
db2_install command (Linux and UNIX operating systems only)
The db2_install command installs all components for the DB2 database product you
specify with the English interface support. You can select additional languages to
support with the -L parameter. You cannot select or clear components.
Although the db2_install command installs all components for the DB2 database
product you specify, it does not perform user and group creation, instance creation, or
configuration. This method of installation might be preferred in cases where
configuration is to be done after installation. To configure your DB2 database product
while installing it, consider using the DB2 Setup wizard.
On Linux and UNIX operating systems, if you embed the DB2 installation image in your
own application, it is possible for your application to receive installation progress
information and prompts from the installer in computer-readable form.
This installation method requires manual configuration after the product files are
deployed.
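A hedged sketch of a db2_install invocation is shown below; the install path is an example, SERVER is one of the product keywords the command accepts, and the French language code assumes that language support is present on the installation media:

```shell
# Install all server components into the chosen path
./db2_install -b /opt/IBM/db2/V10.1 -p SERVER

# The same installation with an additional interface language
./db2_install -b /opt/IBM/db2/V10.1 -p SERVER -L FR
```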
Unit summary
Having completed this unit, you should be able to:
List some of the features provided by the different editions of
DB2 for Linux, UNIX and Windows
Compare the software options for DB2 client systems
List some of the pre-installation planning considerations for
DB2 servers
Explore DB2 installation methods
CL21311.0
Notes:
This is the summary of topics covered in this unit.
Student exercise
Notes:
Please perform the exercise in your Exercises Guide.
Unit 2. Command Line Processor (CLP) and DB2 GUI Tool usage
Unit objectives
After completing this unit, you should be able to:
Utilize the DB2 Command Line Processor to run DB2
commands and SQL statements
Use CLPPlus to connect to databases and to define, edit, and run
statements, scripts, and commands
Describe the GUI tools available that support administration
and development with DB2 LUW servers
Use Data Studio to perform database administration tasks and
execute SQL scripts
Notes:
These are the objectives for this unit.
CLP Command Line Processor
Notes:
Through the Command Line Processor, you can issue:
SQL statements
XQuery statements (with the prefix XQUERY)
DB2 commands
OS system commands
In a Windows environment, the CLP can be found under the Command Line Processor as
well as the Command Window. In a Linux/UNIX environment, it is activated with
the db2profile script, so a normal terminal session can be used.
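For example, assuming a Linux/UNIX instance owner named db2inst1 (an illustrative name), a terminal session can be prepared for CLP use like this:

```shell
# Source the instance environment script into the current shell
. /home/db2inst1/sqllib/db2profile

# The db2 command is now available in this terminal
db2 connect to sample
```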
CLP syntax

db2 [option-flag ...] { db2-command | sql-statement | ? [phrase | message | sql-state | class-code] }
Notes:
The options may be entered after the db2 and before the command or statement. If the
command or statement will not fit on one line, either:
Continue typing and allow your typing to wrap and continue on the next line, or
Use the \ (backslash) character for continuation.
The class-code option in the above syntax diagram means that you can request help for a
message specified by a valid class code. A class code is the first two digits of an
SQL state.
Online reference
?
? command string
? SQLnnnn
(nnnn = 4 or 5 digit SQLCODE)
? nnnnn
(nnnnn = 5 digit SQLSTATE)
Notes:
The Online Command Reference contains the syntax and explanations of all DB2
commands that may be executed through CLP. The Online Command Reference is
invoked by:
1. db2 ? List of all DB2 commands
2. db2 ? command Information about specific commands
3. db2 ? SQLnnnn Information about a specific SQLCODE generated by database
manager. SQLCODE must be four or five digits in length.
4. db2 ? nnnnn Information about a specific SQLSTATE generated by database
manager. SQLSTATE must be five digits in length.
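The four help forms above can be tried directly from a shell; for example (the SQLCODE and SQLSTATE values shown are only illustrative):

```shell
db2 ?                    # list all DB2 commands
db2 ? catalog database   # help for a specific command
db2 ? SQL0104N           # explain a specific SQLCODE
db2 ? 42601              # explain a specific SQLSTATE
db2 ? 42                 # explain a class code (first two digits of the SQLSTATE)
```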
A set of reference manuals is also available online within the Information Center (db2ic).
db2
Notes:
Prefix all CLP commands or requests with db2, or use CLP in interactive mode by typing
db2 and then pressing Enter. In the interactive mode, the user can type CLP commands
without prefixing them by db2.
When using CLP without being in interactive mode on a Linux/UNIX platform,
remember to put quotes around the statement or command if special characters are
included.
To issue XQuery statements in CLP, prefix the statements with the XQUERY keyword.
Interactive mode does not allow you to do piping or other operating system functions at the
interactive CLP command level. To execute operating system commands without quitting
interactive mode, issue !<operating-system-command>.
In the interactive mode, the history of commands is displayed by entering history (or its
shortened form h). Two variations are allowed:
reverse (r): lists in reverse order, that is, most recent first
num (such as 5): limits the list to the last num history entries
Examples, assuming that you first entered db2start, then a "list active databases"
command, and then you entered ? sql11111:
db2 => h
1 db2start
2 list active databases
3 ? sql11111
4 h
db2 => h r
5 h r
4 h
3 ? sql11111
2 list active databases
1 db2start
db2 => h r 3
6 h r 3
5 h r
4 h
There is also an edit (e) command which enables editing of a previous command. You can
edit the command using the number in the history list, for example e 2. If no number is
entered, the last command would be edited.
After having edited the command and closed the editor, the edited command is displayed in
the CLP window, and you will be asked:
Do you want to execute the above command ? (y/n)
If you enter y, the command as edited is executed.
With runcmd (r) you can re-run a previously executed command. You can do this using the
number from the history list, for example r 3. If no number is entered, the last command is
re-executed.
If a command that you are entering exceeds the limit allowed at the command prompt, use
a \ (backslash) (in Linux/UNIX) as the line continuation character. If you want a blank
space between the last character on the current line and the first character on the next line,
remember to code a blank space before the backslash. Alternatively, use the CLP
flag -t instead of the backslash; it indicates that each CLP statement terminates with a ;
(semicolon).
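As a sketch (assuming a connection to the SAMPLE database), the two continuation styles look like:

```shell
# Backslash continuation - note the blank space before the backslash:
db2 "select deptnumb, deptname \
from org"

# With -t, the statement ends at the semicolon instead:
db2 -t "select deptnumb, deptname from org;"
```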
In its output, CLP represents a SQL NULL value as a hyphen (-). If the column is numeric,
the hyphen is placed at the right of the column. If the column is not numeric, the hyphen is
at the left.
Notes:
Issue the db2 list command options command to view the current settings for the
command line flags and the value of DB2OPTIONS.
The following shows the option flag, description, and default setting.
a Displays SQLCA data OFF
c Autocommit SQL statements ON
e{c | s} Display SQLCODE or SQLSTATE OFF
f filename Read command input from a file instead of standard input OFF
You may wish to issue db2 < filename instead of db2 -f filename.
i Display XML data with indentation OFF
l filename Log commands in a history file OFF
m Display the number of rows affected OFF
n Remove new line character OFF
To set options for every session, place DB2OPTIONS in the UNIX db2profile
or in the Windows System Environment Variables.
Copyright IBM Corporation 2012
Notes:
The CLP options can be used in any sequence and combination. To turn the option on,
prefix the option with a minus sign (-). To turn an option off, prefix the option with a minus
sign (-) and follow the option letter with another minus sign (-) or prefix the option with a
plus sign (+). For example, either use -c- or +c.
These options can also be specified by setting the DB2OPTIONS environment variable.
((Windows) set DB2OPTIONS='+c -a', (Linux/UNIX) export DB2OPTIONS='+c -a').
CLP option flags temporarily override DB2OPTIONS settings.
db2 update command options command allows the user to change an option setting
from the interactive input mode or a command file.
db2 update command options using c off
Another way to redirect the CLP output from the screen to a file instead of using the -r
option is to use the > symbol.
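For example, the autocommit and SQLCA options could be made session defaults and then overridden per invocation (a sketch; the commented db2 calls assume a configured instance):

```shell
# Make +c (autocommit off) and -a (display SQLCA) the defaults for this session
export DB2OPTIONS='+c -a'

# Verify the effective settings with:
#   db2 list command options
# Override DB2OPTIONS temporarily for a single invocation:
#   db2 -c +a "select * from org"
```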
SQL statements may contain > or < symbols that the operating system would otherwise
interpret as redirection operators. Quotation marks should surround all text to
be processed by the CLP, but should not include any options.
db2 -r sample.rep "select * from org"
db2 "select * from org where deptnumb > 38" > sample.rep
This can be avoided by using a DB2 interactive CLP session.
Input file
Edit create.tab
-- comment:
connect to sample;
create table tab3
(name varchar(20) not null,
phone char(40),
salary dec(7,2));
select * from tab3;
commit work;
connect reset;
Notes:
Use an editor to create a file called create.tab.
Comments are denoted with a line that starts with two hyphens (--).
A semicolon can be used to denote the end of a SQL statement if the file is executed with
the -t command option.
In non-interactive mode, execute the file with db2 -svtf create.tab. Because db2 is not
coded inside the input file, only DB2 commands and SQL statements can appear in it; the
command that executes the input file must itself start with db2.
The -s option says to stop execution if an error occurs. The -v option says to echo the
current command on the monitor screen. The -t option says the statements end with a
semicolon. The -f option says the command input is read from an input file called
create.tab.
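The file from the visual can be recreated and executed as follows (a sketch; only the file creation runs without a DB2 instance, so the db2 invocation is shown as a comment):

```shell
# Recreate the CLP input file shown in the visual
cat > create.tab <<'EOF'
-- comment:
connect to sample;
create table tab3
(name varchar(20) not null,
phone char(40),
salary dec(7,2));
select * from tab3;
commit work;
connect reset;
EOF

# Execute with: db2 -svtf create.tab
#   -s stop on error, -v echo each command, -t semicolon terminator, -f read from file
```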
vi seltab
echo "Table Name Is" $1 > out.sel
db2 "select * from $1" >> out.sel
seltab org
out.sel contents:

Table Name Is org

DEPTNUMB  DEPTNAME        MANAGER  DIVISION   LOCATION
--------  --------------  -------  ---------  -------------
      10  Head Office         160  Corporate  New York
      15  New England          50  Eastern    Boston
      20  Mid Atlantic         10  Eastern    Washington
      38  South Atlantic       30  Eastern    Atlanta
      42  Great Lakes         100  Midwest    Chicago
      51  Plains              140  Midwest    Dallas
      66  Pacific             270  Western    San Francisco
      84  Mountain            290  Western    Denver
Notes:
When including DB2 commands or SQL statements in an input file with operating system
commands, place db2 before the command or SQL statement and enclose the command
or statement in double quotation marks.
db2 -r out.sel "select * from $1" may also be used to redirect the output to the file
out.sel. The output will still echo on the screen unless the +o option is used. The -r option
does not append output to the file.
In Linux/UNIX, the user must have execute authority to execute the CLP input file. The
following steps may be executed to achieve this:
$ chmod 744 seltab
$ ls -l seltab
-rwxr--r--  1 inst31  adm31  58 Jul 15 13:53 seltab
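The steps above can be combined into one sequence (the script is only created here, not run, since running it requires a DB2 connection):

```shell
# Recreate the seltab script from the visual
cat > seltab <<'EOF'
echo "Table Name Is" $1 > out.sel
db2 "select * from $1" >> out.sel
EOF

chmod 744 seltab   # owner rwx, group/other read-only
ls -l seltab
# Run with: ./seltab org   (requires a DB2 instance and a database connection)
```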
CLP command      Terminates CLP back-end process   Disconnects database connection
quit             No                                No
terminate        Yes                               Yes
connect reset    No                                Yes, if CONNECT=1 (RUOW)
Notes:
To connect to a local or remote database, specify db2 connect to dbname where dbname
maps to the alias name specified in the System Database Directory. After the connect is
issued, all SQL requests are executed against the database to which you are connected.
If CONNECT=1 (RUOW), db2 connect reset terminates the connection, and a
subsequent SQL statement will cause a connection to the default database, if one is defined.
The default database is defined in the environment variable DB2DBDFT. If
CONNECT=2 (DUOW), db2 connect reset puts the current connection in a dormant state
and establishes a connection with the default database if it is defined. DUOW and
CONNECT types will be defined in detail in the Distributed Management Topic.
db2 terminate issues a disconnect and also terminates the CLP back-end process.
The quit command ends the input mode and returns the user to the command prompt, but
quit does not terminate CLP nor disconnect the database connection.
To end the CLP background process and disconnect the database connection issue the
terminate command.
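A sketch of the difference (assuming the SAMPLE database and CONNECT=1):

```shell
db2 connect to sample            # establish a connection
db2 "select count(*) from org"
db2 connect reset                # disconnect, but the CLP back-end keeps running
db2 terminate                    # disconnect (if connected) and end the back-end process
```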
2-14 DB2 10 for LUW: Basic Admin for AIX
Notes:
The Command Line Processor Plus (CLPPlus) provides a command-line user interface
that you can use to connect to databases and to define, edit, and run statements, scripts,
and commands.
SQL*Plus scripts are supported in DB2: CLPPlus can take a native Oracle SQL*Plus script
and run it against DB2.
CLPPlus features
Support for establishing connections to databases when a database
user ID and password are provided.
A buffer that can be used to store scripts, script fragments, SQL
statements, SQL PL statements, or PL/SQL statements for editing and
then execution. Text in the buffer can be listed, printed, edited, or run as
a batch script.
A comprehensive set of processor commands can be used to define
variables and strings that can be stored in the buffer.
A set of commands that retrieve information about the database and
database objects.
Ability to store buffers or buffer output to a file.
Multiple options for formatting the output of scripts and queries.
Support for executing system-defined routines.
Support for executing operating system commands.
Option for recording the output of executed commands, statements, or
scripts.
Notes:
CLPPlus includes the features listed in the visual above.
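A hypothetical CLPPlus session against a local database (the user ID, host, port, and database name below are placeholders):

```shell
# Start CLPPlus and connect as user db2inst1 to database SAMPLE on port 50000
clpplus db2inst1@localhost:50000/sample

# Once connected, SQL*Plus-style commands work at the SQL> prompt, for example:
#   SQL> select * from org;
#   SQL> quit
```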
Notes:
IBM Data Studio now replaces the DB2 Control Center and includes additional capabilities:
Make database object changes with a change plan and manage change using forward
engineering from a model; develop Java applications that use pureQuery annotated
methods; copy objects from one database to another.
Run commands on multiple objects, and manage cluster members in a DB2 pureScale
environment.
Create and manage jobs, and schedule command scripts, configuring email notifications
to report on job completion.
Monitor database health and availability, and the status of utilities operating on databases,
using the Web Console accessed from the Data Studio full client.
InfoSphere Optim Query Tuner for DB2 for Linux, UNIX, and Windows cuts cost and
improves performance by providing expert advice on writing high quality queries and
improving database design. Its easy-to-use advisors can help developers to write more
efficient SQL queries.
Reduces costs and risks by enabling developers to tune SQL during development,
while problems are still relatively inexpensive to fix and before they cause costly
outages or performance issues.
Operates within a familiar Eclipse development environment and features seamless
integration and natural launch points within IBM Data Studio.
Accelerates query tuning analysis by providing expert advice and recommendations.
Fosters collaboration among developers and DBAs.
Set database connection information
Set user name and password
Notes:
The Data Studio product uses connection profiles to perform database administration
tasks.
When you create a DB2 database, a new connection profile is created. You can define new
connection profiles to access existing DB2 databases.
The connection profile includes the network information needed to access DB2 database
servers, like TCP/IP host names and port numbers. The connection profile can also include
the user id and optionally the password that will be used to connect to a database.
Select the administration task
Monitoring uses the Data Studio Web Console
Notes:
Once a connection profile is defined, you can connect to a DB2 database. The
Administration Explorer view in Data Studio allows you to select a database and then select
the task you want to perform.
The visual shows how the menus provide options to make configuration changes and
perform backup or recovery tasks.
The Monitor menu option invokes the Data Studio Web Console to check for alert
conditions with this database.
Notes:
The Data Studio Administration Explorer view can be used to review and update the
configuration settings for a DB2 database or a DB2 instance.
The application shows the current, active value for each option and also any pending
changes that will take effect when the database or instance is restarted. You can see
whether a parameter change can take effect immediately or will be deferred until the
next restart.
You can preview the generated DB2 UPDATE command and then decide if you want to
proceed and execute the command to change the configuration settings.
Notes:
Once you connect to a database, the Data Studio Administration Explorer view lets you
select a category of database objects to manage, like tables, indexes or table spaces.
This produces a list of the current database objects of that type.
Once a particular object, like one table, is selected, you can use menus to select the task to
be performed. For a table, you have the option to browse the contents. You could choose
to load new data, unload data, or select ALTER to make changes to the table definition.
The MANAGE option provides access to functions like RUNSTATS and SET INTEGRITY.
Set options using the properties view
Notes:
The visual shows how the Data Studio software can be used to create or alter database
objects. The example shows a new table space being defined.
The Properties tab allows the options to be reviewed and changed.
When this type of task is performed, a change plan that includes the date and time of
the change is generated. To see the generated DDL statement, you select the Review
and Deploy icon.
Generated statements can be saved to a file
Options to run, edit or schedule generated statements
Figure 2-19. Data Studio - Review, Edit, Save or Schedule generated DDL statements
Notes:
The example shows how the generated CREATE TABLESPACE statement can be
reviewed or edited prior to execution.
There are options to save the generated statement in a file for reuse.
You could also schedule the generated statement for later execution.
Generated change plans can be reviewed and reused
Notes:
In Data Studio, when a database is selected in the Administration Explorer view, the
change plans generated by previous object maintenance tasks can be selected.
You can select any of the change plans to review the associated object change. You could
reuse the generated statement to make a similar change to another database object.
A connection profile is selected for execution
SQL script results can be viewed
Notes:
With Data Studio you can easily create new SQL scripts or open previously saved SQL
script files.
You assign a connection profile to decide which database will be used to process the SQL
script. This makes it easy to run an SQL script against several databases, since you may
want to try the script with a test database before it is used with production data.
You can start execution of the SQL script using the Run SQL icon.
Data Studio provides a simple method to review the results of the script execution.
This function can be used to replace many of the tasks for which the deprecated Command
Editor application was previously used.
Visual Explain: shows access plan and costs
Optim Query Workload Tuner
Notes:
When a SQL script is opened in Data Studio, you can click on an icon to view the Visual
Explain report, which shows the access plan selected by the DB2 Optimizer for processing
an SQL statement.
There is also an icon that provides performance tuning assistance for SQL statements.
This function can be used to access the Optim Query Workload Tuner software, if that
feature has been installed.
Unit summary
Having completed this unit, you should be able to:
Utilize the DB2 Command Line Processor to run DB2
commands and SQL statements
Use CLPPlus to connect to databases and to define, edit, and
run statements, scripts, and commands
Describe the GUI tools available that support administration
and development with DB2 LUW servers
Use Data Studio to perform database administration tasks
and execute SQL scripts
Notes:
This is the summary of topics covered in this unit.
References
Getting Started with DB2 Installation and Administration on Linux and
Windows
Installing DB2 Servers
Command Reference
Unit objectives
After completing this unit, you should be able to:
Specify the key features of an Instance
Create and drop a DB2 Instance
Use db2start and db2stop commands to manage a DB2
instance
Display and set DB2 registry variables
Describe and modify the Database Manager Configuration
Notes:
What is an instance?
(Figure: a DB2 product installation with two instances, Instance_1 and Instance_2.
Each instance has its own DBM CFG; each database - DB_1, DB_2, and DB_3 - has its
own DB CFG, catalog, and LOG.)
Notes:
Instances
An instance is a logical database manager environment where you catalog databases and
set configuration parameters. Depending on your needs, you can create more than one
instance on the same physical server providing a unique database server environment for
each instance.
Note: For non-root installations on Linux and UNIX operating systems, a single instance is
created during the installation of your DB2 product. Additional instances cannot be created.
Use one instance for a development environment and another instance for a production
environment.
Tune an instance for a particular environment.
Restrict access to sensitive information.
Control the assignment of SYSADM, SYSCTRL, and SYSMAINT authority for each
instance, providing specific security controls for databases in different instances.
Optimize the database manager configuration for each instance.
Limit the impact of an instance failure. In the event of an instance failure, only one
instance is affected. In the event of an instance failure, only databases belonging to that
instance are affected. Databases belonging to other instances can continue to function
normally.
Multiple instances will require:
Additional system resources (virtual memory and disk space) for each instance.
More administration because of the additional instances to manage.
The instance directory stores all information that pertains to a database instance. You
cannot change the location of the instance directory once it is created. The directory
contains:
The database manager configuration file
The system database directory
The node directory
The node configuration file (db2nodes.cfg)
Any other files that contain debugging information, such as the exception or register
dump or the call stack for the DB2 database processes.
The visual shows that an Instance provides the foundation needed to support a DB2
database. An instance can support one or more databases. The instance provides
management of databases, while the database provides data management. Each
database has its own configuration, a set of catalog tables and a set of transaction logs.
Database Server
(Figure: a database server running two instances, inst1 and inst2, each with its own
database manager. Both instances can contain a database named database1; inst1 also
manages database2. One database contains Tablespace A with Table1 and Table2, and
Tablespace B with Table3 and Table4. A local user application with PATH=... and
DB2INSTANCE=inst1 in its environment connects to inst1; DB2INSTANCE designates
the current instance.)
Notes:
A DB2 database server may support multiple DB2 instances. Each DB2 database belongs
to a specific DB2 instance. In DB2 LUW each relational table is defined in one or more
table spaces, which belong to a single DB2 database.
You may use one DB2 instance to support a production application, and use another DB2
instance to support development or testing. The instances could be running on different
releases or fix pack levels of DB2 LUW.
A database name would be unique within an instance, but the same database name could
be used in another DB2 instance on the same database server.
There is the concept of the current instance. The variable DB2INSTANCE defines the
current DB2 instance. This impacts command processing. For example the DB2 command
GET DBM CFG will return the configuration of the current instance, and UPDATE DBM
CFG will change the options for the current instance.
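For example (inst1 is a placeholder instance name; the commented db2 commands assume that instance exists on this server):

```shell
export DB2INSTANCE=inst1     # make inst1 the current instance
# Commands now run against inst1, for example:
#   db2 get dbm cfg
#   db2 update dbm cfg using NUMDB 16
```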
CREATE:
db2icrt -u <fenced_id> <instance_name>   (UNIX/Linux)
db2icrt <instance_name>                  (Windows)
DROP:
Instance must be stopped and no applications are allowed to be connected to
the databases in this instance
Does not remove (drop) databases
Removes the Instance
db2idrop <instance_name>
Notes:
Creating instances
Although an instance is created as part of the installation of the database manager, your
business needs might require you to create additional instances.
If you belong to the Administrative group on Windows, or you have root user authority on
Linux or UNIX operating systems, you can add additional instances. The computer where
you add the instance becomes the instance-owning computer (node zero). Ensure that you
add instances on a computer where a DB2 administration server resides. Instance user IDs
should not be root and should not have expired passwords.
Restrictions
On Linux and UNIX operating systems, additional instances cannot be created for
non-root installations.
If existing user IDs are used to create DB2 instances, make sure that the user IDs:
To add an instance using the command line, enter the command db2icrt instance_name.
When creating an instance on an AIX server, you must provide the fenced user ID, for
example:
DB2DIR/instance/db2icrt -u db2fenc1 db2inst1
When using the db2icrt command to add another DB2 instance, you should provide the
login name of the instance owner and optionally specify the authentication type of the
instance. The authentication type applies to all databases created under that instance. The
authentication type is a statement of where the authenticating of users will take place.
To drop a root instance, issue the db2idrop command. To drop non-root instances, you
must uninstall your DB2 database product.
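A sketch of the create and drop sequence on AIX (requires root authority; db2inst2 and db2fenc1 are placeholder user IDs, and DB2DIR stands for the installation path):

```shell
# Create a new instance owned by db2inst2, with fenced user db2fenc1
DB2DIR/instance/db2icrt -u db2fenc1 db2inst2

# Later: stop the instance first, then drop it (its databases are NOT removed)
DB2DIR/instance/db2idrop db2inst2
```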
db2start
db2stop
Notes:
Starting the instance on UNIX and Linux systems.
You might need to start or stop a DB2 database during normal business operations. For
example, you must start an instance before you can perform some of the following tasks:
connect to a database on the instance, precompile an application, bind a package to a
database, or access host databases.
On Windows based DB2 servers, in order to successfully launch the DB2 database
instance as a service, the user account must have the correct privilege as defined by the
Windows operating system to start a Windows service. The user account can be a member
of the Administrators, Server Operators, or Power Users group. When extended security is
enabled, only members of the DB2ADMNS and Administrators groups can start the
database by default.
By default, the db2start command launches the DB2 database instance as a Windows
service. The DB2 database instance on Windows can still be run as a process by
specifying the /D parameter on the db2start command. The DB2 database instance can
also be started as a service by using the Control Panel or the NET START command.
Stopping instances using db2stop.
You might need to stop the current instance of the database manager.
1. Log in or attach to an instance with a user ID or name that has SYSADM, SYSCTRL, or
SYSMAINT authority on the instance; or, log in as the instance owner.
2. Display all applications and users that are connected to the specific database that you
want to stop. To ensure that no vital or critical applications are running, list applications.
You need SYSADM, SYSCTRL, or SYSMAINT authority for this activity.
3. Force all applications and users off the database. You require SYSADM or SYSCTRL
authority to force users.
4. If command line processor sessions are attached to an instance, you must run the
TERMINATE command to end each session before running the db2stop command.
5. From the command line, enter the db2stop command. The DB2 database manager
applies the command to the current instance.
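Steps 1 through 5 above can be sketched as the following sequence (assumes SYSADM authority; db2inst1 is a placeholder instance name):

```shell
db2 attach to db2inst1       # 1. attach to the instance
db2 list applications        # 2. display connected applications and users
db2 force application all    # 3. force all applications off the databases
db2 terminate                # 4. end this CLP session's back-end process
db2stop                      # 5. stop the current instance
```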
Notes:
Use the profile registries to control the environment variables from one computer. Different
levels of support are provided through the different profiles.
A DB2 database is affected by the following profile registries:
The DB2 instance-level profile registry contains registry variables for an instance.
Values that are defined in this registry override their settings in the global registry.
The DB2 global-level profile registry contains settings that are used if a registry variable
is not set for an instance. All instances that pertain to a particular copy of DB2
Enterprise Server Edition can access this registry.
Most environment variables are set in the DB2 database profile registries by using the
db2set command. The few variables that are set outside the profile registries require
different commands depending on your operating system.
To use the db2set command, make sure you have the privileges that are required to set
registry variables.
On Linux and UNIX operating systems, you must have the following privileges:
3-10 DB2 10 for LUW: Basic Admin for AIX
Notes:
The visual shows a number of ways the db2set command can be used to display, set and
reset the DB2 registry variables.
The following examples show how to issue the various parameters with the db2set
command:
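The specific examples from the visual are not reproduced in this copy; the following is a representative set (the instance name db2inst1 is illustrative):

   db2set -all                        (display all defined registry variables, at all levels)
   db2set -lr                         (list all supported registry variables)
   db2set DB2COMM=tcpip               (set DB2COMM for the current instance)
   db2set -g DB2COMM=tcpip            (set DB2COMM in the global-level registry)
   db2set -i db2inst1 DB2COMM=tcpip   (set DB2COMM for the instance db2inst1)
   db2set DB2COMM=                    (unset DB2COMM for the current instance)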
Notes:
The ENV_GET_REG_VARIABLES table function returns the DB2 registry settings from
one or all database members.
Syntax
>>-ENV_GET_REG_VARIABLES--(--member--)-------------------------><
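A sketch of how the function might be queried (passing -2 for the member argument requests all members and -1 the current member; the column names follow the DB2 10.1 documentation):

   SELECT MEMBER, REG_VAR_NAME, REG_VAR_VALUE
     FROM TABLE(SYSPROC.ENV_GET_REG_VARIABLES(-2))
     ORDER BY MEMBER, REG_VAR_NAME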
Excerpt of GET DBM CFG output (Windows example):

 Maximum total of files open               (MAXTOTFILOP) = 16000
 CPU speed (millisec/instruction)             (CPUSPEED) = 4.605357e-007
 Communications bandwidth (MB/sec)      (COMM_BANDWIDTH) = 1.000000e+002
 Max number of concurrently active databases     (NUMDB) = 8
 Federated Database System Support           (FEDERATED) = NO
 Database manager configuration release level            = 0x0d00
 Transaction processor monitor name        (TP_MON_NAME) =
 Default charge-back account           (DFT_ACCOUNT_STR) =
 Java Development Kit installation path       (JDK_PATH) = C:\IBM\SQLLIB\java\jdk
 Diagnostic error capture level              (DIAGLEVEL) = 3
 Notify Level                              (NOTIFYLEVEL) = 3
 Diagnostic data directory path               (DIAGPATH) =
 Size of rotating db2diag & notify logs (MB)  (DIAGSIZE) = 0
Notes:
When a DB2 instance is created, a new database manager configuration file with default
values is created for the instance. The instance configuration can be listed using the DB2
command GET DBM CFG.
The DBM configuration can be updated using the DB2 command UPDATE DBM CFG.
For example, the following command could be used to set the instance configuration option
SVCENAME, which sets the TCP/IP port number for client connections:
db2 update dbm cfg using svcename 5023
Tools like Data Studio support listing and setting database manager configuration options.
The tool can also be used to stop and restart the DB2 instance from a client system to
implement instance changes that require restarting the instance.
Some configuration parameters can be set to AUTOMATIC, allowing the database
manager to automatically manage these parameters to reflect the current resource
requirements. To turn off the AUTOMATIC setting of a configuration parameter while
maintaining the current internal setting, use the MANUAL keyword with the UPDATE
DBM CFG command.
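For example, a hypothetical sequence (the choice of INSTANCE_MEMORY as the parameter is illustrative):

   db2 get dbm cfg
   db2 update dbm cfg using svcename 5023
   db2 update dbm cfg using instance_memory automatic
   db2 update dbm cfg using instance_memory manual

The last command keeps the currently computed value of the parameter but stops the database manager from continuing to tune it automatically.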
Unit summary
Having completed this unit, you should be able to:
Specify the key features of an Instance
Create and drop a DB2 Instance
Use db2start and db2stop commands to manage a DB2
instance
Display and set DB2 registry variables
Describe and modify the Database Manager Configuration
Student exercise
References
- Database Administration Concepts and Configuration Reference
- Getting Started with DB2 Installation and Administration on Linux and Windows
Copyright IBM Corp. 1999, 2012
Unit objectives
After completing this unit, you should be able to:
Review specifics of creating a database
Explore the System Catalog tables and views
Check and update Database configuration parameter settings
Compare DMS, SMS and Automatic Storage managed table spaces
Describe how to set up and manage a DB2 database with Automatic Storage
enabled
Define Storage Groups to manage databases with different classes of
storage available
Differentiate between table spaces, containers, extents, and pages
Create and alter table spaces
Create buffer pools to handle multiple page sizes or improve table access
efficiency
Use DB2 commands and SQL statements to display current table space
statistics and status information
Copyright IBM Corporation 2012
Notes:
Here are the objectives for this lecture unit.
Figure: Databases contain table spaces, which in turn contain tables. For example, one
database (database2) has Tablespace A (Table 1, Table 2) and Tablespace B (Table 3,
Table 4), while another database has its own Tablespace A (Table 1, Table 2).
Notes:
Each instance can have one or more databases defined in it.
Each database can have three or more table spaces associated with it. The table spaces
are a logical level between the database and the tables stored in that database. Table
spaces are created within a database, and tables are created within table spaces.
Notes:
When a database is created, several types of storage are required to support the new
database.
Database Path
- A set of Database Control Files is needed for each database
The control files include the Database Configuration file, the Recovery History file, a
set of Log Control files, Tablespace Control files, Buffer pool Control files, and
others.
- A set of Database log files are needed to support each database
- The default location for the database path is defined by the dftdbpath configuration
option in the DBM CFG
The database path needs to be a local file system
Automatic Storage paths can be defined when a database is created to enable
automatic storage management for table spaces. Prior to DB2 10.1, a single set of
storage paths could be assigned to one database. Beginning with DB2 10.1, multiple
storage paths can be assigned to storage groups. This allows table space storage to be
managed at a level higher than the individual table. The Automatic Storage paths that are
defined when a database is created become the initial default storage group for the
database.
Each database is created with three default System Table spaces. If automatic storage
is enabled, DB2 will use Automatic Storage management, but these can be defined to
use any supported type of table space management. The three table spaces are:
- SYSCATSPACE: DB2 catalog tables
- TEMPSPACE1: System Temporary tables, provide work space for sorting and utility
processing
- USERSPACE1: Initial table space for defining user tables and indexes
Notes:
DB2 supports three types of storage management for table spaces. All three types can be
used in a single database.
The Storage Management type is set when a table space is created.
Table spaces are a logical level between the database and the tables stored in that
database.
Database managed storage
In a DMS (Database Managed Storage) table space, the database manager controls the
storage space. The storage model consists of a limited number of devices or files whose
space is managed by DB2 Database for Linux, UNIX, and Windows. The database
administrator decides which devices and files to use, and DB2 manages the space on
those devices and files. The table space is essentially an implementation of a special
purpose file system designed to best meet the needs of the database manager. When a
table is dropped in a DMS table space, the assigned pages become available for other
objects in that table space, but the disk space is not released.
DMS table spaces are different from SMS table spaces in that space for DMS table spaces
is allocated when the table space is created. For SMS table spaces, space is allocated as
needed. A DMS table space containing user defined tables and data can be defined as a
regular or large table space that stores any table data or index data.
System-managed storage
In an SMS (System-Managed Storage) table space, the operating system's file system
manager allocates and manages the space where the table is stored. The storage model
typically consists of many files, representing table objects, stored in the file system space.
The user decides on the location of the files, DB2 Database for Linux, UNIX, and Windows
controls their names, and the file system is responsible for managing them. By controlling
the amount of data written to each file, the database manager distributes the data evenly
across the table space containers. Each table has at least one SMS physical file
associated with it. When a table is dropped, the associated files are deleted, so the disk
space is freed. SMS table spaces do not have a defined size limit.
Important
The SMS table space type has been deprecated in Version 10.1 for user-defined
permanent table spaces and might be removed in a future release. The SMS table space
type is not deprecated for catalog and temporary table spaces.
Auto-growth for DMS managed table spaces will stop when any of the following happen:
- The value specified for MAXSIZE is reached, or
- DB2's table space size limit has been reached (if MAXSIZE is NONE), or
- One of the containers in the last range cannot grow any further (DB2 will not extend
  the others in the last range)
To continue growth, you can add space to the file system, or add a new stripe set.
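A sketch of adding a new stripe set to a DMS table space (the table space name, file path, and page count are illustrative):

   db2 "ALTER TABLESPACE tbsp1 BEGIN NEW STRIPE SET (FILE '/db2fs2/cont2' 10000)"

New extents are then allocated in the new stripe set once the original containers are full.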
Default table spaces
DB2 uses automatic storage management for the three default table spaces unless
automatic storage is not enabled for the database.
The default page size for these table spaces is 4 K. The CREATE DATABASE statement
can specify an alternate page size.
The Default storage paths on a DB2 for Windows system in a DB2 instance named DB2
with a database name of SAMPLE would be the following:
Syscatspace: C:\DB2\NODE0000\SAMPLE\T0000000 (DMS-file)
Userspace1: C:\DB2\NODE0000\SAMPLE\T0000002 (DMS-file)
Tempspace1: C:\DB2\NODE0000\SAMPLE\T0000001 (SMS-folder)
The CREATE DATABASE syntax diagram, summarized:

CREATE DATABASE database-name
    [AT DBPARTITIONNUM]
    [AUTOMATIC STORAGE YES|NO]
    [ON path|drive [, ...] [DBPATH ON path|drive]]
    [ALIAS db-alias]
    [USING CODESET codeset TERRITORY territory]
    [COLLATE USING collation | IDENTITY]
    [PAGESIZE 4096 | PAGESIZE n K]
    [DFT_EXTENT_SZ dft-extentsize]
    [RESTRICTIVE]
    [CATALOG|USER|TEMPORARY TABLESPACE tblspace-defn]
    [WITH "comment-string"]
    [autoconfigure-settings]

tblspace-defn:
    MANAGED BY SYSTEM USING ('container-string', ...)
  | MANAGED BY DATABASE USING (FILE|DEVICE 'container-string' num-pages, ...)
    [EXTENTSIZE num-pages] [PREFETCHSIZE num-pages]
    [OVERHEAD number-of-milliseconds] [TRANSFERRATE number-of-milliseconds]
Notes:
The CREATE DATABASE command initializes a new database with an optional
user-defined collating sequence, creates the three initial table spaces, creates the system
tables, and allocates the recovery log file. When you initialize a new database, the
AUTOCONFIGURE command is issued by default.
Note
Note: When the instance and database directories are created by the DB2 database
manager, the permissions are accurate and should not be changed.
When the CREATE DATABASE command is issued, the Configuration Advisor also runs
automatically. This means that the database configuration parameters are automatically
tuned for you according to your system resources. In addition, Automated Runstats is
enabled. To disable the Configuration Advisor from running at database creation, refer to
the DB2_ENABLE_AUTOCONFIG_DEFAULT registry variable. To disable Automated
Runstats, refer to the auto_runstats database configuration parameter.
Adaptive Self Tuning Memory is also enabled by default for single partition databases. To
disable Adaptive Self Tuning Memory by default, refer to the self_tuning_mem database
configuration parameter. For multi-partition databases, Adaptive Self Tuning Memory is
disabled by default.
If no code set is specified on the CREATE DATABASE command, then the collations
allowed are: SYSTEM, IDENTITY_16BIT, language-aware-collation, and
locale-sensitive-collation (SQLCODE -1083). The default code set for a database is
UTF-8. If a particular code set and territory is needed for a database, the required code set
and territory should be specified in the CREATE DATABASE command.
Example 1: Creating a database on a UNIX or Linux operating system:
To create a database named TESTDB1 on path /DPATH1 using /DATA1 and /DATA2 as
the storage paths defined to the default storage group IBMSTOGROUP, use the
following command:
CREATE DATABASE TESTDB1 ON '/DATA1','/DATA2' DBPATH ON '/DPATH1'
To create a database named TESTDB2 on drive D:, with storage on E:\DATA, use the
following command:
CREATE DATABASE TESTDB2 ON 'E:\DATA' DBPATH ON 'D:'
In this example, E:\DATA is used as the storage path defined to the default storage
group IBMSTOGROUP.
Notes:
The visual shows several examples of the CREATE DATABASE commands.
The first example: create database sales1 on /dbsales1
The Database Path for this database would be /dbsales1. The database would have
Automatic Storage enabled with one Automatic Storage path, /dbsales1 being the same as
the database path.
The second example: create database sales2 automatic storage no on /dbsales2
The Database Path for this database would be /dbsales2. The database would have
Automatic Storage disabled. SMS table space management would be used for the three
system table spaces and the containers would use the database path.
The next example: create database sales3 on /dbauto3 dbpath on /dbsales3
The Database Path for this database would be /dbsales3. The database would have
Automatic Storage enabled with one Automatic Storage path, /dbauto3.
Completed by DB2 during database creation (1 of 2)
1. Creates database in the specified subdirectory
2. If Automatic storage is enabled a default storage group named
IBMSTOGROUP is created
3. Creates SYSCATSPACE, TEMPSPACE1, and USERSPACE1 table spaces
4. Creates the system catalog tables and recovery logs
5. Catalogs database in local database directory and system database
directory
6. Stores the specified code set, territory, and collating sequence
7. Creates the schemas SYSCAT, SYSFUN, SYSIBM, and SYSSTAT
8. Binds database manager bind files to the database (db2ubind.lst)
DB2 CLI packages are automatically bound to databases when the databases
are created or migrated. If a user has intentionally dropped a package, then
you must rebind db2cli.lst.
Notes:
When you create a database, the database manager:
1. Creates the database in the specified path (or default path)
2. If automatic storage is enabled, the defined storage paths will be assigned to a storage
group named IBMSTOGROUP.
3. Creates SYSCATSPACE, TEMPSPACE1, and USERSPACE1 table spaces.
4. Creates the system catalog tables and recovery log.
5. Catalogs the database in the following database directories:
- Server's local database directory on the path indicated by path or, if the path is not
specified, the default database path defined in the database manager system
configuration file. A local database directory resides on each operating system file
system (Linux/UNIX) or drive (Windows) that contains a database.
- Server's system database directory for the attached instance. The resulting
directory entry will contain the database name and a database alias. If the command
was issued from a remote client, the client's system database directory is also
updated with the database name and alias.
Creates a system or a local database directory if neither exists. If specified, the
comment and code set values are placed in both directories.
6. Stores the specified codeset, territory, and collating sequence. A flag is set in the
database configuration file if the collating sequence consists of unique weights, or if it is
the identity sequence.
7. Creates SYSCAT, SYSFUN, SYSIBM, and SYSSTAT schemata with SYSIBM as the
owner.
8. Binds previously defined database manager bind files to the database (these are listed
in db2ubind.lst).
DB2 CLI packages are automatically bound to databases when the databases are created
or migrated. If a user has intentionally dropped a package, then you must rebind db2cli.lst.
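A sketch of rebinding the CLI packages (the database name sample is illustrative; the bind list files are shipped in the bnd directory of the DB2 installation):

   db2 connect to sample
   db2 bind @db2cli.lst blocking all grant public
   db2 terminate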
Completed by DB2 during database creation (2 of 2)
9. Grants the following privileges:
ACCESSCTRL, DATAACCESS, DBADM, and SECADM
privileges to database creator
SELECT privilege on system catalog tables and views to PUBLIC
UPDATE access to the SYSSTAT catalog views
BIND and EXECUTE privilege to PUBLIC for each successfully
bound utility
CREATETAB, BINDADD, IMPLICIT_SCHEMA, and CONNECT
authorities to PUBLIC
USE privilege on USERSPACE1 table space to PUBLIC
USAGE privilege on the WLM workload SYSDEFAULTUSERWORKLOAD to PUBLIC
Notes:
Default privileges granted on creating a database
When you create a database, default database level authorities and default object level
privileges are granted to you within that database.
The authorities and privileges that you are granted are listed according to the system
catalog views where they are recorded:
SYSCAT.DBAUTH
The database creator is granted the following authorities:
ACCESSCTRL
DATAACCESS
DBADM
SECADM
DBMS_LOB
DBMS_OUTPUT
DBMS_SQL
DBMS_STANDARD
DBMS_UTILITY
SYSCAT.PACKAGEAUTH
The database creator is granted the following privileges:
CONTROL on all packages created in the NULLID schema
BIND with GRANT on all packages created in the NULLID schema
EXECUTE with GRANT on all packages created in the NULLID schema
In a non-restrictive database, the special group PUBLIC is granted the following
privileges:
BIND on all packages created in the NULLID schema
EXECUTE on all packages created in the NULLID schema
SYSCAT.SCHEMAAUTH
In a non-restrictive database, the special group PUBLIC is granted the following privileges:
CREATEIN on schema SQLJ
CREATEIN on schema NULLID
SYSCAT.TBSPACEAUTH
In a non-restrictive database, the special group PUBLIC is granted the USE privilege on
table space USERSPACE1.
SYSCAT.WORKLOADAUTH
In a non-restrictive database, the special group PUBLIC is granted the USAGE privilege on
SYSDEFAULTUSERWORKLOAD.
SYSCAT.VARIABLEAUTH
In a non-restrictive database, the special group PUBLIC is granted the READ privilege on
schema global variables in the SYSIBM schema, except for the following variables:
SYSIBM.CLIENT_ORIGUSERID
SYSIBM.CLIENT_USRSECTOKEN
Member-specific directory
The member-specific directory has the path:
your_instance/NODExxxx/SQLxxxx/MEMBERxxxx
/database/inst481/NODE0000/SQL00001:
db2rhist.asc
db2rhist.bak
db2rhist.lock
HADR
load
LOGSTREAM0000 (Default database logs)
MEMBER0000
SQLDBCONF
SQLOGAB
SQLOGCTL.GLFH.1
SQLOGCTL.GLFH.2
SQLOGCTL.GLFH.LCK
SQLSGF.1
SQLSGF.2
SQLSPCS.1
SQLSPCS.2
/database/inst481/NODE0000/SQL00001
/MEMBER0000:
db2event
DB2TSCHG.HIS
HADR
SQLBP.1
SQLBP.2
SQLDBCONF
SQLINSLK
SQLOGCTL.LFH.1
SQLOGCTL.LFH.2
SQLOGMIR.LFH
SQLTMPLK
/database/inst481/NODE0000/SQL00001
/LOGSTREAM0000:
sqldbbak
sqldbdir
sqldbins
S0000000.LOG
S0000001.LOG
S0000002.LOG
Notes:
Database directories and files
When you create a database, information about the database including default
information is stored in a directory hierarchy.
The hierarchical directory structure is created for you. You can specify the location of
the structure by specifying a directory path or drive for the CREATE DATABASE
command; if you do not specify a location, a default location is used.
In the directory that you specify as the database path in the CREATE DATABASE
command, a subdirectory that uses the name of the instance is created.
Within the instance-name subdirectory, the partition-global directory is created. The
partition-global directory contains global information associated with your new
database. The partition-global directory is named NODExxxx/SQLyyyyy, where xxxx is
the data partition number and yyyyy is the database token (numbered >=1).
to determine which log files to process during table space recovery. You can examine
the contents of history files in a text editor.
Logging-related files. The global log control files, SQLOGCTL.GLFH.1,
SQLOGCTL.GLFH.2, contain recovery information at the database level, for example,
information related to the addition of new members while the database is offline and
maintaining a common log chain across members. The log files themselves are stored
in the LOGSTREAMxxxx directories (one for each member) in the partition-global
directory.
Locking files. The instance database lock files, SQLINSLK, and SQLTMPLK, help to
ensure that a database is used by only one instance of the database manager.
Automatic storage containers
Member-specific directory
The member-specific directory has the path: /NODExxxx/SQLxxxx/MEMBERxxxx
This directory contains objects associated with the first database created, and
subsequent databases are given higher numbers: SQL00002, and so on. These
subdirectories differentiate databases created in this instance on the directory that you
specified in the CREATE DATABASE command.
The database directory contains the following files:
Buffer pool information files. The files SQLBP.1 and SQLBP.2 contain buffer pool
information. These files are duplicates of each other for backup purposes.
Local event monitor files.
Logging-related files. The log control files, SQLOGCTL.LFH.1, its mirror copy
SQLOGCTL.LFH.2, and SQLOGMIR.LFH, contain information about the active logs. In
a DB2 pureScale environment, each member has its own log stream and set of local
LFH files, which are stored in each member-specific directory.
Hint
Map the log subdirectory to disks that you are not using for your data. By doing so, you
might restrict disk problems to your data or the logs, instead of having disk problems for
both your data and the logs. Mapping the log subdirectory to disks that you are not
using for your data can provide a substantial performance benefit because the log files
and database containers do not compete for movement of the same disk heads. To
change the location of the log subdirectory, use the newlogpath database configuration
parameter.
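A sketch of moving the logs (the database name and path are illustrative):

   db2 update db cfg for sales using NEWLOGPATH /db2logs/sales

The new log path takes effect only after all applications disconnect and the database is next activated.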
The local configuration file. The local SQLDBCONF file contains database configuration
information. Do not edit this file. To change configuration parameters, use the UPDATE
DATABASE CONFIGURATION command.
Default table space containers with Automatic Storage
CREATE DATABASE DSS ON /dbauto DBPATH ON /database    (DB2INSTANCE=inst20)

/dbauto
    inst20
        NODE0000
            DSS
                T0000000 (SYSCATSPACE): C0000000.CAT
                T0000001 (TEMPSPACE1):  C0000000.TMP
                T0000002 (USERSPACE1):  C0000000.LRG
Notes:
If Automatic Storage is enabled when a new database is created, DB2 will use Automatic
Storage Management for the three system default table spaces. The number of containers
and the names will depend on the number of and names of the Automatic Storage paths
defined.
The example assumes that the database named DSS is created in the instance named
inst20 with a single Automatic Storage path defined, named /dbauto. The table space
SYSCATSPACE will only have containers on NODE0000. The other two table spaces,
TEMPSPACE1 and USERSPACE1, will have containers defined, but will not contain any
data.
Figure: The SYSCAT catalog views and the SYSSTAT views are defined over the base
catalog tables, such as SYSIBM.SYSTABLES and SYSIBM.SYSCOLUMNS.
Notes:
A set of system catalog tables is created and maintained for each database. These tables
contain information about the definitions of the database objects and also security
information about the type of access users have to these objects.
The database manager creates and maintains two sets of catalog views. All of the system
catalog views are created when a database is created with the CREATE DATABASE
command. The catalog views cannot be explicitly created or dropped. The views are within
the SYSCAT schema and select privilege on all views is granted to PUBLIC by default. A
second set of views formed from a subset of those within the SYSCAT schema contains
statistical information used by the SQL Optimizer. The views within the SYSSTAT schema
contain some updateable columns.
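For example, the catalog views can be queried with ordinary SQL, and updatable SYSSTAT columns can be changed to influence the optimizer (the schema and table names are illustrative):

   SELECT TABSCHEMA, TABNAME, TYPE
     FROM SYSCAT.TABLES
     WHERE TABSCHEMA NOT LIKE 'SYS%'

   UPDATE SYSSTAT.TABLES SET CARD = 100000
     WHERE TABSCHEMA = 'INST20' AND TABNAME = 'SALES'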
Figure: Storage hierarchy - a TABLE SPACE is made up of CONTAINERs, containers are
divided into EXTENTs, and extents consist of DATA PAGEs.
Notes:
With DB2 LUW a table space is defined to have one or more containers.
An extent is a block of storage within a table space container. It represents the number of
pages of data that will be written to a container before writing to the next container. When
you create a table space, you can choose the extent size based on your requirements for
performance and storage management.
When selecting an extent size, consider:
The size and type of tables in the table space.
Space in DMS table spaces is allocated to a table one extent at a time. As the table is
populated and an extent becomes full, a new extent is allocated. DMS table space
container storage is pre-reserved which means that new extents are allocated until the
container is completely used.
Space in SMS table spaces is allocated to a table either one extent at a time or one
page at a time. As the table is populated and an extent or page becomes full, a new
extent or page is allocated until all of the extents or pages in the file system are used.
When using SMS table spaces, multipage file allocation is allowed. Multipage file
allocation allows extents to be allocated instead of a page at a time.
Figure: Container types - SMS table spaces use directory containers; DMS table spaces
use file or device containers.
Notes:
When an SMS managed or DMS managed table space is created, one or more containers
will be defined. For SMS table spaces, the container name is a disk directory. DB2 will use
the directory to store sets of files associated with the database objects. For DMS table
spaces, the container names can be the name of the file to create or the name of a defined
raw device to be used by DB2 for disk storage.
For Automatic Storage managed table spaces, DB2 will create a set of containers with a
predefined naming scheme when the table space is created.
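A sketch of displaying table space and container information with the monitoring table functions (both take a table space name, or NULL for all table spaces, and a member number, where -2 means all members):

   db2 "SELECT TBSP_NAME, TBSP_TYPE, TBSP_CONTENT_TYPE
          FROM TABLE(MON_GET_TABLESPACE(NULL, -2))"
   db2 "SELECT TBSP_NAME, CONTAINER_NAME, CONTAINER_TYPE
          FROM TABLE(MON_GET_CONTAINER(NULL, -2))"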
Writing to containers
- DFT_EXTENT_SZ defined at database level
- EXTENTSIZE defined at table space level
- Data written in round-robin manner
Figure: Extents of table space TBS are written round-robin across Container 0 and
Container 1.
Notes:
An extent is an allocation of contiguous space within a container of a table space. Extents
are allocated to a single database object and consist of multiple pages. The default extent
size is 32 pages, but a different value can be specified when creating a table space.
Data for an object is stored on a container by extents. When data for an object is written,
the database system stripes the data across all the containers in the table space based on
the extent size.
DFT_EXTENT_SZ is a database configuration parameter. If you do not specify an extent
size when the table space is created, DFT_EXTENT_SZ will be used. The default size for
DFT_EXTENT_SZ is 32 pages. If you do not alter this value or explicitly indicate an extent
size when you create a table space, all your table spaces within the database will have this
default value. The range of values for DFT_EXTENT_SZ is between two and 256 pages.
You can specify extent size at the table space level. This can be done at table space
creation with the parameter EXTENTSIZE. Carefully determine the correct size, as once it
is set for a table space, it cannot be altered. This size can have an impact on space
utilization and performance.
The database manager will try to evenly distribute the table among containers. In doing so,
the database manager writes an extent of pages to each container before writing to the
next container. Once the database manager has written an extent to all the containers
allocated to a table space, it will write the next extent to the first container written to in that
table space. This round-robin process of writing to the containers is designed to balance
the workload across the containers of the table space.
DB2 utilities like BACKUP and LOAD are designed to perform I/O at the extent level, so
larger extents would require fewer I/O operations. DB2 will use the extent as an efficient
way to read groups of pages when scanning a table.
Large RIDs
Regular table spaces use 4-byte RIDs: up to 16M pages and 255 rows per page, for about
4x10^9 rows per table. Maximum table space size by page size:
4 KB = 64 GB; 8 KB = 128 GB; 16 KB = 256 GB; 32 KB = 512 GB
Large table spaces use 6-byte RIDs: roughly 2K rows per page and about 1.1x10^12 rows.
Maximum table space size by page size:
4 KB = 8 TB; 8 KB = 16 TB; 16 KB = 32 TB; 32 KB = 64 TB
Notes:
Storage limitations for regular table spaces
If a table space is created as a Regular table space, the 4-byte Row Identifiers (RIDs) used
place a limit on the size of DB2 tables and table spaces. The three bytes used for page
numbers provide addressing for up to 16 million pages. The one byte used for slot
numbers limits pages to at most 255 rows per page. That provides an overall limit of about
4 billion rows.
The maximum storage for table and table spaces depends on the page size selected for
the table space. With 16 million pages, a 4 KB page table space would be limited to 64
Gigabytes of storage. Using 8 KB pages doubles the limit to 128 GB. With 16 KB pages,
the limit is 256 GB and a 32 KB page size allows up to 512 GB to be addressed.
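As a quick check of the arithmetic: 3-byte page numbers address 2^24 = 16,777,216 pages, so
with 4 KB pages the limit is 2^24 x 4 KB = 64 GB, and each doubling of the page size
doubles the limit, up to 2^24 x 32 KB = 512 GB.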
For tables in SMS table spaces, the limit applies at the table level. For tables in
DMS-managed table spaces, the limit applies to the table space. So creating a table with 6
indexes in a DMS table space with 8 KB pages would place a combined size limit of 128 GB
on the table and its indexes. Using DMS table spaces, the indexes for a table can be stored in a
different regular or large DMS table space, which would allow a large table to use the full
capacity of a DMS regular table space.
CREATE [LARGE | REGULAR | SYSTEM TEMPORARY | USER TEMPORARY] TABLESPACE tablespace-name
   [IN db-partition-group-name]
   [PAGESIZE integer [K]]                         (default: PAGESIZE 4096)
   [MANAGED BY ...]
   [PREFETCHSIZE {AUTOMATIC | num-pages | integer {K | M | G}}]
   [BUFFERPOOL bufferpool-name]
   [OVERHEAD {number-of-milliseconds | INHERIT}]
   [TRANSFERRATE {number-of-milliseconds | INHERIT}]
   [FILE SYSTEM CACHING | NO FILE SYSTEM CACHING]
   [DROPPED TABLE RECOVERY {ON | OFF}]
Notes:
The visual shows a portion of the syntax used to create a table space.
The default for table space type is Large. The default table space management type is
automatic storage. With DB2 10.1, automatic storage table spaces can be assigned a
named storage group. This directs DB2 to allocate the containers from a specific set of
storage paths that may have different device characteristics.
When a table space is created, the page size and extent size are fixed and cannot be
altered. Table spaces can be assigned to use a specific buffer pool that has a matching
page size.
The OVERHEAD and TRANSFERRATE options provide performance estimates that will be used
by the DB2 optimizer to determine efficient access plans. The settings for OVERHEAD and
TRANSFERRATE can be inherited from the storage group for automatic storage
table spaces.
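For example, an automatic storage table space could be defined to inherit its I/O cost estimates from its storage group (the table space name is hypothetical):
CREATE TABLESPACE APPDATA
   OVERHEAD INHERIT
   TRANSFERRATE INHERIT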
CL213G
Srinivas Kondaveeti
8/24/15
skondaveeti@massmutual.com
8/27/15
TP-051240
Virtual Eastern
TR-214053
99999
V5.4
Student Notebook
Uempty
   [AUTORESIZE {YES | NO}]
   [INITIALSIZE integer {K | M | G}]
   [MAXSIZE {integer {K | M | G} | NONE}]

system-containers:
   USING ( 'container-string', ... )  [on-db-partitions-clause]

database-containers:
   USING container-clause  [on-db-partitions-clause]

container-clause:
   ( {FILE | DEVICE} 'container-string' {num-pages | integer {K | M | G}}, ... )

on-db-partitions-clause:
   ON {DBPARTITIONNUM | DBPARTITIONNUMS} ( db-partition-number1 [TO db-partition-number2], ... )
Notes:
Here are some additional options that can be selected when creating a table space.
AUTORESIZE
Specifies whether or not the auto-resize capability of a DMS table space or an automatic
storage table space is to be enabled. Auto-resizable table spaces automatically increase
in size when they become full. The default is NO for DMS table spaces and YES for
automatic storage table spaces.
INITIALSIZE integer K | M | G
Specifies the initial size, per database partition, of an automatic storage table space. This
option is only valid for automatic storage table spaces. The integer value must be followed
by K (for kilobytes), M (for megabytes), or G (for gigabytes). Note that the actual value used
might be slightly smaller than what was specified, because the database manager strives
to maintain a consistent size across containers in the table space. Moreover, if the table
space is auto-resizable and the initial size is not large enough to contain metadata that
must be added to the new table space, the database manager will continue to extend the
table space by the value of INCREASESIZE until there is enough space. If the INITIALSIZE
clause is not specified, the database manager determines an appropriate value. The value
for integer must be at least 48 K.
Copyright IBM Corp. 1999, 2012
INCREASESIZE integer PERCENT or INCREASESIZE integer K | M | G
Specifies the amount, per database partition, by which a table space that is enabled for
auto-resize will automatically be increased when the table space is full, and a request for
space has been made. The integer value must be followed by:
- PERCENT to specify the amount as a percentage of the table space size at the time
that a request for space is made. When PERCENT is specified, the integer value
must be between 0 and 100 (SQLSTATE 42615).
- K (for kilobytes), M (for megabytes), or G (for gigabytes) to specify the amount in
bytes
Note that the actual value used might be slightly smaller or larger than what was
specified, because the database manager strives to maintain consistent growth across
containers in the table space. If the table space is auto-resizable, but the
INCREASESIZE clause is not specified, the database manager determines an
appropriate value.
MAXSIZE integer K | M | G or MAXSIZE NONE
Specifies the maximum size to which a table space that is enabled for auto-resize can
automatically be increased. If the table space is auto-resizable, but the MAXSIZE
clause is not specified, the default is NONE.
integer
Specifies a hard limit on the size, per database partition, to which a DMS table space or
an automatic storage table space can automatically be increased. The integer value
must be followed by K (for kilobytes), M (for megabytes), or G (for gigabytes). Note that
the actual value used might be slightly smaller than what was specified, because the
database manager strives to maintain consistent growth across containers in the table
space.
NONE
Specifies that the table space is to be allowed to grow to file system capacity, or to the
maximum table space size (described in "SQL and XML limits").
DMS managed table spaces require container definitions that include a specific size, while
SMS managed table spaces require container definitions with no size specified.
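For example, a DMS table space with auto-resize enabled and capped growth might be defined like this (the name, path, and sizes are hypothetical):
CREATE TABLESPACE DMSTS
   MANAGED BY DATABASE USING (FILE '/db2data/dmsts_c1' 100 M)
   AUTORESIZE YES
   INCREASESIZE 10 PERCENT
   MAXSIZE 2 G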
The visual depicts a database partition using automatic storage table spaces assigned to
three storage groups: SG_HOT (a new SSD RAID array, storage path /hot/fs1) for the most
recent data (Table Space 14, 2012Q1); SG_WARM (storage paths /warm/fs1 and /warm/fs2)
for recent data (Table Spaces 10 and 11, 2011Q2 through 2011Q4); and SG_COLD (physical
disk, storage paths /cold/fs1, /cold/fs2, and /cold/fs3) for older data (Table Spaces 1
through 9).
Notes:
A storage group is a named set of storage paths where data can be stored. Storage groups
are configured to represent different classes of storage available to your database system.
You can assign table spaces to the storage group that best suits the data. Only automatic
storage table spaces use storage groups.
A table space can be associated with only one storage group, but a storage group can
have multiple table space associations. To manage storage group objects you can use the
CREATE STOGROUP, ALTER STOGROUP, RENAME STOGROUP, DROP and
COMMENT statements.
With the table partitioning feature, you can place table data in multiple table spaces. Using
this feature, storage groups can store a subset of table data on fast storage while the
remainder of the data is on one or more layers of slower storage. Use storage groups to
support multi-temperature storage which prioritizes data based on classes of storage. For
example, you can create storage groups that map to the different tiers of storage in your
database system. Then the defined table spaces are associated with these storage groups.
When defining storage groups, ensure that you group the storage paths according to their
quality of service characteristics. The common quality of service characteristics for data
follow an aging pattern where the most recent data is frequently accessed and requires the
fastest access time (hot data) while older data is less frequently accessed and can tolerate
higher access time (warm data or cold data).
The priority of the data is based on:
Frequency of access
Acceptable access time
Volatility of the data
Application requirements
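For example, hot and cold tiers might be defined with statements like the following (the group names, paths, and table space names are hypothetical):
CREATE STOGROUP sg_hot ON '/ssd/fs1'
CREATE STOGROUP sg_cold ON '/hdd/fs1', '/hdd/fs2'
CREATE TABLESPACE current_data USING STOGROUP sg_hot
CREATE TABLESPACE history_data USING STOGROUP sg_cold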
Notes:
Use the CREATE STOGROUP statement to create storage groups. Creating a storage
group within a database assigns storage paths to the storage group.
If you create a database with the AUTOMATIC STORAGE NO clause it does not have a
default storage group. You can use the CREATE STOGROUP statement to create a default
storage group.
Note
Although you can create a database specifying the AUTOMATIC STORAGE NO clause,
the AUTOMATIC STORAGE clause is deprecated and might be removed from a future
release.
To create a storage group by using the command line, enter the following statement:
CREATE STOGROUP operational_sg ON '/filesystem1', '/filesystem2'
where operational_sg is the name of the storage group and /filesystem1 and
/filesystem2 are the storage paths to be added.
Important
Important: To help ensure predictable performance, all the paths that you assign to a
storage group should have the same media characteristics: latency, device read rate, and
size.
Notes:
For automatic storage table spaces, the USING STOGROUP option allows a new table
space to be assigned to an existing storage group.
If a database has storage groups, the default storage group is used when an automatic
storage managed table space is created without explicitly specifying the storage group.
When you create a database, a default storage group named IBMSTOGROUP is
automatically created. However, a database created with the AUTOMATIC STORAGE NO
clause, does not have a default storage group. The first storage group created with the
CREATE STOGROUP statement becomes the designated default storage group. There
can only be one storage group designated as the default storage group.
You can designate a default storage group by using either the CREATE STOGROUP or
ALTER STOGROUP statements. When you designate a different storage group as the
default storage group, there is no impact to the existing table spaces using the old default
storage group. To alter the storage group associated with a table space, use the ALTER
TABLESPACE statement.
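For example, a new table space could be assigned to a specific storage group, and later moved to another (the names are hypothetical):
CREATE TABLESPACE APPTS USING STOGROUP APP_DATA
ALTER TABLESPACE APPTS USING STOGROUP SG_HOT
Altering the storage group of a table space implicitly rebalances its data onto the new group's storage paths.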
You can determine which storage group is the default storage group by using the
SYSCAT.STOGROUPS catalog view.
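For example:
SELECT SGNAME, DEFAULTSG FROM SYSCAT.STOGROUPS
The DEFAULTSG column contains 'Y' for the designated default storage group.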
Notes:
You can use the ALTER STOGROUP statement to alter the definition of a storage group,
including setting media attributes, setting a data tag, or setting a default storage group. You
can also add and remove storage paths from a storage group.
If you add storage paths to a storage group and you want to stripe the extents of their table
spaces over all storage paths, you must use the ALTER TABLESPACE statement with the
REBALANCE option for each table space that is associated with that storage group.
If you drop storage paths from a storage group, you must use the ALTER TABLESPACE
statement with the REBALANCE option to move allocated extents off the dropped paths.
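For example, to add paths to a storage group and then redistribute the data of an associated table space (the names and paths are hypothetical):
ALTER STOGROUP APP_DATA ADD '/dbauto/path3', '/dbauto/path4'
ALTER TABLESPACE APPTS REBALANCE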
You can use the DB2 Work Load Manager (WLM) to define rules about how activities are
treated based on a tag that is associated with accessed data. You associate the tag with
data when defining a table space or a storage group.
STORAGE_GROUP  STORAGE_GROUP_ID  STORAGE_PATH   DB_STORAGE_PATH_STATE
-------------  ----------------  -------------  ---------------------
IBMSTOGROUP    0                 /dbauto/path1  IN_USE
APP_DATA       1                 /dbauto/path2  IN_USE

TOTAL_PATH_MB  PATH_FREE_MB
-------------  ------------
20940          5649
20940          5649

2 record(s) selected.
Figure 4-22. Query storage groups with SQL using the table function ADMIN_GET_STORAGE_PATHS
Notes:
The ADMIN_GET_STORAGE_PATHS table function returns a list of automatic storage
paths for each database storage group, including file system information for each storage
path.
Syntax
>>-ADMIN_GET_STORAGE_PATHS--(--storage_group_name--,--member--)-><
The schema is SYSPROC.
Table function parameters
storage_group_name - An input argument of type VARCHAR(128) that specifies a valid
storage group name in the currently connected database when this function is called. If
the argument is NULL or an empty string, information is returned for all storage groups
in the database. If the argument is specified, information is only returned for the
identified storage group.
member - An input argument of type INTEGER that specifies a valid member in the
same instance as the currently connected database when calling this function. Specify
-1 for the current database member, or -2 for all database members. If the NULL value
is specified, -1 is set implicitly.
Authorization
One of the following authorities is required to execute the routine:
EXECUTE privilege on the routine
DATAACCESS authority
DBADM authority
SQLADM authority
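For example, to list the storage paths of all storage groups on the current member (an empty string selects all storage groups, as described above):
SELECT * FROM TABLE(ADMIN_GET_STORAGE_PATHS('', -1)) AS T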
Name         Numpaths  NumDropPen  PathState  PathName
IBMSTOGROUP  2         0           InUse      /dbauto/path1
                                   InUse      /dbauto/path2
SG_HIGH      2         0           InUse      /dbauto/path1/sg_high
                                   InUse      /dbauto/path2/sg_high
SG_LOW       2         0           InUse      /dbauto/path1/sg_low
                                   InUse      /dbauto/path2/sg_low
Notes:
The visual shows an example of the db2pd command report for storage groups.
The report lists the defined storage groups, indicates which one is designated as the
default storage group, and shows the paths assigned to each storage group.
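A report like this can be generated with a command along these lines (the database name is hypothetical):
db2pd -db sample -storagegroups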
Notes:
The administration of table spaces using Automatic Storage is very easy. The CREATE
TABLESPACE syntax does not require the names of containers or the number of
containers to be defined. For Automatic Storage table spaces, the disk space is assigned
from a storage group. If no storage group is specified, a default storage group will be used.
This allows the DBA to monitor available space at the storage group level instead of each
individual table space. As long as there is space available in one of the defined Automatic
Storage paths, a table space can be automatically extended by DB2. Smaller databases
may only want a single storage group for the database.
When a table space is created, DB2 can create multiple containers, using all of the
available storage paths, which helps performance for table and index scans.
Additional storage path(s) can be added using an ALTER STOGROUP statement, to
support growth over time.
Automatic Storage management actually uses DMS and SMS table spaces under the
covers. A DMS table space is used for REGULAR and LARGE table spaces, while SMS is
used for SYSTEM and USER TEMPORARY table spaces.
The allocation of space for Automatic Storage table spaces can be controlled using the
options of CREATE and ALTER TABLESPACE, including:
INITIALSIZE: defaults to 32 MB, if not specified.
AUTORESIZE: can be set to YES or NO, with YES being the default for Regular and
Large table spaces.
INCREASESIZE: can be set to a specific amount or percentage increase.
MAXSIZE: can be used to define a limit on growth for the table space.
Examples: CREATE TABLESPACE statements for the table spaces USER1, TEMPTS, MYTS,
LRGTS, and USER2, described in the notes below.
Notes:
If a database is enabled for Automatic Storage, the MANAGED BY AUTOMATIC
STORAGE clause can be specified, or the MANAGED BY clause might be left out
completely (which implies Automatic Storage). No container definitions are provided in this
case because the DB2 database manager assigns the containers automatically.
In the example CREATE TABLESPACE statements shown above:
USER1: Created with Automatic Storage using the APP_DATA storage group, with an
initial size of 32 MB and will auto-resize.
TEMPTS: Created with Automatic Storage, as an SMS-managed temporary table space
with one directory on each Automatic Storage path. SMS-managed table spaces do not
have a MAXSIZE limit. The default storage group will be used.
MYTS: Created with Automatic Storage, with an initial size of 100 MB and will
auto-resize until it reaches 1 GB. The default storage group will be used.
LRGTS: Created with Automatic Storage, with an initial size of 5 GB and will not
auto-resize. The default storage group will be used.
USER2: Created with Automatic Storage, with an initial size of 500 MB and will
auto-resize. The default storage group will be used.
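Based on the descriptions above, the five statements might have looked roughly like the following sketch (the exact clauses on the original slide are not shown here, so this is a hedged reconstruction; TEMPTS is shown as a system temporary table space):
CREATE TABLESPACE USER1 USING STOGROUP APP_DATA INITIALSIZE 32 M AUTORESIZE YES
CREATE SYSTEM TEMPORARY TABLESPACE TEMPTS
CREATE TABLESPACE MYTS INITIALSIZE 100 M AUTORESIZE YES MAXSIZE 1 G
CREATE TABLESPACE LRGTS INITIALSIZE 5 G AUTORESIZE NO
CREATE TABLESPACE USER2 INITIALSIZE 500 M AUTORESIZE YES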
ALTER TABLESPACE
ALTER TABLESPACE can be used to change table space
characteristics:
For all types of table space management, you can adjust:
Bufferpool assigned
Prefetch size
Overhead and Transfer rate I/O Costs
File System Caching option
Notes:
The ALTER TABLESPACE statement is used to modify an existing table space in the
following ways:
Add a container to, or drop a container from a DMS table space; that is, a table space
created with the MANAGED BY DATABASE option.
Modify the size of a container in a DMS table space.
Lower the high water mark for a DMS table space through extent movement.
Add a container to an SMS table space on a database partition that currently has no
containers.
Modify the PREFETCHSIZE setting for a table space.
Modify the BUFFERPOOL used for tables in the table space.
Modify the OVERHEAD setting for a table space.
Modify the TRANSFERRATE setting for a table space.
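For example (the table space name and values are hypothetical):
ALTER TABLESPACE TSP01 PREFETCHSIZE 64
ALTER TABLESPACE TSP01 OVERHEAD 4.0 TRANSFERRATE 0.04
ALTER TABLESPACE TSP01 NO FILE SYSTEM CACHING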
Notes:
New Automatic Storage paths can be added to a database using the ALTER STOGROUP
statement.
For example:
alter stogroup APP_DATA add '/dbauto/path3', '/dbauto/path4'
Automatic Storage can be enabled in an existing database by creating a new storage
group, which will become the default storage group for the database.
When new paths are added to a storage group, existing Automatic Storage table
spaces in that group will grow using the previously assigned storage paths until the
remaining space is used.
Newly created table spaces will begin to use all defined paths.
Individual table spaces can be altered using the REBALANCE option to spread data over all
storage paths.
Storage paths can also be removed using ALTER STOGROUP.
DEFINER  TBSPACEID  TBSPACETYPE  DATATYPE  SGNAME
-------  ---------  -----------  --------  -----------
SYSIBM   0          D            A         IBMSTOGROUP
INST28   9          D            A         IBMSTOGROUP
INST28   3          D            L         IBMSTOGROUP
SYSIBM   2          D            L         IBMSTOGROUP
SYSIBM   1          S            T         IBMSTOGROUP
INST28   7          D            L         APP_DATA
INST28   8          D            L         APP_DATA
INST28   4          D            A         -

11 record(s) selected.
Notes:
The sample query and result show how the view SYSCAT.TABLESPACES can be used to
get information about the table spaces in a database.
Notice that the three default table spaces are listed with a definer of SYSIBM. The
SGNAME column shows the storage group used for automatic storage table spaces.
The column TBSPACETYPE is the type of table space:
D = Database-managed space
S = System-managed space
The column DATATYPE shows the type of data that can be stored in this table space:
A = All types of permanent data; regular table space
L = All types of permanent data; large table space
T = System temporary tables only
U = Created temporary tables or declared temporary tables only
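A result like the one shown can be produced with a query along these lines:
SELECT VARCHAR(DEFINER, 10) AS DEFINER,
       TBSPACEID, TBSPACETYPE, DATATYPE,
       VARCHAR(SGNAME, 12) AS SGNAME
FROM SYSCAT.TABLESPACES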
Tablespace Configuration:
Address     Id  Type  Content  PageSz  ..  Name
0x9396EEC0  0   DMS   Regular  4096    ..  SYSCATSPACE
0x93977C90  1   SMS   SysTmp   4096    ..  TEMPSPACE1
0x93982990  2   DMS   Large    4096    ..  USERSPACE1
0x95727700  3   DMS   Large    4096    ..  SYSTOOLSPACE
0x957B92B0  4   DMS   Regular  4096    ..  TSP01
0x957C58B0  5   DMS   Large    4096    ..  TSP02
0x957D8760  6   DMS   Large    4096    ..  TSP03
0x957E0EC0  7   DMS   Large    4096    ..  TSP04
0x957E9620  8   DMS   Large    4096    ..  TSP05
0x957F1D80  9   DMS   Regular  4096    ..  TSP06
0x957FA4E0  10  SMS   Regular  4096    ..  SMS01

Tablespace Statistics:
Address     Id  TotalPgs  UsablePgs  UsedPgs  PndFreePgs  FreePgs  HWM
0x9396EEC0  0   24576     24572      20696    ..          3876     20696
0x93977C90  1   ..
Copyright IBM Corporation 2012
Figure 4-29. Using the db2pd command to list tablespace status and statistics
Notes:
The db2pd command can be used to list the current status and usage of the table spaces
for an active database. The db2pd command would be run on the database server by a
user with the system administration authority defined for the DB2 instance.
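For example (the database name is hypothetical):
db2pd -db sample -tablespaces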
Example
To list table spaces ordered by number of physical reads from table space
containers.
SELECT varchar(tbsp_name, 30) as tbsp_name,
member,
tbsp_type,
pool_data_p_reads
FROM TABLE(MON_GET_TABLESPACE('',-2)) AS t
ORDER BY pool_data_p_reads DESC
TBSP_NAME                      MEMBER TBSP_TYPE  POOL_DATA_P_READS
------------------------------ ------ ---------- -----------------
SYSCATSPACE                    0      DMS        79
USERSPACE1                     0      DMS        34
TEMPSPACE1                     0      SMS        0
Notes:
The MON_GET_TABLESPACE table function returns one row of data per database table
space and per database member. No aggregation across database members is performed.
However, aggregation can be achieved through SQL queries.
Metrics collected by this function are controlled at the database level using the
mon_obj_metrics configuration parameter. By default, metrics collection is enabled.
The term member refers to the multiple members in a DB2 pureScale clustered database.
In a standard DB2 database, there is one database member, member 0.
Figure 4-31.
Notes:
SYSTOOLSPACE and SYSTOOLSTMPSPACE table spaces
The SYSTOOLSPACE table space is a user data table space used by the DB2
administration tools and some SQL administrative routines for storing historical data and
configuration information.
The following tools and SQL administrative routines use the SYSTOOLSPACE table space:
ADMIN_COPY_SCHEMA procedure
ADMIN_DROP_SCHEMA procedure
ADMIN_MOVE_TABLE procedure
ADMIN_MOVE_TABLE_UTIL procedure
Administrative task scheduler
ALTOBJ procedure
Automatic Reorganization (including the db.tb_reorg_req health indicator)
Notes:
A buffer pool is an area of main memory that has been allocated by the database manager
for the purpose of caching table and index data as it is read from disk. Every DB2 database
must have a buffer pool.
Each new database has a default buffer pool defined, called IBMDEFAULTBP. Additional
buffer pools can be created, dropped, and modified, using the CREATE BUFFERPOOL,
DROP BUFFERPOOL, and ALTER BUFFERPOOL statements.
The SYSCAT.BUFFERPOOLS catalog view accesses the information for the buffer pools
defined in the database.
If there is sufficient memory available, the buffer pool can become active immediately. By
default new buffer pools are created using the IMMEDIATE keyword, and on most
platforms, the database manager is able to acquire more memory. The expected return is
successful memory allocation. In cases where the database manager is unable to allocate
the extra memory, the database manager returns a warning condition stating that the buffer
pool could not be started. This warning is provided on the subsequent database startup.
For immediate requests, you do not need to restart the database. When this statement is
committed, the buffer pool is reflected in the system catalog tables, but the buffer pool does
not become active until the next time the database is started.
If you issue a CREATE BUFFERPOOL DEFERRED, the buffer pool is not immediately
activated; instead, it is created at the next database startup. Until the database is restarted,
any new table spaces use an existing buffer pool, even if that table space is created to
explicitly use the deferred buffer pool.
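For example, a buffer pool for 8 KB table spaces might be created and later resized (the name and sizes are hypothetical):
CREATE BUFFERPOOL BP8K IMMEDIATE SIZE 25000 PAGESIZE 8 K
ALTER BUFFERPOOL BP8K SIZE 50000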
Notes:
Advantages of large buffer pools
Large buffer pools provide the following advantages:
They enable frequently requested data pages to be kept in the buffer pool, which allows
quicker access. Fewer I/O operations can reduce I/O contention, thereby providing
better response time and reducing the processor resource needed for I/O operations.
They provide the opportunity to achieve higher transaction rates with the same
response time.
They prevent I/O contention for frequently used disk storage devices, such as those
that store the catalog tables and frequently referenced user tables and indexes. Sorts
required by queries also benefit from reduced I/O contention on the disk storage
devices that contain temporary table spaces.
Advantages of many buffer pools
Use only a single buffer pool if any of the following conditions apply to your system:
- The total buffer pool space is less than 10 000 4-KB pages
- Persons with the application knowledge to perform specialized tuning are not
available
- You are working on a test system
In all other circumstances, and for the following reasons, consider using more than one
buffer pool:
- Temporary table spaces can be assigned to a separate buffer pool to provide better
performance for queries (especially sort-intensive queries) that require temporary
storage.
- If data must be accessed repeatedly and quickly by many short update-transaction
applications, consider assigning the table space that contains the data to a separate
buffer pool. If this buffer pool is sized appropriately, its pages have a better chance
of being found, contributing to a lower response time and a lower transaction cost.
- You can isolate data into separate buffer pools to favor certain applications, data,
and indexes. For example, you might want to put tables and indexes that are
updated frequently into a buffer pool that is separate from those tables and indexes
that are frequently queried but infrequently updated.
- You can use smaller buffer pools for data that is accessed by seldom-used
applications, especially applications that require very random access into a very
large table. In such cases, data need not be kept in the buffer pool for longer than a
single query. It is better to keep a small buffer pool for this type of data, and to free
the extra memory for other buffer pools.
...
Database alias                       = TESTDB
Database name                        = OURDB
Local database directory             = /database
Database release level               = a.00
Comment                              = Test Database
Directory entry type                 = Indirect
Catalog database partition number    = 4
Notes:
The DB2 command line processor can be used to change directory entries in the system
database directory for a DB2 instance.
Within the database directory a database alias must be unique. The database name does
not have to be unique. A reason to catalog a local database is to change its alias, which is
the name by which users and programs identify the database. When a database is created,
its alias is the same as its name. The name by which users and programs refer to a
database can be changed without having to DROP and re-CREATE the database.
You might want a database cataloged with more than one alias name.
The UNCATALOG command can be used to remove a database alias.
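For example, to give an existing local database OURDB an additional alias and later remove it (the alias and path follow the visual above):
db2 catalog database ourdb as testdb on /database
db2 uncatalog database testdb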
Database configuration
Notes:
The database configuration file is created when a DB2 database is created. The
parameters it contains affect resources at the database level. Values for many of these
parameters can be changed from the default values to improve performance or support
different application requirements.
The DB2 CLP or tools like IBM Data Studio might be used to get a listing of the database
configuration file. The GET DB CFG command will list the parameters contained in the
database configuration file.
The updateable parameters in the database configuration file can be changed using the
UPDATE DB CFG command, or using the IBM Data Studio tool.
The database territory, code set, country code, and code page are recorded in the
database configuration file. However, these parameters cannot be changed.
The db2pd command option -dbcfg can also be used to get the current options for an active
database. This shows the value active in memory and the current value in the configuration
disk file, which may take effect on the next database restart.
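A typical sequence to review and change a database configuration parameter might look like the following sketch; the database name and the new LOGBUFSZ value are illustrative:

```
db2 get db cfg for testdb
db2 update db cfg for testdb using LOGBUFSZ 512
db2pd -db testdb -dbcfg
```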
Copyright IBM Corp. 1999, 2012
DEACTIVATE DATABASE
Databases initialized by ACTIVATE DATABASE can be shut down
using the DEACTIVATE DATABASE command, or using the
db2stop command.
db2 deactivate db <db_name>
Notes:
If a database has not been started and a CONNECT TO (or an implicit connect) is issued in an application, the application must wait while the database manager starts the required database before it can do any work with that database. However, once the database is started, other applications can simply connect and use it without spending time on its startup.
Database administrators can use ACTIVATE DATABASE to start up selected databases.
This eliminates any application time spent on database initialization.
Databases initialized by ACTIVATE DATABASE can be shut down using the DEACTIVATE
DATABASE command, or using the db2stop command.
If a database was started by a CONNECT TO (or an implicit connect) and subsequently an
ACTIVATE DATABASE is issued for that same database, then DEACTIVATE DATABASE
must be used to shut down that database. If ACTIVATE DATABASE was not used to start
the database, the database will shut down when the last application disconnects.
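For example, a DBA could start the database TESTDB before the first application connects, and shut it down later (the database name is illustrative):

```
db2 activate db testdb
db2 deactivate db testdb
```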
Unit summary
Having completed this unit, you should be able to:
Review specifics of creating a database
Explore the System Catalog tables and views
Check and update Database configuration parameter settings
Compare DMS, SMS and Automatic Storage managed table spaces
Describe how to set up and manage a DB2 database with Automatic Storage
enabled
Define Storage Groups to manage databases with different classes of
storage available
Differentiate between table spaces, containers, extents, and pages
Create and alter table spaces
Create buffer pools to handle multiple page sizes or improve table access
efficiency
Use DB2 commands and SQL statements to display current table space
statistics and status information
Copyright IBM Corporation 2012
Notes:
Student exercise
Notes:
References
Command Reference
Database Administration Concepts and Configuration Reference
Unit objectives
After completing this unit, you should be able to:
Describe the DB2 object hierarchy
Create the following objects:
Schema, Table, View, Alias, Index
Notes:
[Figure: One instance containing two databases. Each database has its own catalog tables in SYSCATSPACE, its own log, and its own DB configuration file. In Database 1, Table1 and Index1 are in table space TSSMS1; Table2 and Table3 are in TSDMSLRG1; two indexes for Table3 are in TSDMSREG2; the BLOBs of Table3 are in TSDMSLRG3; View1 is defined on its tables. In Database 2, Table2 is in USERSPACE1, with views View1 and View2.]
Notes:
Each DB2 instance has its own database manager configuration file. Its global parameters
affect the system resources allocated to DB2 for an individual instance. Its parameters can
be changed from the system default values to improve performance or increase capacity,
depending on the workstation configuration.
Each instance might have multiple databases. A relational database presents data as a
collection of tables. A table consists of a defined number of columns and any number of
rows. Each database includes a set of system catalog tables, which describe the logical
and physical structure of the data (like a table or view), or contain statistics of the data
distribution; a configuration file containing the parameter values allocated for the database;
and a recovery log with ongoing transactions and archival transactions.
Each table might have multiple indexes. Indexes might provide a faster way to access table
data. Each table might have multiple views. Views might be associated with more than one
base table.
The physical objects in a database are assigned to table spaces. When creating a table,
you can decide to have certain objects such as indexes and large object (LOB) data kept
separately from the rest of the table data. By default, all objects referencing a table reside
in the same table space where the table itself resides. A table space can also be spread
over one or more physical storage devices.
In the visual, two databases are shown.
For Database 1:
The system catalog tables are in table space SYSCATSPACE.
Table 1 and its one Index are in an SMS table space named TSSMS1.
Table 2 and Table 3 are both assigned to the Large DMS table space TSDMSLRG1.
Two Indexes for Table 3 are assigned to the Regular table space TSDMSREG2.
The Large Object data columns from Table 3 are assigned to the Large table space
TSDMSLRG3.
For Database 2:
The system catalog tables are in table space SYSCATSPACE.
Table 2 is assigned to the table space USERSPACE1. By default, USERSPACE1 would be
an Automatic Storage managed table space.
Create Schema
A schema is a collection of named objects
A schema is also a name qualifier
The schema names 'INTERNAL' and 'EXTERNAL' make it easy to distinguish two
different SALES tables (INTERNAL.SALES, EXTERNAL.SALES).
The schema name provides a way to group those objects logically, providing a
way to use the same natural name for several objects, and to prevent ambiguous
references to those objects.
Schemas also enable multiple applications to store data in a single database
without encountering namespace collisions.
A schema can contain tables, views, nicknames, triggers, functions, packages,
and other objects.
A schema is itself a database object.
The schema can be explicitly created using the CREATE SCHEMA statement,
with the current user or a specified authorization ID recorded as the schema
owner.
CREATE SCHEMA PAYROLL
AUTHORIZATION DB2ADMIN DATA CAPTURE NONE ;
A schema may also be implicitly created when another object is created, if the
user has IMPLICIT_SCHEMA authority
A schema can be used to set a default DATA CAPTURE option for objects
Notes:
When the schema is explicitly created with the CREATE SCHEMA statement, the schema
owner is granted CREATEIN, DROPIN, ALTERIN privileges on the schema with the ability
to grant these privileges to other users.
A schema name or authorization name cannot begin with SYS.
While organizing your data into tables, it might also be beneficial to group tables (and other
related objects) together. This is done by defining a schema. Information about the schema
is kept in the system catalog tables of the database to which you are connected. As other
objects are created, they can be placed within this schema.
An authorization ID that holds DBADM authority can create a schema with any valid
schema name or authorization name. Any ID can explicitly create a schema that matches
the authorization ID of the statement.
You can create a schema and include certain SQL statements with it (CREATE TABLE,
excluding typed tables and materialized query tables; CREATE VIEW statement, excluding
typed views; CREATE INDEX statement; COMMENT statement; GRANT statement). For
example, the following is a single statement:
CREATE SCHEMA pers
CREATE TABLE ORG (
deptnumb SMALLINT NOT NULL,
deptname VARCHAR(14),
manager SMALLINT,
division VARCHAR(10),
location VARCHAR(13),
CONSTRAINT pkeydno PRIMARY KEY (deptnumb),
CONSTRAINT fkeymgr FOREIGN KEY (manager)
REFERENCES staff (id)
)
CREATE TABLE STAFF (
id SMALLINT NOT NULL,
name VARCHAR(9),
dept SMALLINT,
job VARCHAR(5),
years SMALLINT,
salary DECIMAL(7,2),
comm DECIMAL(7,2),
CONSTRAINT pkeyid PRIMARY KEY (id),
CONSTRAINT fkeydno FOREIGN KEY (dept)
REFERENCES org (deptnumb)
)
Thus, you can use a single statement to create two tables that are dependent on each
other, rather than having to create the first with Primary Key, the second with Primary and
Foreign Key, and then alter the first to add Foreign Key.
Unqualified object names in any SQL statement within the CREATE SCHEMA statement
are implicitly qualified by the name of the created schema.
Information
Starting with DB2 10.1, you can use the DATA CAPTURE attribute with the CREATE
SCHEMA statement or set the dft_schemas_dcc database configuration parameter to ON,
to have all subsequently created tables inherit the DATA CAPTURE CHANGES property.
Notes:
When accessing data within DB2, unqualified references will be implicitly qualified with the
authorization ID that was used to connect to the database. You can override this by setting
the CURRENT SCHEMA. The initial value of the CURRENT SCHEMA special register is
equivalent to USER.
The example on the graphic shows that a user KEITH is connecting to the database. If
Keith issues a select against the EMPLOYEE table, the table that will be accessed will be
KEITH.EMPLOYEE. If he sets his current schema to PAYROLL, then a select against the
EMPLOYEE table will be directed against the PAYROLL.EMPLOYEE table.
Alternative syntax includes:
SET CURRENT SCHEMA = 'PAYROLL'
SET SCHEMA 'PAYROLL'
SET CURRENT SQLID 'PAYROLL'
Note that the use of the = is optional in all of these statements.
The value of the CURRENT SCHEMA special register is used as the schema name in all
dynamic SQL statements where an unqualified reference to a database object exists.
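Continuing the example above, after connecting as user KEITH (database and table names as used on the slide):

```
SELECT * FROM employee;            -- resolves to KEITH.EMPLOYEE
SET CURRENT SCHEMA = 'PAYROLL';
SELECT * FROM employee;            -- now resolves to PAYROLL.EMPLOYEE
```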
Notes:
A table consists of data logically arranged in columns and rows. DB2 supports page sizes
of 4, 8, 16, and 32 KB. The number of columns, maximum row length, and maximum table
size vary by page size. Regular table spaces use a 4-byte Row ID, which allows 16 million
pages with up to 255 rows per page. Non-temporary SMS-managed table spaces are all
Regular table spaces. DMS-managed table spaces can be either Regular or Large table
spaces. Large table spaces use a 6-byte Row ID, and can store up to 64TB of data.
A table with a 4K page could be as large as 64 GB in a regular table space or 8 Terabytes
in a Large table space. Using a 32K page size would allow a table in a Regular table space
to be as large as 512GB or 64TB in a Large table space.
Tables are created using the SQL statement CREATE TABLE.
If no schema name is supplied with the table name, the value of the CURRENT SCHEMA
special register is used as the schema name.
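A minimal CREATE TABLE sketch, with an explicit schema qualifier and target table space; all names are illustrative:

```
CREATE TABLE payroll.timecard (
    empno   SMALLINT NOT NULL,
    workday DATE     NOT NULL,
    hours   DECIMAL(4,2)
) IN dms03;
```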
Table design concepts
When designing tables, you must be familiar with some related concepts.
Notes:
A declared temporary table is a temporary table that is only accessible to SQL statements
that are issued by the application which created the temporary table. A declared temporary
table does not persist beyond the duration of the connection of the application to the
database.
The term global for these temporary tables indicates that all subroutines of a program can see the table and its data, but the tables are "local" in the sense that each user's data in the table is only ever seen by that specific user.
Use declared temporary tables to potentially improve the performance of your applications.
When you create a declared temporary table, DB2 does not insert an entry into the system
catalog tables, and, therefore, your server does not suffer from catalog contention issues.
In comparison to regular tables, DB2 does not lock declared temporary tables or their rows.
If your current application creates tables to process large amounts of data and drops those
tables once the application has finished manipulating that data, consider using declared
temporary tables instead of regular tables.
To use a declared temporary table, perform the following steps:
Notes:
The DECLARE GLOBAL TEMPORARY TABLE statement defines a temporary table for the
current session. The declared temporary table description does not appear in the system
catalog. It is not persistent and cannot be shared with other sessions. Each session that
defines a declared global temporary table of the same name has its own unique description
of the temporary table. When the session terminates, the rows of the table are deleted, and
the description of the temporary table is dropped.
The privileges held by the authorization ID of the statement must include at least one of the
following:
USE privilege on the USER TEMPORARY table space
DBADM authority
SYSADM authority
SYSCTRL authority
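A sketch of declaring and using such a table; it assumes a USER TEMPORARY table space already exists, and the table and column names are illustrative. Note the required SESSION qualifier:

```
DECLARE GLOBAL TEMPORARY TABLE session.work_items (
    item_id INT,
    descr   VARCHAR(40)
) ON COMMIT PRESERVE ROWS NOT LOGGED;

INSERT INTO session.work_items VALUES (1, 'first item');
SELECT * FROM session.work_items;
```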
When defining a table using LIKE or a fullselect, the privileges held by the authorization ID
of the statement must also include at least one of the following on each identified table or
view:
SELECT privilege on the table or view
CONTROL privilege on the table or view
DATAACCESS authority
Notes:
The CREATE GLOBAL TEMPORARY TABLE statement creates a description of a
temporary table at the current server. Each session that selects from a created temporary
table retrieves only rows that the same session has inserted. When the session terminates,
the rows of the table associated with the session are deleted.
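A sketch of creating such a table; the table, column, and table space names are illustrative:

```
CREATE GLOBAL TEMPORARY TABLE gtt_orders (
    order_id INT,
    amount   DECIMAL(9,2)
) ON COMMIT DELETE ROWS IN tempspace_user;
```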
The privileges held by the authorization ID of the statement must include either DBADM
authority, or CREATETAB authority in combination with further authorization, as described
here:
One of the following privileges and authorities:
USE privilege on the table space
SYSADM
SYSCTRL
Plus one of these privileges and authorities:
Table partitioning
Data organization scheme in which table data is divided across multiple
storage objects called data partitions or ranges:
Each data partition is stored separately
These storage objects can be in different table spaces, in the same table
space, or a combination of both
Benefits:
Easier roll-in and roll-out of table data
Allows large data roll-in (ATTACH) or roll-out (DETACH) with a minimal impact
to table availability for applications
Supports very large tables
Indexes can be either partitioned (local) or non-partitioned (global)
Table and Index scans can use partition elimination when access includes
predicates for the defined ranges
Different ranges can be assigned to table spaces in different storage groups for
current data versus less used historical data
Notes:
Partitioned tables
Partitioned tables use a data organization scheme in which table data is divided across
multiple storage objects, called data partitions or ranges, according to values in one or
more table partitioning key columns of the table.
A data partition or range is part of a table, containing a subset of rows of a table, and stored
separately from other sets of rows. Data from a given table is partitioned into multiple data
partitions or ranges based on the specifications provided in the PARTITION BY clause of
the CREATE TABLE statement. These data partitions or ranges can be in different table
spaces, in the same table space, or a combination of both. If a table is created using the
PARTITION BY clause, the table is partitioned.
All of the table spaces specified must have the same page size, extent size, storage
mechanism (DMS or SMS), and type (REGULAR or LARGE), and all of the table spaces
must be in the same database partition group.
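A sketch of a range-partitioned table on a BRANCH_ID column, similar to the slide example; table, column, and table space names are illustrative:

```
CREATE TABLE sales.history (
    branch_id INT NOT NULL,
    sale_date DATE,
    amount    DECIMAL(11,2)
)
PARTITION BY RANGE (branch_id)
   (STARTING FROM 1  ENDING AT 10 IN tsp01,
    STARTING FROM 11 ENDING AT 20 IN tsp02);
```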
A partitioned table simplifies the rolling in and rolling out of table data and a partitioned
table can contain vastly more data than an ordinary table. You can create a partitioned
table with a maximum of 32,767 data partitions. Data partitions can be added to, attached
to, and detached from a partitioned table, and you can store multiple data partition ranges
from a table in one table space.
Indexes on a partitioned table can be partitioned or non-partitioned. Both non-partitioned
and partitioned indexes can exist together on a single partitioned table.
In this example, the data objects and index objects for each data range are stored in different table
spaces
The table spaces used must be defined with the same options, such as type of management, extent
size and page size
Notes:
The example shows a range partitioned table based on one column, BRANCH_ID.
The CREATE TABLE statement lists four data partitions, each using one table space for
the data object and another table space for the partitioned indexes on this table.
Once defined, a range cannot be altered. New empty ranges can be added using the
ALTER TABLE ADD option. A new range with data already loaded can be added to the
table using the ALTER TABLE ATTACH statement. A range can be removed from the table
using the ALTER TABLE DETACH statement.
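Sketches of these three ALTER TABLE forms, with illustrative names and ranges:

```
ALTER TABLE sales.history ADD PARTITION
      STARTING FROM 21 ENDING AT 30 IN tsp03;

ALTER TABLE sales.history ATTACH PARTITION
      STARTING FROM 31 ENDING AT 40 FROM TABLE sales.newdata;

ALTER TABLE sales.history DETACH PARTITION part0
      INTO TABLE sales.old_part0;
```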
Sample rows from the slide's query result:

       EMPNAME           SALARY
------ ----------------- ----------
    10 John Smith        1000000.00
    20 Jane Johnson       300000.00
    30 Robert Appleton    250000.00
...
Notes:
A view is an alternate representation of data from one or more tables. It can include some
or all of the columns contained in the tables on which it is defined.
To create a view, you must be connected to a database either implicitly or explicitly
and the base tables or views upon which the view is based must previously exist.
Views can be created using the SQL statement CREATE VIEW.
You must have SYSADM, DBADM, CONTROL, or SELECT privilege on each base table to
create a view. Privileges on the base tables granted to groups are not checked to
determine authorization to create a view. However, if the base table has SELECT
privilege given to PUBLIC, a view could be created. In addition, you must have the
IMPLICIT_SCHEMA privilege or the CREATEIN privilege on the schema used.
Views might be used to exclude users from seeing certain data: rows or columns. The
WHERE clause used in the CREATE VIEW statement determines which rows might be
viewed by the user. The columns listed in the AS SELECT clause determine which columns
might be viewed by the user.
5-20 DB2 10 for LUW: Basic Admin for AIX
Views can also be used to increase the access rights to data for a special user group.
Views might be used to improve performance. If a difficult SQL statement is to be used by
users, it might be advantageous to create a view that is coded to utilize an index, or to
ensure that a join is correctly coded.
Data for a view is not separately stored. The data is stored in the base tables.
When an object is dropped, views can become inoperative if they are dependent on that
object. To recover an inoperative view, determine the SQL statement that was initially used
to create the view. This information can be obtained from the SYSCAT.VIEWS.TEXT
column. Recreate the view by using the CREATE VIEW statement with the same view
name. Use the GRANT statement to regrant all privileges that were previously granted on
the view. If you do not want to recover an inoperative view, you can explicitly drop it with the
DROP VIEW statement.
An inoperative view only has entries in the SYSCAT.TABLES and SYSCAT.VIEWS catalog
views. All entries in the SYSCAT.VIEWDEP, SYSCAT.TABAUTH, and SYSCAT.COLUMNS
catalog views are removed.
CREATE VIEW view-name (column-name { ,column-name }) AS fullselect
{ WITH [ CASCADED | LOCAL ] CHECK OPTION }
WITH CHECK OPTION specifies the constraint that every row that is inserted or updated
through the view must conform to the definition of the view. WITH CHECK OPTION must
not be specified if the view is read-only. If WITH CHECK OPTION is specified for an
updateable view that does not allow inserts, then the constraint applies to update only. If
WITH CHECK OPTION is omitted, the definition of the view is not used in the checking of
any insert or update operations that use the view. Some checking might still occur during
insert or update operations if the view is directly or indirectly dependent on another view
that includes WITH CHECK OPTION.
CASCADED causes the constraints of all dependent views to also be applied.
LOCAL causes the constraints of only this view to be applied.
A view can be defined on a view.
A read-only view cannot be the object of an INSERT, UPDATE, or DELETE statement.
For more information, refer to the SQL Reference manual.
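A sketch combining several of the points above: a view restricted by a WHERE clause and protected by WITH CHECK OPTION (all names and values are illustrative):

```
CREATE VIEW payroll.execs (empno, empname, salary)
    AS SELECT id, name, salary
       FROM payroll.staff
       WHERE salary > 200000
    WITH CHECK OPTION;

-- This insert fails: the row would violate the view's WHERE clause
INSERT INTO payroll.execs VALUES (40, 'Low Earner', 50000.00);
```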
Notes:
The CREATE ALIAS statement defines an alias for a module, nickname, sequence, table,
view, or another alias. Aliases are also known as synonyms.
The keyword PUBLIC is used to create a public alias (also known as a public synonym).
If the keyword PUBLIC is not used, the type of alias is a private alias (also known as a
private synonym).
The definition of the newly created table alias is stored in SYSCAT.TABLES. The
definition of the newly created module alias is stored in SYSCAT.MODULES. The
definition of the newly created sequence alias is stored in SYSCAT.SEQUENCES.
An alias can be defined for an object that does not exist at the time of the definition. If it
does not exist, a warning is issued (SQLSTATE 01522). However, the referenced object
must exist when a SQL statement containing the alias is compiled, otherwise an error is
issued (SQLSTATE 52004).
An alias can be defined to refer to another alias as part of an alias chain but this chain is
subject to the same restrictions as a single alias when used in an SQL statement. An
alias chain is resolved in the same way as a single alias. If an alias used in a statement
in a package, an SQL routine, a trigger, the default expression for a global variable, or a
view definition points to an alias chain, then a dependency is recorded for the package,
SQL routine, trigger, global variable, or view on each alias in the chain. An alias cannot
refer to itself in an alias chain and such a cycle is detected at alias definition time
(SQLSTATE 42916).
Resolving an unqualified alias name: When resolving an unqualified name, private
aliases are considered before public aliases.
Conservative binding for public aliases: If a public alias is used in a statement in a
package, an SQL routine, a trigger, the default expression for a global variable, or a
view definition, the public alias will continue to be used by these objects regardless of
what other object with the same name is created subsequently.
Creating an alias with a schema name that does not already exist will result in the
implicit creation of that schema provided the authorization ID of the statement has
IMPLICIT_SCHEMA authority. The schema owner is SYSIBM. The CREATEIN privilege
on the schema is granted to PUBLIC.
Syntax alternatives: For compatibility with previous versions of DB2 and with other database products, SYNONYM can be specified in place of ALIAS.
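Sketches of the alias forms discussed above, with illustrative names:

```
CREATE ALIAS mysales FOR internal.sales;
CREATE PUBLIC ALIAS sales_pub FOR internal.sales;
CREATE SYNONYM sales_syn FOR internal.sales;   -- SYNONYM accepted in place of ALIAS
```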
Notes:
The visual shows several examples of statements used to create indexes.
Indexes can be created for many reasons, including: to allow queries to run more
efficiently; to order the rows of a table in ascending or descending sequence according to
the values in a column; to enforce constraints such as uniqueness on index keys. You can
use the CREATE INDEX statement or the db2advis Design Advisor command to create the
indexes.
To create an index from the command line, use the CREATE INDEX statement.
For example:
CREATE UNIQUE INDEX EMP_IX
ON EMPLOYEE(EMPNO)
INCLUDE(FIRSTNAME, JOB)
The INCLUDE clause, applicable only on unique indexes, specifies additional columns to
be appended to the set of index key columns. Any columns included with this clause are
not used to enforce uniqueness. These included columns can improve the performance of
some queries through index only access. This option might:
Eliminate the need to access data pages for more queries
Eliminate redundant indexes
If SELECT EMPNO, FIRSTNAME, JOB FROM EMPLOYEE is issued to the table on which
this index resides, all of the required data can be retrieved from the index without reading
data pages. This improves performance.
When a row is deleted or updated, the index keys are marked as deleted and are not
physically removed from a page until cleanup is done some time after the deletion or
update is committed. These keys are referred to as pseudo-deleted keys. Such a cleanup
might be done by a subsequent transaction which is changing the page where the key is
marked deleted. Clean up of pseudo-deleted keys can be explicitly triggered by using the
CLEANUP ONLY ALL parameter in the REORG INDEXES command.
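For example, pseudo-deleted key cleanup for a table's indexes could be triggered while the table stays available for updates (the table name is illustrative):

```
REORG INDEXES ALL FOR TABLE payroll.employee
      ALLOW WRITE ACCESS CLEANUP ONLY ALL
```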
[Figure: Referential integrity between a parent table and a dependent table. The parent table, with column DEPTNAME, has PRIMARY KEY = DEPT. The dependent Employee table, with columns EMPNO, NAME, and WKDEPT, has FOREIGN KEY = WKDEPT, shown with a RESTRICT delete rule.]
Notes:
Referential integrity is imposed by adding foreign key (or referential) constraints to table
and column definitions, and to create an index on all the foreign key columns. Once the
index and foreign key constraints are defined, changes to the data within the tables and
columns is checked against the defined constraint. Completion of the requested action
depends on the result of the constraint checking.
Referential constraints are established with the FOREIGN KEY clause, and the
REFERENCES clause in the CREATE TABLE or ALTER TABLE statements. There are
effects from a referential constraint on a typed table or to a parent table that is a typed table
that you should consider before creating a referential constraint.
The identification of foreign keys enforces constraints on the values within the rows of a
table or between the rows of two tables. The database manager checks the constraints
specified in a table definition and maintains the relationships accordingly. The goal is to
maintain integrity whenever one database object references another, without performance
degradation.
Notes:
In this example, primary and foreign keys are used for a department number column.
For the EMPLOYEE table, the column name is WORKDEPT, and for the DEPARTMENT
table, the name is DEPTNO. The relationship between these two tables is defined by the
following constraints:
There is only one department number for each employee in the EMPLOYEE table, and
that number exists in the DEPARTMENT table.
Each row in the EMPLOYEE table is related to no more than one row in the
DEPARTMENT table. There is a unique relationship between the tables.
Each row in the EMPLOYEE table that has a non-null value for WORKDEPT is related
to a row in the DEPTNO column of the DEPARTMENT table.
The DEPARTMENT table is the parent table, and the EMPLOYEE table is the
dependent table.
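The relationship described above could be added with an ALTER TABLE statement such as this sketch; the constraint name fk_workdept and the RESTRICT delete rule are illustrative choices:

```
ALTER TABLE employee
  ADD CONSTRAINT fk_workdept
      FOREIGN KEY (workdept)
      REFERENCES department (deptno)
      ON DELETE RESTRICT;
```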
Notes:
A unique constraint is the rule that the values of a key are valid only if they are unique
within the table. Unique constraints are optional and can be defined in the CREATE TABLE
or ALTER TABLE statement using the PRIMARY KEY clause or the UNIQUE clause. The
columns specified as a unique constraint must be defined as NOT NULL. A unique index is
used by the database manager to enforce the uniqueness of the key during changes to the
columns of the unique constraint.
A table can have an arbitrary number of unique constraints, with at most one unique
constraint defined as a Primary Key. A table cannot have more than one unique constraint
on the same set of columns.
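A sketch showing one primary key plus an additional unique constraint on a different column set (names are illustrative):

```
CREATE TABLE department (
    deptno   SMALLINT    NOT NULL,
    deptname VARCHAR(30) NOT NULL,
    CONSTRAINT pk_dept  PRIMARY KEY (deptno),
    CONSTRAINT uq_dname UNIQUE (deptname)
);
```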
A unique constraint that is referenced by the Foreign Key of a referential constraint is called
a parent key. When a unique constraint is defined in a CREATE TABLE statement, a
unique index is automatically created by the database manager and designated as a
primary or unique system-required index. When a unique constraint is defined in an ALTER
TABLE statement and an index exists on the same columns, that index is designated as
unique and system-required. If such an index does not exist, the unique index is
Copyright IBM Corp. 1999, 2012
[Slide: CREATE TABLE example defining check constraints on SMALLINT columns CANADA_SL and US_SL]
Notes:
Column constraints can be defined using the SQL statements CREATE TABLE or ALTER
TABLE. The constraint name cannot be the same as any other constraint specified within
that statement, and must be unique within the table.
If the ALTER TABLE statement is used, existing data is checked against the new constraint
before the ALTER statement succeeds. If any rows exist that would violate the constraint,
the ALTER TABLE statement fails.
To add constraints to a large table, it is more efficient to put the table into the set integrity
pending state, add the constraints, and then check the table for a consolidated list of
violation rows. Use the SET INTEGRITY statement to explicitly set the set integrity pending
state. If the table is a parent table, set integrity pending is implicitly set for all dependent
and descendent tables.
When a table check constraint is added, packages that insert or update the table might be
marked as invalid.
The definition of the constraint allows basic WHERE clause constructs to be used:
Values can only be inserted or updated in the column if the result of the constraint test
resolves to True.
The definition of the constraint does not support:
Subqueries
Column functions
Functions that are not deterministic
Functions defined to have an external action
User-defined functions defined with either CONTAINS SQL or READS SQL DATA
Host variables or parameter markers
Special registers (such as CURRENT DATE)
References to generated columns other than the identity column
The constraint can be explicitly named when it is defined. If it is not named, DB2 will create
a name.
The ALTER TABLE statement can also be used to DROP constraints. For example:
ALTER TABLE SPEED_LIMITS DROP CONSTRAINT SPEED65
or
ALTER TABLE SPEED_LIMITS DROP CHECK SPEED65
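For completeness, the matching ADD form; the CHECK condition shown is an assumption based on the constraint and column names used in this example:

```sql
-- Add a named table check constraint; existing rows are validated
-- first, and the ALTER TABLE fails if any row violates the rule.
ALTER TABLE SPEED_LIMITS
    ADD CONSTRAINT SPEED65 CHECK (US_SL <= 65);
```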
Creating triggers
A trigger defines a set of actions that are executed with, or triggered by, an INSERT,
UPDATE, or DELETE statement on a specified table or a typed table.
Use triggers to:
Validate input data
Generate a value for a newly inserted row
Read from other tables for cross-referencing purposes
Write to other tables for audit-trail purposes
You can use triggers to support general forms of integrity or business rules. For example, a
trigger can check a customer's credit limit before an order is accepted or update a
summary data table.
Benefits:
Faster application development: Because a trigger is stored in the database, you do not
have to code the actions that it performs in every application.
Easier maintenance: After a trigger is defined, it is automatically invoked when the table
that it is created on is accessed.
Global enforcement of business rules: If a business policy changes, you only need to
change the trigger and not each application program.
A trigger body can include one or more of the following statements: INSERT, searched
UPDATE, searched DELETE, fullselect, SET Variable, and SIGNAL SQLSTATE. The
trigger can be activated before or after the INSERT, UPDATE, or DELETE statement to
which it refers.
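A minimal sketch of the syntax (the tables and the summary logic are illustrative):

```sql
-- AFTER trigger that maintains a summary table; the body is a
-- searched UPDATE and runs once per inserted ORDERS row.
CREATE TRIGGER new_order
    AFTER INSERT ON orders
    REFERENCING NEW AS n
    FOR EACH ROW
    UPDATE order_summary
       SET total_orders = total_orders + 1
     WHERE region = n.region;
```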
Information
Starting in DB2 Version 10.1 the CREATE TRIGGER statement allows more flexibility and
functionality when creating triggers.
Bi-temporal tables
Combine application-period (ATT) and system-period (STT)
capabilities
Copyright IBM Corporation 2012
Figure 5-19.
Notes:
You can use temporal tables to associate time-based state information with your data. Data
in tables that do not use temporal support is deemed to be applicable to the present,
while data in temporal tables can be valid for a period defined by the database system,
user applications, or both.
There are many business needs requiring the storage and maintenance of time-based
data. Without this capability in a database, it is expensive and complex to maintain a
time-focused data support infrastructure. With temporal tables, the database can store and
retrieve time-based data without additional application logic. For example, a database can
store the history of a table (deleted rows or the original values of rows that have been
updated) so you can query the past state of your data. You can also assign a date range to
a row of data to indicate when it is deemed to be valid by your applications or business
rules.
A temporal table records the period when a row is valid. A period is an interval of time that
is defined by two date or time columns in the temporal table. A period contains a begin
column and an end column. The begin column indicates the beginning of the period, and
the end column indicates the end of the period. The beginning value of a period is inclusive,
while the ending value of a period is exclusive. For example, a row with a period from
January 1 to February 1 is valid from January 1 through January 31; February 1 itself is excluded.
[Slide: three-step DDL example — create the travel base table, create the travel_history history table, then add versioning with ALTER TABLE]
Notes:
This visual shows an example of the actual syntax required to configure a System-period
temporal table. The key syntax that is required for the base table (travel in this example) to
configure a system-period temporal table base-history table pair is the definition of the
three columns (sys_start, sys_end, and ts_start) indicated on the CREATE TABLE
statement.
In addition to the column definition, the CREATE TABLE contains the PERIOD
SYSTEM_TIME (sys_start, sys_end) keyword.
Next, the history table associated with the base table must be explicitly created. In this
example, the CREATE TABLE contains the LIKE keyword followed by the base table name
(travel). Another option is to explicitly specify each column and data type for the history
table.
Note
The column names and data types must match the base table.
For the example shown, the history table (travel_history) is created in a separate
tablespace from the base travel table, as updates and deletes to the base table cause
writes to the history table as well.
Finally, the actual system-period temporal table base-history table pair is set up (and can be
used transparently by DB2) when an ALTER TABLE with the ADD VERSIONING USE
HISTORY TABLE travel_history option is issued against the database. The
system-period temporal table is NOT operational until this final step is completed.
When dropping a system-period Temporal Table, you simply issue a DROP TABLE for the
base table and the associated history table is automatically dropped as well.
If you want to drop the base table and keep the history table, you must deactivate the
linkage between the base table and the history table with an ALTER TABLE base_table
DROP VERSIONING statement prior to issuing the DROP TABLE base_table statement.
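The three steps can be sketched as follows; the period column names follow the course example, while the other columns and the table space name are assumptions:

```sql
-- Step 1: base table with the three required generated columns and
-- the SYSTEM_TIME period.
CREATE TABLE travel (
    trip_name VARCHAR(40)   NOT NULL,
    price     DECIMAL(8,2),
    sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    ts_start  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (sys_start, sys_end)
);

-- Step 2: history table with matching columns, in its own table space.
CREATE TABLE travel_history LIKE travel IN hist_ts;

-- Step 3: link the pair; the temporal table is not operational until
-- versioning is added.
ALTER TABLE travel ADD VERSIONING USE HISTORY TABLE travel_history;
```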
System-period temporal tables
A system-period temporal table is a table that maintains historical versions of its rows.
Use a system-period temporal table to store current versions of your data and use its
associated history table to transparently store your updated and deleted data rows.
A system-period temporal table includes a SYSTEM_TIME period with columns that
capture the begin and end times when the data in a row is current. The database
manager also uses the SYSTEM_TIME period to preserve historical versions of each
table row whenever updates or deletes occur. The database manager stores these rows
in a history table that is exclusively associated with a system-period temporal table.
Adding versioning establishes the link between the system-period temporal table and
the history table. With a system-period temporal table, your queries have access to your
data at the current point in time and the ability to retrieve data from past points in time.
A system-period temporal table also includes a transaction start-ID column. This
column captures the time when execution started for a transaction that impacts the row.
If multiple rows are inserted or updated within a single SQL transaction, then the values
for the transaction start-ID column are the same for all the rows and are distinct from the
values generated for this column by other transactions. This common start-ID column
value means you can use the transaction start-ID column to identify all the rows in the
tables that were written by the same transaction.
Query the past: what trips were available on 03/01/2012 for less than $500?
Current date = May 1, 2012
SELECT trip_name FROM travel FOR SYSTEM_TIME AS OF '03/01/2012'
WHERE price < 500.00
Query the past and the present: In 2011, how many different tours
were offered?
SELECT COUNT (DISTINCT trip_name) FROM travel
FOR SYSTEM_TIME BETWEEN '01/01/2011' AND '01/01/2012'
Notes:
The discussion of system-period temporal tables concludes with some example queries
utilizing the same travel table and the data that was previously modified during earlier
examples of system-period temporal table operations.
For these examples, the current date is May 1, 2012.
The first query wants to find all trip names in the travel table that cost less than $500 as of
03/01/2012. Since 03/01/2012 is earlier than the current date of 05/01/2012,
DB2 will utilize both the base and history tables for the query results. In this type of query it
is possible that the base table does not currently contain rows that match the predicates but
the history table contains rows that were valid on 03/01/2012 that could be returned.
The second query is the typical DB2 SELECT statement that returns the trip_name
column from the travel table where the destination is Brazil. Since there is no AS OF,
BETWEEN, or FROM date specified in the query, only the base table
(travel) is queried. This behavior is identical to specifying FOR SYSTEM_TIME AS OF
CURRENT DATE on the SELECT statement.
The third query determines the total number of tours that were offered at any time
during 2011. Since this SELECT has added the FOR SYSTEM_TIME BETWEEN
'01/01/2011' AND '01/01/2012' clause, DB2 will access both the base table and history
table to retrieve the query result.
Application-period temporal tables
An application-period temporal table is a table that stores the in effect aspect of application
data. Use an application-period temporal table to manage data based on time criteria by
defining the time periods when data is valid.
Similar to a system-period temporal table, an application-period temporal table includes a
BUSINESS_TIME period with columns that indicate the time period when the data in that
row is valid or in effect. You provide the begin time and end time for the BUSINESS_TIME
period associated with each row. However, unlike a system time-period temporal table,
there is no separate history table. Past, present, and future effective dates and their
associated business data are maintained in a single table. You can control data values by
BUSINESS_TIME period and use application-period temporal tables for modeling data in
the past, present, and future.
Creating an application-period temporal table results in a table that manages data based
on when its data is valid or in effect.
Example
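A sketch of the policy_info table used in the following pages; the column data types and key clause are assumptions:

```sql
-- Application-period temporal table: the application supplies the
-- BUSINESS_TIME begin and end values; there is no history table.
CREATE TABLE policy_info (
    policy_id CHAR(4) NOT NULL,
    coverage  INT     NOT NULL,
    bus_start DATE    NOT NULL,
    bus_end   DATE    NOT NULL,
    PERIOD BUSINESS_TIME (bus_start, bus_end),
    PRIMARY KEY (policy_id, BUSINESS_TIME WITHOUT OVERLAPS)
);
```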
The coverage for policy A123 shows an increase from 12000 to 16000
on July 1 (2008-07-01), but an earlier increase to 14000 is missing:
UPDATE policy_info
FOR PORTION OF BUSINESS_TIME
FROM '2008-06-01' TO '2008-08-01'
SET coverage = 14000 WHERE policy_id = 'A123';
Notes:
When querying an application-period temporal table, you can include FOR
BUSINESS_TIME in the FROM clause. Using FOR BUSINESS_TIME specifications, you
can query the current, past, and future state of your data.
Time periods are specified as follows:
- AS OF value1
Includes all the rows where the begin value for the period is less than or equal to
value1 and the end value for the period is greater than value1.
- FROM value1 TO value2
Includes all the rows where the begin value for the period is greater than or equal to
value1 and the end value for the period is less than value2. This means that the
begin time is included in the period, but the end time is not.
- BETWEEN value1 AND value2
Includes all the rows where any time period overlaps any point in time between
value1 and value2. This means that the begin time and end time are both included in
the period.
The first example query uses the FOR BUSINESS_TIME AS OF clause to see whether
policy A123 had coverage on the specific date 2008-07-15.
SELECT policy_id, coverage, bus_start, bus_end
FROM policy_info
FOR BUSINESS_TIME AS OF '2008-07-15'
where policy_id = 'A123'
The next example query uses the FOR BUSINESS_TIME FROM...TO clause to
retrieve policy information for one policy for a range of dates.
SELECT policy_id, coverage, bus_start, bus_end
FROM policy_info
FOR BUSINESS_TIME FROM
'2008-01-01' TO '2008-06-15'
where policy_id = 'A123'
Updating data in an application-period temporal table can be similar to updating data in a
regular table, but data can also be updated for specified points of time in the past, present,
or future. Point in time updates can result in rows being split and new rows being inserted
automatically into the table.
In addition to the regular UPDATE statement, application-period temporal tables also
support time range updates where the UPDATE statement includes the FOR PORTION OF
BUSINESS_TIME clause. A row is a candidate for updating if its period-begin column,
period-end column, or both fall within the range specified in the FOR PORTION OF
BUSINESS_TIME clause.
If for example the policy_info table contained coverage for policy A123 and included an
increase from 12000 to 16000 on July 1 (2008-07-01), but an earlier increase to 14000 is
missing, the following UPDATE statement could be used:
UPDATE policy_info
FOR PORTION OF BUSINESS_TIME FROM '2008-06-01' TO '2008-08-01'
SET coverage = 14000
WHERE policy_id = 'A123';
Compression for Indexes, Temporary data and XML data was added in
DB2 9.7
Figure 5-24. Data Row Compression summary
Notes:
You can use less disk space for your tables by taking advantage of the DB2 table
compression capabilities. Compression saves disk storage space by using fewer database
pages to store data.
Also, because you can store more rows per page, fewer pages must be read to access the
same amount of data. Therefore, queries on a compressed table need fewer I/O operations
to access the same amount of data. Since there are more rows of data on a buffer pool
page, the likelihood that needed rows are in the buffer pool increases. For this reason,
compression can improve performance through improved buffer pool hit ratios. In a similar
way, compression can also speed up backup and restore operations, as fewer pages
need to be transferred to back up or restore the same amount of data.
You can use compression with both new and existing tables. Temporary tables are also
compressed automatically, if the database manager deems it to be advantageous to do so.
There are two main types of data compression available for tables:
Row compression (available with a license for the DB2 Storage Optimization Feature).
Value compression
For a particular table, you can use row compression and value compression together or
individually. However, you can use only one type of row compression for a particular table.
Classic row compression, sometimes referred to as static compression, compresses data
rows by replacing patterns of values that repeat across rows with shorter symbol strings.
The benefits of using classic row compression are similar to those of adaptive
compression, in that you can store data in less space, which can significantly save storage
costs. Unlike adaptive compression, however, classic row compression uses only a
table-level dictionary to store globally recurring patterns; it doesn't use the page-level
dictionaries that are used to compress data dynamically.
How classic row compression works
Classic row compression uses a table-level compression dictionary to compress data
by row. The dictionary is used to map repeated byte patterns from table rows to much
smaller symbols; these symbols then replace the longer byte patterns in the table rows.
The compression dictionary is stored with the table data rows in the data object portions
of the table.
What data gets compressed?
Data that is stored in base table rows and log records is eligible for classic row
compression. In addition, the data in XML storage objects is eligible for compression. You
can compress LOB data that you place inline in a table row; however, storage objects for
long data objects are not compressed.
Compression for temporary tables
Compression for temporary tables is enabled automatically with the DB2 Storage
Optimization Feature. Only classic row compression is used for temporary tables.
System temporary tables
When executing queries, the DB2 optimizer considers the storage savings and the
impact on query performance that compression of system-created temporary tables
offers to determine whether it is worthwhile to use compression. If it is worthwhile,
classic row compression is used automatically. The minimum size that a table must be
before compression is used is larger for temporary tables than for regular tables.
User-created temporary tables
Created global temporary tables (CGTTs) and declared global temporary tables
(DGTTs) are always compressed using classic row compression.
You can use the explain facility or the db2pd tool to see whether the optimizer used
compression for system temporary tables.
[Figure: classic row compression uses one static table-level dictionary; adaptive compression adds a dynamic page-level dictionary to each DB2 page]
Notes:
Adaptive compression
Adaptive compression, introduced with DB2 10.1, improves upon the compression
rates that can be achieved using classic row compression by itself. Adaptive compression
incorporates classic row compression; however, it also works on a page-by-page basis to
further compress data. Of the various data compression techniques in the DB2 product,
adaptive compression offers the most dramatic possibilities for storage savings.
How adaptive compression works
Adaptive compression actually uses two compression approaches. The first employs the
same table-level compression dictionary used in classic row compression to compress
data based on repetition within a sampling of data from the table as a whole. The second
approach uses a page-level dictionary-based compression algorithm to compress data
based on data repetition within each page of data. The dictionaries map repeated byte
patterns to much smaller symbols; these symbols then replace the longer byte patterns in
the table. The table-level compression dictionary is stored within the table object for which
it is created, and is used to compress data throughout the table. The page-level
compression dictionary is stored with the data in the data page, and is used to compress
only the data within that page.
Turning adaptive compression on or off
To use adaptive compression, you must have a license for the DB2 Storage Optimization
Feature. You compress table data by setting the COMPRESS attribute of the table to YES.
You can set this attribute when you create the table by specifying the COMPRESS YES
option for the CREATE TABLE statement. You can also alter an existing table to use
compression by using the same option for the ALTER TABLE statement. After you enable
compression, operations that add data to the table, such as an INSERT, LOAD INSERT, or
IMPORT INSERT command operation, can use adaptive compression. In addition, index
compression is enabled for the table. Indexes are created as compressed indexes unless
you specify otherwise and if they are the types of indexes that can be compressed.
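As a sketch (the table definitions are illustrative):

```sql
-- New table with adaptive compression; indexes created on it are
-- compressed by default.
CREATE TABLE sales (
    sale_id INT NOT NULL,
    region  VARCHAR(20),
    amount  DECIMAL(10,2)
) COMPRESS YES ADAPTIVE;

-- Enable compression on an existing table; rows already stored are
-- not compressed until they are rewritten.
ALTER TABLE inventory COMPRESS YES ADAPTIVE;
```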
[Figure: sample name, phone, and address rows compressed with a global static table dictionary — repeated patterns such as 'San Jose California 95141' and common phone prefixes are replaced by dictionary symbols like [1], [2], [3]]
Notes:
Adaptive compression must always start with a Classic compression dictionary. This
compression dictionary is similar to prior versions of DB2. The STATIC dictionary contains
patterns of frequently used data that is found ACROSS the entire table. Either a classic
reorg must be used for existing tables to generate this STATIC dictionary, or the dictionary
gets built when a table hits a threshold of data (typically 1-2MB of data) when using
AUTOMATIC COMPRESSION.
A customer needs to be aware that altering a table to use ADAPTIVE compression will
cause the following:
Automatic dictionary creation will be done once about 2M of data is populated in the
table
All of the data in the table PRIOR to the STATIC dictionary being created will not be
table compressed; it remains eligible for ADAPTIVE compression, however
A full OFFLINE REORG will be required if you want to compress all of the data in the
table
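The offline reorg mentioned in the last bullet can be issued as follows (the table name is illustrative):

```sql
-- Classic (offline) REORG; RESETDICTIONARY rebuilds the static
-- table-level dictionary and compresses all existing rows.
REORG TABLE inventory RESETDICTIONARY
```

Note that REORG is a command rather than an SQL statement, so it is run from the command line processor.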
[Figure: adaptive compression example — each data page carries a small local dictionary that replaces patterns recurring only on that page (name fragments, phone prefixes) in addition to the global static dictionary symbols]
Notes:
Once a STATIC dictionary is built, the adaptive compression feature will create local
page-level dictionaries. In the case of individual pages, there may be recurring patterns
that may not have been picked up by the STATIC dictionary. This will also be the case as
more data is added to the table since new pages may contain patterns of data that did not
exist when the original STATIC dictionary was created.
This ADAPTIVE compression places a small dictionary on the page itself. The algorithm
decides whether or not the savings of compression outweigh the costs of storing the
dictionary, similar to the way STATIC compression may not compress rows on a page.
The actual process of creating the page dictionary is dependent on whether or not a
threshold is met. Rebuilding the page dictionary for every INSERT, UPDATE, or DELETE
would result in a very high amount of overhead. Instead, the algorithm checks how
stale the dictionary is and updates it when it believes that higher savings can be
achieved.
db2look utility
DDL and statistics extraction tool, used to
capture table definitions and generate the
corresponding DDL.
In addition to capturing the DDL for a set of tables, it can create a test system that
mimics a production system by generating the following things:
SQL
Notes:
The db2look command extracts the Data Definition Language (DDL) statements that are
required to reproduce the database objects of a production database on a test database.
The db2look command generates the DDL statements by object type. Note that this
command ignores all objects under SYSTOOLS schema except user-defined functions and
stored procedures.
It is often advantageous to have a test system that contains a subset of the data of a
production system, but access plans selected for such a test system are not necessarily
the same as those that would be selected for the production system. However, using the
db2look tool, you can create a test system with access plans that are similar to those that
would be used on the production system. You can use this tool to generate the UPDATE
statements that are required to replicate the catalog statistics on the objects in a production
database on a test database. You can also use this tool to generate UPDATE DATABASE
CONFIGURATION, UPDATE DATABASE MANAGER CONFIGURATION, and db2set
commands so that the values of query optimizer-related configuration parameters and
registry variables on a test database match those of a production database.
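A sketch of that usage (the database and output file names are illustrative):

```shell
# -e extracts the DDL, -m generates UPDATE statements that replicate
# catalog statistics, and -f generates update commands for
# optimizer-related configuration parameters and registry variables.
db2look -d proddb -e -m -f -o mimic.sql
```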
You should check the DDL statements that are generated by the db2look command
because they might not reproduce all characteristics of the original SQL objects. For table
spaces on partitioned database environments, DDL might not be complete if some
database partitions are not active. Make sure all database partitions are active using the
ACTIVATE DATABASE command.
Authorization
SELECT privilege on the system catalog tables.
In some cases, such as generating table space container DDL, you will require one of the
following authorities:
SYSADM
SYSCTRL
SYSMAINT
SYSMON
DBADM
EXECUTE privilege on the ADMIN_GET_STORAGE_PATHS table function
The db2look command can extract DDL statements for the following database objects:
Aliases
Audit policies
Check constraints
Function mappings
Function templates
Global variables
Indexes (including partitioned indexes on partitioned tables)
Index specifications
Materialized query tables (MQTs)
Nicknames
Primary key, referential integrity, and check constraints
Referential integrity constraints
Roles
Schemas
Security labels
Security label components
Security policies
Sequences
Servers
Stored procedures
Tables
Note: Values from column STATISTICS_PROFILE in the SYSIBM.SYSTABLES catalog
table are not included.
Triggers
Trusted contexts
Type mappings
User mappings
User-defined distinct types
User-defined functions
User-defined methods
User-defined structured types
User-defined transforms
Views
Wrappers
db2look examples
To capture all of the DDL for a database (includes all tables, views,
RI, constraints, triggers, and so on):
db2look -d proddb -e -o statements.sql
{Edit the output file and change the database name}
db2 -tvf statements.sql
Notes:
On Windows operating systems, the db2look command must be run from a DB2 command
window.
Here are some additional examples using db2look:
Generate the DDL statements for all objects (federated and non-federated) in the
federated database FEDDEPART. For federated DDL statements, only those that apply
to the specified wrapper, FEDWRAP, are generated. The db2look output is sent to
standard output:
db2look -d feddepart -e -wrapper fedwrap
Generate a script file that includes only non-federated DDL statements. The following
system command can be run against a federated database (FEDDEPART) and yet only
produce output like that found when run against a database which is not federated. The
db2look output is sent to a file out.sql:
db2look -d feddepart -e -nofed -o out
Generate the DDL statements for objects that have schema name walid in the database
DEPARTMENT. The files required to register any included XML schemas and DTDs are
exported to the current directory. The db2look output is sent to file db2look.sql:
db2look -d department -z walid -e -xs -o db2look.sql
Generate the DDL statements for objects created by all users in the database
DEPARTMENT. The files required to register any included XML schemas and DTDs are
exported to directory /home/ofer/ofer/. The db2look output is sent to standard output:
db2look -d department -a -e -xs -xdir /home/ofer/ofer/
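Two further variations are often useful (object names here are illustrative; -t and -m are standard db2look options):

```
# Extract the DDL for a single table, EMPLOYEE in schema PROD:
db2look -d proddb -z prod -t employee -e -o emp_ddl.sql

# Generate UPDATE statements that mimic the production catalog statistics,
# so a test database can reproduce the optimizer's access plan choices:
db2look -d proddb -m -o mimic_stats.sql
```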
Unit summary
Having completed this unit, you should be able to:
Describe the DB2 object hierarchy
Create the following objects:
Schema, Table, View, Alias, Index
Notes:
Student exercise
References
Command Reference
Data Movement Utilities Guide and Reference
Unit objectives
After completing this unit, you should be able to:
Discuss using the INSERT SQL statement to populate tables
Explain the differences between IMPORT and LOAD processing
Explain the EXPORT, IMPORT, and LOAD command options
Create and use Exception Tables and Dump-Files
Check table status using LOAD QUERY
Describe Load Pending and Set Integrity Pending status for a table
Use the SET INTEGRITY command
Discuss the db2move and db2look commands
Use the ADMIN_MOVE_TABLE procedure to move a table to different
table spaces
List some of the features of the Ingest utility for continuous data ingest
Notes:
These are the objectives for this unit.
Notes:
The INSERT statement inserts rows into a table, nickname, or view, or the underlying
tables, nicknames, or views of the specified fullselect. Inserting a row into a nickname
inserts the row into the data source object to which the nickname refers.
Inserting a row into a view also inserts the row into the table on which the view is based, if
no INSTEAD OF trigger is defined for the insert operation on this view. If such a trigger is
defined, the trigger will be executed instead.
EXPORT/IMPORT overview
ASCII
DEL ASCII
IMPORT
PC/IXF
EXPORT
Notes:
The IMPORT utility might be used to insert data from an input file into a table, with the input
file containing data from another database or application program.
The IMPORT utility COMMITCOUNT n (AUTOMATIC) option keeps log sizes manageable.
The IMPORT RESTARTCOUNT n option allows an import to restart after the last committed record (at record n+1).
The EXPORT utility might be used to copy data from a table to an output file for use by
another database or spreadsheet program. This file can be used to load tables, providing a
convenient method of migrating data from one database to another.
The IMPORT and EXPORT utilities might be used to move data between databases which
exist on different DB2 Database platforms. These utilities use the database engine to
execute standard SQL statements, so, for example, you could create a table during the
execution of the IMPORT utility.
The IMPORT and EXPORT utilities might be used to move data between DB2 and DRDA
host databases if DB2 Connect is installed.
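A minimal sketch of moving one table between two databases (database, table, and file names here are illustrative):

```
# Export the table from the source database in IXF format:
db2 connect to proddb
db2 "export to artists.ixf of ixf messages exp.msgs select * from artists"

# Import it into the target database, committing every 1000 rows
# to keep active log usage manageable:
db2 connect to testdb
db2 "import from artists.ixf of ixf commitcount 1000 messages imp.msgs insert into artists"
```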
EXPORT TO filename OF filetype
   [LOBS TO lob-path ,... ]
   [LOBFILE filename ,... ]
   [MODIFIED BY filetype-mod ... ]
   [MESSAGES message-file]
   select-statement
Notes:
The EXPORT command can be used to export data from a database to one of several
external file formats. The user specifies the data to be exported by supplying an SQL
SELECT statement.
File types that are supported include:
DEL (delimited ASCII format), which is used by a variety of database manager and file
manager programs.
IXF (Integration Exchange Format, PC version) is a proprietary binary format. This file
type can be used to move data between operating systems.
The MODIFIED BY options allow you to specify different items depending on the file type
being created. For example, for delimited output data, you can specify the character string
delimiter and the column delimiter. The Information Center can be used to list all of the
supported options.
Some EXPORT command parameters:
LOBS TO lob-path
Specifies one or more paths to directories in which the LOB files are to be stored. There
will be at least one file per LOB path, and each file will contain at least one LOB. The
maximum number of paths that can be specified is 999.
LOBFILE filename
Specifies one or more base file names for the LOB files. When name space is
exhausted for the first name, the second name is used, and so on. The maximum
number of file names that can be specified is 999.
MODIFIED BY filetype-mod
Specifies file type modifier options.
lobsinfile
xmlinsepfiles
lobsinsepfiles
xmlgraphic
MESSAGES message-file
Specifies the destination for warning and error messages that occur during an export
operation (the path must exist).
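Putting these parameters together, a sketch of an export that writes LOB data to separate files (paths, table, and file names are assumptions):

```
# LOB values are written to files under /db2/lobs using the base name
# custlob; the lobsinfile modifier records each LOB file location in
# the exported data instead of the LOB value itself:
db2 "export to cust.del of del lobs to /db2/lobs/ lobfile custlob
     modified by lobsinfile messages exp.msgs select * from customer"
```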
(Slide: EXPORT example — connecting to the MUSICDB database and exporting the artists table to the file artexprt.)
Notes:
You need DATAACCESS authority, or the CONTROL or SELECT privilege on each
participating table or view, to export data from a database.
Before running the export utility, you must be connected (or be able to implicitly connect) to
the database from which you want to export the data. If implicit connect is enabled, a
connection to the default database is established. Utility access to Linux, UNIX, or
Windows database servers from Linux, UNIX, or Windows clients must be through a direct
connection through the engine and not through a DB2 Connect gateway or loop back
environment.
Because the utility issues a COMMIT statement, complete all transactions and release all
locks by issuing a COMMIT or a ROLLBACK statement before running the export utility.
There is no requirement for applications accessing the table and using separate
connections to disconnect.
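The export shown on the visual could be issued as follows (the output format and the message file name are assumptions, since the slide names only the database, file, and table):

```
db2 connect to musicdb
db2 "export to artexprt of ixf messages artmsgs select * from artists"
```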
IMPORT FROM filename OF filetype
   [LOBS FROM lob-path ,... ]
   [MODIFIED BY filetype-mod ... ]
   [ALLOW NO ACCESS | ALLOW WRITE ACCESS]
   [COMMITCOUNT n | COMMITCOUNT AUTOMATIC] [RESTARTCOUNT n]
   [MESSAGES message-file]
   { INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE }
      INTO table-name [( insert-column ,... )]
   | CREATE INTO table-name [( insert-column ,... )] [tblspace-specs]

tblspace-specs:
   [IN tablespace-name [INDEX IN tablespace-name] [LONG IN tablespace-name]]
Notes:
The syntax of the IMPORT command is shown. More information on its options will be
shown via examples on the following pages.
By default, the import holds an exclusive (X) lock on the target table. This prevents
concurrent applications from accessing table data. With the ALLOW WRITE ACCESS
option, the import runs in online mode and an intent exclusive (IX) lock is set on the target table.
This allows concurrent readers and writers to access the table data. ALLOW WRITE
ACCESS is not possible with the REPLACE, CREATE, or REPLACE_CREATE options, or
with buffered inserts. The import operation will periodically commit inserted data to prevent
lock escalation and to avoid running out of active log space. These commits will be
performed even if the COMMITCOUNT option was not used.
The COMMITCOUNT n/AUTOMATIC performs a commit after every n records. When
AUTOMATIC is specified, the import internally determines when a commit needs to be
performed. The import utility will commit for either one of two reasons:
To avoid running out of active log space
To avoid lock escalation
6-10 DB2 10 for LUW: Basic Admin for AIX
If ALLOW WRITE ACCESS option is specified and the COMMITCOUNT option is not
specified, the import utility will perform commits as if COMMITCOUNT AUTOMATIC has
been specified.
Some IMPORT command parameters:
LOBS FROM lob-path
The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in
the column that will be loaded into the LOB column. The maximum number of paths that
can be specified is 999.
MODIFIED BY filetype-mod
Specifies file type modifier options.
compound=x
generatedignore
This modifier informs the import utility that data for all generated
columns is present in the data file but should be ignored. This
results in all values for the generated columns being generated
by the utility. This modifier cannot be used with the
generatedmissing modifier.
generatedmissing If this modifier is specified, the utility assumes that the input
data file contains no data for the generated columns (not even
NULLs), and will therefore generate a value for each row. This
modifier cannot be used with the generatedignore modifier.
identityignore
This modifier informs the import utility that data for the identity
column is present in the data file but should be ignored. This
results in all identity values being generated by the utility. The
behavior will be the same for both GENERATED ALWAYS and
GENERATED BY DEFAULT identity columns. This means that
for GENERATED ALWAYS columns, no rows will be rejected.
This modifier cannot be used with the identitymissing modifier.
identitymissing
lobsinfile
no_type_id
nodefaults
norowwarnings
seclabelchar
Indicates that security labels in the input source file are in the
string format for security label values rather than in the default
encoded numeric format.
seclabelname
usedefaults
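As one example of these modifiers (a hypothetical table with an identity column), identitymissing tells IMPORT that the input file does not contain the identity column, so a value is generated for every row:

```
db2 "import from emp.del of del modified by identitymissing
     messages imp.msgs insert into employee"
```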
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table is acquired
before any rows are inserted. This prevents concurrent applications from accessing
table data. This is the default import behavior.
ALLOW WRITE ACCESS
Runs import in the online mode. An intent exclusive (IX) lock on the target table is
acquired when the first row is inserted. This allows concurrent readers and writers to
access table data.
COMMITCOUNT n/AUTOMATIC
Performs a COMMIT after every n records are imported. When a number n is specified,
import performs a COMMIT after every n records are imported. When compound inserts
are used, a user-specified commit frequency of n is rounded up to the first integer
multiple of the compound count value. When AUTOMATIC is specified, import internally
determines when a commit needs to be performed.
RESTARTCOUNT n
Specifies that an import operation is to be started at record n + 1. The first n records are
skipped. This option is functionally equivalent to SKIPCOUNT. RESTARTCOUNT and
SKIPCOUNT are mutually exclusive.
MESSAGES message-file
Specifies the destination for warning and error messages that occur during an import
operation.
INSERT
Adds the imported data to the table without changing the existing table data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows (of the target
table) with matching Primary Keys.
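A sketch of an INSERT_UPDATE import (table and file names assumed): rows whose primary key already exists in the target are updated, and rows with new keys are inserted.

```
db2 "import from parts.del of del commitcount automatic
     messages parts.msgs insert_update into inventory"
```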
REPLACE
Deletes all existing data from the table by truncating the data object, and inserts the
imported data. The table definition and the index definitions are not changed. This
option can only be used if the table exists.
REPLACE_CREATE
If the table exists, deletes all existing data from the table by truncating the data object,
and inserts the imported data without changing the table definition or the index
definitions.
If the table does not exist, creates the table and index definitions, as well as the row
contents, in the code page of the database.
INTO table-name
Specifies the database table into which the data is to be imported. This table cannot be
a system table, a declared temporary table or a summary table.
CREATE
Creates the table definition and row contents in the code page of the database. If the
data was exported from a DB2 table, sub-table, or hierarchy, indexes are created.
IN tablespace-name
Identifies the table space in which the table will be created. The table space must exist,
and must be a REGULAR table space. If no other table space is specified, all table
parts are stored in this table space. If this clause is not specified, the table is created in
a table space created by the authorization ID.
INDEX IN tablespace-name
Identifies the table space in which any indexes on the table will be created. This option
is allowed only when the primary table space specified in the IN clause is a DMS table
space. The specified table space must exist, and must be a REGULAR or LARGE DMS
table space.
LONG IN tablespace-name
Identifies the table space in which the values of any long columns (LONG VARCHAR,
LONG VARGRAPHIC, LOB data types, or distinct types with any of these as source types)
will be stored.
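Combining the CREATE mode with the table space clauses might look like this (the table space names are assumptions; INDEX IN and LONG IN require DMS table spaces):

```
# Create the DEPT table from the definition stored in the IXF file,
# placing data, indexes, and long data in separate table spaces:
db2 "import from dept.ixf of ixf messages dept.msgs
     create into dept in datats1 index in indexts1 long in longts1"
```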
SQL3221W  ...Begin COMMIT WORK. Input Record Count = "58".
SQL3222W  ...COMMIT of any database changes was successful.
SQL3149N  "58" rows were processed from the input file. "58" rows were
successfully inserted into the table. "0" rows were rejected.
Notes:
The visual shows the IMPORT command that could be used to copy the data from an IXF
formatted file into a table. The INSERT mode would add the new rows to the table leaving
existing data in place.
The messages indicate the number of rows processed and include error messages if some
rows of data were rejected.
LOAD
Notes:
The Import utility performs SQL INSERTs, so its capabilities are similar to an application
program performing inserts. The Load utility formats the pages and writes them directly into
the database.
The IMPORT utility can create the target table, including indexes if the input is an IXF
formatted file. The LOAD utility adds data to an existing table and updates the table's
indexes.
The ALLOW WRITE ACCESS option of the IMPORT utility can avoid a table level lock, but
the COMMITCOUNT option should be used to prevent lock escalation for larger input files.
The LOAD utility allows concurrent reads from applications if the ALLOW READ ACCESS
option is used for a LOAD INSERT.
The IMPORT utility uses SQL INSERTS, which are normally logged, so the processing is
recoverable, but may consume too much database log space. The INSERT processing of
the IMPORT will enforce constraints and fire any INSERT triggers defined for a table. The
LOAD utility does minimal logging and is less likely to run out of log space. The LOAD does
not directly check constraints or fire triggers. The LOAD utility will put a table into a SET
INTEGRITY pending status to make sure the constraints are checked before the new data
can be accessed.
The IMPORT utility can reuse space in a table's data pages that was left when rows were
deleted. In general, a LOAD operation creates new extents and will not try to use any free
space in existing pages.
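The trade-off can be sketched with the same input file processed both ways (table and file names assumed):

```
# IMPORT: logged SQL inserts; triggers fire and constraints are
# enforced row by row as the data arrives:
db2 "import from sales.del of del commitcount automatic
     messages imp.msgs insert into sales"

# LOAD: minimal logging; if the table has check or referential
# constraints it is left in Set Integrity Pending state, so the
# constraints must be checked afterward:
db2 "load from sales.del of del messages load.msgs insert into sales"
db2 "set integrity for sales immediate checked"
```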
The four phases of LOAD:
1. LOAD
   - Load data into tables
   - Collect index keys / sort
   - Consistency points at SAVECOUNT
   - Invalid data rows in dump file; messages in message file
2. BUILD
   - Indexes created or updated
3. DELETE
   - Unique key violations placed in exception table
   - Messages generated for unique key violations
   - Deletes unique key violation rows
4. INDEX COPY
   - Copy indexes from temporary table space to index table space
Notes:
The load utility is capable of efficiently moving large quantities of data into newly created
tables, or into tables that already contain data. The utility can handle most data types,
including XML, large objects (LOBs), and user-defined types (UDTs). The load utility is
faster than the import utility, because it writes formatted pages directly into the database,
while the import utility performs SQL INSERTs. The load utility does not fire triggers, and
does not perform referential or table constraints checking (other than validating the
uniqueness of the indexes).
The load process consists of four distinct phases:
Phase 1 Load
During the load phase, data is loaded into the table, and index keys and table statistics
are collected, if necessary. Save points, or points of consistency, are established at
intervals specified through the SAVECOUNT parameter in the LOAD command.
Messages are generated, indicating how many input rows were successfully loaded at
the time of the save point.
Phase 2 Build
During the build phase, indexes are produced based on the index keys collected during
the load phase. The index keys are sorted during the load phase, and index statistics
are collected (if the STATISTICS USE PROFILE option was specified, and profile
indicates collecting index statistics). The statistics are similar to those collected through
the RUNSTATS command.
Phase 3 Delete
During the delete phase, the rows that caused a unique or primary key violation are
removed from the table. These deleted rows are stored in the load exception table, if
one was specified.
Phase 4 Index copy
During the index copy phase, the index data is copied from a system temporary table
space to the original table space. This will only occur if a system temporary table space
was specified for index creation during a load operation with the ALLOW READ ACCESS
option specified.
(Figure: the input data flows through the Load, Build, Delete, and Index Copy phases; this slide highlights the Load phase.)
Notes:
During the Load phase, data is loaded into the table, and index keys and table statistics are
collected, if necessary.
Save points, or points of consistency, are established at intervals specified by you in the
SAVECOUNT parameter on the LOAD command. Messages are generated to let you know
how many input rows have been successfully loaded at the time of the save point. If a
failure occurs, you can restart the LOAD operation; the RESTART option automatically
restarts the LOAD from the last successful consistency point. The TERMINATE option rolls
back the failed load operation.
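A sketch of those options in CLP form (table and file names assumed):

```
# Establish a consistency point every 50,000 input rows:
db2 "load from big.del of del savecount 50000 messages big.msgs insert into trans"

# After a failure, continue from the last consistency point:
db2 "load from big.del of del restart into trans"

# ...or roll the interrupted load operation back instead:
db2 "load from big.del of del terminate into trans"
```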
(Figure: the input data flows through the Load, Build, Delete, and Index Copy phases; this slide highlights the Build phase.)
Notes:
During the Build phase, indexes are produced based on the index keys collected during the
Load phase. The index keys are sorted during the Load phase and index statistics are
collected (if the STATISTICS YES with INDEXES option was specified). The statistics are
similar to those collected through the RUNSTATS command. If a failure occurs during the
Build phase, the RESTART option automatically restarts the load operation at the
appropriate point.
(Figure: the input data flows through the Load, Build, Delete, and Index Copy phases; this slide highlights the Delete phase.)
Notes:
During the Delete phase, rows that caused a unique key violation are removed from the
table.
Unique key violations are placed into the exception table, if one was specified, and
messages about rejected rows are written to the message file. Following the completion of
the Load process, review these messages, resolve any problems, and insert corrected
rows into the table.
Do not attempt to delete or to modify any temporary files created by the Load utility. Some
temporary files are critical to the Delete phase. If a failure occurs during the Delete phase,
the RESTART option automatically restarts the Load operation at the appropriate point.
Each deletion event is logged. If you have a large number of records that violate the
uniqueness condition, the log could fill up during the Delete phase.
(Figure: the input data flows through the Load, Build, Delete, and Index Copy phases; this slide highlights the Index Copy phase.)
Notes:
During the Index Copy phase, the new set of index data is copied from a system temporary
table space to the original table space.
Important
This will only occur if a system temporary table space was specified for index creation
during a Load operation with the ALLOW READ ACCESS and USE tablespace-name options specified.
LOAD [CLIENT] FROM { file | pipe | device | cursorname } ,...
   OF { ASC | DEL | IXF | CURSOR }
   [MODIFIED BY filetype-mod ... ]
   [SAVECOUNT n] [ROWCOUNT n] [WARNINGCOUNT n]
   [MESSAGES msg-file]
   { INSERT | REPLACE | RESTART | TERMINATE }
   INTO table-name [( insert-column ,... )]
   [FOR EXCEPTION table-name]
   [statistics options] [copy options]
   [ALLOW NO ACCESS | ALLOW READ ACCESS [USE tablespace-name]]
Notes:
The command syntax for the LOAD command provides a number of processing options.
Some of the key LOAD command parameters are:
CLIENT
Specifies that the data to be loaded resides on a remotely connected client. This option
is ignored if the load operation is not being invoked from a remote client. This option is
ignored if specified in conjunction with the CURSOR filetype.
FROM filename/pipename/device/cursorname
Specifies the file, pipe, device, or cursor referring to an SQL statement that contains the
data being loaded. If the input source is a file, pipe, or device, it must reside on the
database partition where the database resides, unless the CLIENT option is specified.
OF filetype
Specifies the format of the data:
- ASC (non-delimited ASCII format)
- DEL (delimited ASCII format)
- IXF (integrated exchange format, PC version), exported from the same or from
another DB2 table.
- CURSOR (a cursor declared against a SELECT or VALUES statement).
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the Load utility.
SAVECOUNT n
Specifies that the Load utility should set consistency points after every n rows. This
value is converted to a page count, and rounded up to intervals of the extent size.
ROWCOUNT n
Specifies the number n of physical records in the file to be loaded. Allows a user to load
only the first n rows in a file.
WARNINGCOUNT n
Stops the load operation after n warnings. Set this parameter if no warnings are
expected, but verification that the correct file and table are being used is desired.
MESSAGES message-file
Specifies the destination for warning and error messages that occur during the load
operation.
INSERT
One of four modes under which the Load utility can execute. Adds the loaded data to
the table without changing the existing table data.
REPLACE
One of four modes under which the Load utility can execute. Deletes all existing data
from the table, and inserts the loaded data. The table definition and index definitions are
not changed.
RESTART
One of four modes under which the Load utility can execute. Restarts a previously
interrupted load operation. The load operation will automatically continue from the last
consistency point in the Load, Build, or Delete phase.
TERMINATE
One of four modes under which the Load utility can execute. Terminates a previously
interrupted load operation, and rolls back the operation to the point in time at which it
started, even if consistency points were passed.
INTO table-name
Specifies the database table into which the data is to be loaded. This table cannot be a
system table or a declared temporary table. An alias, or the fully qualified or unqualified
table name can be specified.
FOR EXCEPTION table-name
Specifies the exception table into which rows in error will be copied. Any row that is in
violation of a unique index or a Primary Key index is copied. The exception table must
exist prior to LOAD.
ALLOW NO ACCESS
Load will lock the target table for exclusive access during the load. The table state will
be set to Load In Progress during the load. ALLOW NO ACCESS is the default
behavior. It is the only valid option for LOAD REPLACE.
ALLOW READ ACCESS
Load will lock the target table in a share mode. The table state will be set to both Load
In Progress and Read Access. Readers can access the non-delta portion of the data
while the table is being loaded.
USE tablespace-name
If the indexes are being rebuilt, a shadow copy of the index is built in table space
tablespace-name and copied over to the original table space at the end of the load
during an INDEX COPY PHASE.
LOCK WITH FORCE
The utility acquires various locks including table locks in the process of loading. Rather
than wait, and possibly timeout, when acquiring a lock, this option allows load to force off
other applications that hold conflicting locks on the target table. Applications holding
conflicting locks on the system catalog tables will not be forced off by the Load utility.
Forced applications will roll back and release the locks the Load utility needs.
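Several of these parameters combined in one hedged example (table, file, exception table, and table space names are assumptions):

```
# Load with concurrent readers allowed, a shadow index build in a
# temporary table space, an exception table for duplicate keys, and
# conflicting applications forced off rather than waited on:
db2 "load from orders.del of del messages ord.msgs
     insert into orders for exception ordexc
     allow read access use tempspace1 lock with force"
```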
LOAD scenario
(Figure: the delimited input file calpar.del, which contains duplicate primary key values, is loaded into table cal.par, which has a primary key and a unique index. Rejected rows are written to dump.fil.000, duplicate-key rows removed in the delete phase are placed in the exception table cal.parexp, and warning and error messages are written to par.msgs.)
Steps shown:
create tables/indexes
obtain delimited input file in sorted format
create exception table
db2 load from calpar.del of del
   modified by dumpfile=<path>/dump.fil
   warningcount 100 messages par.msgs
   insert into cal.par for exception cal.parexp
db2 load query table cal.par
Notes:
In our LOAD scenario, we have created tables and indexes, sorted the input data (or
obtained it in sorted order), and created the exception table.
The LOAD exception table is a user-created table that mimics the definition of the table
being loaded. It is specified by the FOR EXCEPTION option in the LOAD command. The
table is used to store copies of rows that violate unique index rules.
FOR EXCEPTION indicates that any row that is in violation of a unique index rule will be
stored in the table indicated. In our example, cal.parexp will contain those rows. If an
exception table is not provided for the LOAD, and duplicate records are found, then the
LOAD will continue. However, only a warning message is issued about the deletion of
duplicate records, and the deleted duplicate records are not placed anywhere.
Other types of errors (for example, attempting to load a null into a column that is defined as
NOT NULL) will cause a message to be written to the messages file. The second row in our
example will cause this kind of warning in the message file. The
DUMPFILE=qualified-filename option will write any rejected row to the named file. The
name must be a fully qualified name on the server. The file will be created with a partition
number at the end; on a single partition database, this will be 000.
The exception tables used with LOAD are identical to the ones used in the SET
INTEGRITY statement. They can be reused during checking with the SET INTEGRITY
statements. There are a number of rules for creating exception tables; we will see them on
the next visual.
In our example, cal is the owner/schema of the table.
To load data into a table, you must have one of the following:
DATAACCESS
LOAD privilege on the database, and
- INSERT privilege on the table when the Load utility is invoked in INSERT mode
- INSERT and DELETE privilege on the table when the Load utility is invoked in
REPLACE mode
- INSERT privilege on the exception table, if used
Since you will typically be loading large amounts of data using the LOAD command, a
LOAD QUERY command can be used to check the progress of the LOAD process. Options
on the LOAD QUERY command allow you to indicate that you only want to see summary
information (SUMMARYONLY), or just the new information since the last LOAD QUERY
was issued (SHOWDELTA).
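For example, while a load against cal.par is running (the output path is illustrative):

```
# Full status, written to a local file:
db2 "load query table cal.par to /tmp/loadquery.out"

# Summary information only:
db2 "load query table cal.par summaryonly"

# Only the progress made since the previous LOAD QUERY:
db2 "load query table cal.par showdelta"
```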
Notes:
The first n columns of the exception table reflect the definition of the table being loaded. All
column attributes (type, length, and nullability) should be identical. An exception
table cannot contain an identity column or any other type of generated column.
All the columns of the exception table should be free of any constraints and triggers.
Constraints include referential integrity and check constraints, as well as unique index
constraints that could cause errors on insert.
The n + 1 column of the exception table is an optional TIMESTAMP column. The
timestamp column in the exception table can be used to distinguish newly-inserted rows
from the old ones, if necessary.
The n + 2 column should be of type CLOB (32 KB) or larger. This column is optional but
recommended, and will be used to give the names of the constraints that the data within
the row violates.
No additional columns are allowed.
If the original table has generated columns (including the IDENTITY property), the
corresponding columns in the exception table should not specify the generated property.
It should also be noted that a user invoking any facility (LOAD, SET INTEGRITY) that might
cause rows to be inserted into the exception table must have INSERT privilege on the
exception table.
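A typical way to build an exception table that satisfies these rules (table names taken from the earlier scenario; the added column names are illustrative) is to copy the base table definition and append the two optional columns:

```
# Copy the column definitions of the table being loaded. If the base
# table has an identity or other generated column, exclude those
# attributes so the exception table has plain columns:
db2 "create table cal.parexp like cal.par"

# Append the optional timestamp and message columns:
db2 "alter table cal.parexp
     add column load_ts timestamp
     add column load_msg clob(32k)"
```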
(Figure: lock timelines for the two LOAD access modes. With ALLOW NO ACCESS, the load requests a super exclusive lock, existing read/write applications drain, the lock is then granted and held for the duration of the load, and read/write access resumes after the load commits. With ALLOW READ ACCESS, readers continue to access the table during the load, and full read/write access resumes after the load commits.)
Notes:
In most cases, the LOAD utility uses table-level locking to restrict access to tables. The
LOAD utility does not quiesce the table spaces involved in the load operation, and uses
table space states only for load operations with the COPY NO option specified. The level of
locking depends on whether the load operation allows read access. A load operation in
ALLOW NO ACCESS mode will use an exclusive lock (Z-lock) on the table for the duration
of the load. A load operation in ALLOW READ ACCESS mode acquires and maintains a
share lock (S-lock) for the duration of the load operation, and upgrades the lock to an
exclusive lock (Z-lock) when data is being committed.
Before a load operation in ALLOW READ ACCESS mode begins, the Load utility will wait
for all applications that began before the load operation to release locks on the target table.
Since locks are not persistent, they are supplemented by table states that will remain even
if a load operation is aborted. These states can be checked by using the LOAD QUERY
command. By using the LOCK WITH FORCE option, the LOAD utility will force applications
holding conflicting locks off the table into which it is trying to load.
ALLOW NO ACCESS is the default option on the LOAD utility.
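As a hedged sketch, the two access modes might be requested as follows (file and table names are illustrative):

   db2 "LOAD FROM staff.ixf OF IXF INSERT INTO staff"
   db2 "LOAD FROM staff.ixf OF IXF INSERT INTO staff ALLOW READ ACCESS"
   db2 "LOAD FROM staff.ixf OF IXF INSERT INTO staff ALLOW READ ACCESS LOCK WITH FORCE"

The first command runs in the default ALLOW NO ACCESS mode; the last adds LOCK WITH FORCE to force off applications holding conflicting locks.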
6-30 DB2 10 for LUW: Basic Admin for AIX
Table states
(Load Pending, Set Integrity Pending)

LOAD QUERY TABLE <table-name>

Tablestate:
  Normal
  Set Integrity Pending
  Load in Progress
  Load Pending
  Reorg Pending
  Read Access Only
  Unavailable
  Not Load Restartable
  Unknown
Notes:
In addition to locks, the LOAD utility uses table states to control access to tables. A table
state can be checked by using the LOAD QUERY command. The states returned by the
LOAD QUERY command are as follows:
Normal: No table states affect the table.
Set Integrity Pending: The table has constraints which have not yet been verified. Use
the SET INTEGRITY statement to take the table out of Set Integrity Pending state. The
LOAD utility places a table in Set Integrity Pending state when it begins a load operation
on a table with constraints.
Load in Progress: There is a load operation in progress on this table.
Load Pending: A load operation has been active on this table but has been aborted
before the data could be committed. Issue a LOAD TERMINATE, LOAD RESTART, or
LOAD REPLACE command to bring the table out of this state.
Read Access Only: The table data is available for read access queries. Load
operations using the ALLOW READ ACCESS option place the table in read access only
state.
Reorg Pending: A REORG-recommended ALTER TABLE statement has been executed
on the table. A classic (offline) REORG must be performed before the table is
fully accessible again.
Unavailable: The table is unavailable. The table can only be dropped or restored from
a backup. Rolling forward through a non-recoverable load operation will place a table in
the unavailable state.
Not Load Restartable: The table is in a partially loaded state that will not allow a load
restart operation. The table will also be in Load Pending state. Issue a LOAD
TERMINATE or a LOAD REPLACE command to bring the table out of the Not Load
Restartable state. A table is placed in Not Load Restartable state when a roll forward
operation is performed after a failed load operation that has not been successfully
restarted or terminated, or when a restore operation is performed from an online backup
that was taken while the table was in Load in Progress or Load Pending state. In either
case, the information required for a load restart operation is unreliable, and the Not
Load Restartable state prevents a load restart operation from taking place.
Unknown: The LOAD QUERY command is unable to determine the table state.
A table can be in several states at the same time. For example, if data is loaded into a table
with constraints and the ALLOW READ ACCESS option is specified, the table state would
be:
Tablestate:
Set Integrity Pending
Load in Progress
Read Access Only
After the load operation but before issuing the SET INTEGRITY statement, the table state
would be:
Tablestate:
Set Integrity Pending
Read Access Only
After the SET INTEGRITY statement has been issued, the table state would be:
Tablestate:
Normal
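The sequence above could be driven with commands like these (file and table names are illustrative):

   db2 "LOAD FROM staff.ixf OF IXF INSERT INTO staff ALLOW READ ACCESS"
   db2 "LOAD QUERY TABLE staff"
   db2 "SET INTEGRITY FOR staff IMMEDIATE CHECKED"
   db2 "LOAD QUERY TABLE staff"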
Number of rows read      = 51450
Number of rows skipped   = 0
Number of rows loaded    = 51450
Number of rows rejected  = 0
Number of rows deleted   = 0
Number of rows committed = 51450
Number of warnings       = 0

Tablestate:
  Load Pending
Copyright IBM Corporation 2012
Notes:
The LOAD QUERY command can be used to determine the table state.
LOAD QUERY can be used on tables that are not currently being loaded. For a partitioned
table, the state reported is the most restrictive of the corresponding visible data partition
states.
For example, if a single data partition is in the Read Access Only state and all other data
partitions are in Normal state, the load query operation returns the Read Access Only state.
A load operation will not leave a subset of data partitions in a state different from the rest of
the table.
The sample LOAD QUERY report shows that a table was being loaded and the load failed
to complete because there was not enough free space available in the table space. The
table remains in Load Pending state.
Phase Number    = 2
Description     = LOAD
Total Work      = 10000 rows
Completed Work  = 10000 rows
Start Time      = 05/12/2012 02:49:07.057958

Phase Number    = 3
Description     = BUILD
Total Work      = 2 indexes
Completed Work  = 2 indexes
Start Time      = 05/12/2012 02:49:07.36690
Notes:
The LIST UTILITIES command displays to standard output the list of active utilities on the
instance. The description of each utility can include attributes such as start time,
description, throttling priority (if applicable), as well as progress monitoring information (if
applicable).
One of the following authorizations is needed:
sysadm
sysctrl
sysmaint
sysmon
Syntax:
>>-LIST UTILITIES--+-------------+-----------------------------><
                   '-SHOW DETAIL-'
For active LOAD utilities, the LIST UTILITIES command can be used to see which phase of
LOAD processing is the current phase. During the LOAD phase, you can track the number
of rows processed. In the BUILD phase you can see how many indexes have been
completed.
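For example, a load in progress could be monitored from a second session with:

   db2 "LIST UTILITIES SHOW DETAIL"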
Notes:
If the LOAD utility cannot start because of a user error, such as a nonexistent data file or
invalid column names, it will terminate and leave the table in a normal state.
If a failure occurs while loading data, you can restart the load operation from the last
consistency point (using the RESTART option), reload the entire table (using the
REPLACE option), or terminate the load (using the TERMINATE option). Specify the same
parameters as in the previous invocation so that the utility can find the necessary
temporary files.
A load operation that specified the ALLOW READ ACCESS option can be restarted using
either the ALLOW READ ACCESS option or the ALLOW NO ACCESS option. A load
operation that specified the ALLOW NO ACCESS option can only be restarted using the
ALLOW NO ACCESS option. If the index object is unavailable or marked invalid, a load
restart or terminate in ALLOW READ ACCESS mode will not be permitted. If the original
load operation was aborted in the Index Copy phase, a restart operation in the ALLOW
READ ACCESS mode is not permitted because the index might be corrupted.
If a load operation in ALLOW READ ACCESS mode was aborted in the Load phase, it will
restart in the Load phase. If it was aborted in any phase other than the Load phase, it will
restart in the Build phase. If the original load operation was in ALLOW NO ACCESS mode,
a restart operation might occur in the Delete phase if the original load operation reached
that point and the index is valid. If the index is marked invalid, the Load utility will restart the
load operation from the Build phase.
Load REPLACE deletes all existing data from the table and inserts the loaded data. Using
load REPLACE and specifying an empty input file will truncate the table.
Terminating the load will roll back the operation to the point in time at which it started, even
if consistency points were passed. The states of any tables involved in the operation return
to normal, and all table objects are made consistent (index objects might be marked as
invalid, in which case index rebuild will automatically take place at the next access). If the
load operation being terminated is a Load REPLACE, the table will be truncated to an
empty table after the Load TERMINATE operation. If the load operation being terminated is
a Load INSERT, the table will retain all of its original records after the Load TERMINATE
operation.
The Load operation writes temporary files onto the server into a subdirectory of the
database directory by default (the location can be changed using a TEMPFILES PATH
temp-pathname parameter on the LOAD command). The temporary files written to this
path are removed when the load operation completes without error. These temporary files
must not be tampered with under any circumstances. Doing so will cause the load
operation to malfunction, and will place your database in jeopardy.
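For example, after a failed load, one of the following could be issued with the same parameters as the original invocation (file and table names are illustrative):

   db2 "LOAD FROM staff.ixf OF IXF RESTART INTO staff"
   db2 "LOAD FROM staff.ixf OF IXF TERMINATE INTO staff"
   db2 "LOAD FROM staff.ixf OF IXF REPLACE INTO staff"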
COPY YES: Load has made a copy
NONRECOVERABLE: No copy made and no backup required

Backup image Type field:
  0 = Full Backup
  3 = Table Space Backup
  4 = Copy from Table Load
Notes:
If a load operation with the COPY NO option is executed in a recoverable database, the
table spaces associated with the load operation are placed in the Backup pending table
space state and the Load in progress table space state. This takes place at the beginning
of the load operation. The load operation might be delayed at this point while locks are
acquired on the tables within the table space.
When a table space is in Backup pending state, it is still available for read access. The
table space can only be taken out of Backup pending state by taking a backup of the table
space. Even if the load operation is aborted, the table space will remain in Backup pending
state because the table space state is changed at the beginning of the load operation, and
cannot be rolled back if it fails. The Load in Progress table space state prevents online
backups of a load operation with the COPY NO option specified while data is being loaded.
The Load in Progress state is removed when the load operation is completed or aborts.
Note that if the database is a recoverable database, then terminating the load will not
eliminate the requirement to make a backup of the loaded table space.
When loading data, if forward recovery is enabled, there are three options to consider:
1. COPY NO (default) specifies that the table space in which the table resides will be
placed in backup pending state if forward recovery is enabled (that is, logretain or
userexit is on). COPY NO will also put the table space state into the Load in Progress
table space state. This is a transient state that will disappear when the load completes
or aborts. The data in any table in the table space cannot be updated or deleted until a
table space backup or a full database backup is made. However, it is possible to access
the data in any table by using the SELECT statement.
2. COPY YES on the LOAD specifies that a copy of the changes made will be saved. This
copy is used during roll-forward recovery to recreate the changes to the database done
by LOAD. This option is invalid if forward recovery is disabled. COPY YES slows the
LOAD utility. If you are loading a lot of tables, you might want to load all of the tables in
the table space, then back it up.
3. NONRECOVERABLE specifies that the load transaction is to be marked as
non-recoverable and that it will not be possible to recover it by a subsequent roll forward
action. The roll forward utility will skip the transaction and will mark the table into which
data was being loaded as invalid. The utility will also ignore any subsequent
transactions against that table. After the roll forward operation is completed, such a
table can only be dropped or restored from a backup (full or table space) taken after a
commit point following the completion of the non-recoverable load operation.
With this option, table spaces are not put into backup pending state following the load
operation, and a copy of the loaded data does not have to be made during the load
operation. This can be used to enable loading several tables in a table space before
performing a backup of the table space. It can also be used with tables that are always
loaded with LOAD... REPLACE since they could be recovered by reloading.
If you do not create a copy, the LOAD will execute more quickly. You must also allow for the
disk space that an additional backup copy would take.
The name of the backup file has entries for the Type field to indicate what it represents.
There are three options:
1. 0 for full database backup
2. 3 for table space backup
3. 4 for Copy from Table Load
If load is used with the nonrecoverable option, there is no requirement to take a backup.
During a roll forward operation through a LOAD command with the COPY NO option
specified, the associated table spaces are placed in Restore pending state. To remove the
table spaces from Restore pending state, a restore operation must be performed. A roll
forward operation will only place a table space in the Restore pending state if the load
operation completed successfully.
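As a hedged sketch, the three options might look as follows (database, table space, file, and path names are illustrative):

   db2 "LOAD FROM staff.ixf OF IXF REPLACE INTO staff COPY NO"
   db2 "BACKUP DATABASE sample TABLESPACE (userspace1) ONLINE"
   db2 "LOAD FROM staff.ixf OF IXF REPLACE INTO staff COPY YES TO /dbcopy"
   db2 "LOAD FROM staff.ixf OF IXF REPLACE INTO staff NONRECOVERABLE"

The BACKUP command after the COPY NO load is what takes the table space out of Backup pending state.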
Notes:
Following a load operation, the loaded table might be in Set Integrity Pending state in either
READ or NO ACCESS mode if the table has table check constraints or referential integrity
constraints defined on it. If the table has descendent Foreign Key tables, they might also be
in Set Integrity Pending state.
If the loaded table has descendent tables, the SET INTEGRITY PENDING CASCADE
parameter can be specified to indicate whether the Set Integrity Pending state of the
loaded table should be immediately cascaded to the descendent materialized query tables
or descendent staging tables. SET INTEGRITY PENDING CASCADE does not apply to
descendent Foreign Key tables. If the loaded table has constraints as well as descendent
Foreign Key tables, and if all of the tables are in normal state prior to the load operation, the
following will result based on the load parameters specified:
INSERT, ALLOW READ ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE or DEFERRED
The loaded table will be placed in Set Integrity Pending state with read access. Descendent
Foreign Key tables will remain in their original states.
SET INTEGRITY syntax (simplified):

  SET INTEGRITY FOR table-name [, table-name]... OFF

  SET INTEGRITY FOR table-name [, table-name]... IMMEDIATE CHECKED
      [FOR EXCEPTION IN table-name USE table-name
          [, IN table-name USE table-name]...]

  SET INTEGRITY FOR table-name [ALL | FOREIGN KEY | CHECK]
      [, table-name [ALL | FOREIGN KEY | CHECK]]... IMMEDIATE UNCHECKED
Notes:
To remove the set integrity pending state, use the SET INTEGRITY statement. The SET
INTEGRITY statement checks a table for constraints violations, and takes the table out of
set integrity pending state. If all the load operations are performed in INSERT mode, the
SET INTEGRITY statement can be used to incrementally process the constraints (that is, it
checks only the appended portion of the table for constraints violations).
For example:
db2 load from infile1.ixf of ixf insert into table1
db2 set integrity for table1 immediate checked
Only the appended portion of TABLE1 is checked for constraint violations. Checking only
the appended portion for constraints violations is faster than checking the entire table,
especially in the case of a large table with small amounts of appended data.
Information
In IBM Data Studio Version 3.1 or later, you can use the task assistant for setting integrity.
Task assistants can guide you through the process of setting options, reviewing the
automatically generated commands to perform the task, and running these commands.
If a table is loaded with the SET INTEGRITY PENDING CASCADE DEFERRED option
specified, and the SET INTEGRITY statement is used to check for integrity violations, the
descendent tables are placed in set integrity pending state with no access. To take the
tables out of this state, you must issue an explicit request.
Figure: SET INTEGRITY with exception tables. Tables CAL.PAR (parent, primary key) and CAL.FOR (dependent, foreign key with a check constraint) are both loaded; SYSCAT.TABLES shows the STATUS and ACCESS_MODE values for each. After SET INTEGRITY runs, rows that violate constraints are moved to the exception tables PAREXP and FOREXP, each appended row carrying the optional timestamp and msg columns.
Notes:
The STATUS flag of the SYSCAT.TABLES entry corresponding to the loaded table
indicates the Set Integrity Pending state of the table. For the loaded table to be fully usable,
the STATUS must have a value of N and the ACCESS MODE must have a value of F,
indicating that the table is in normal state and fully accessible.
In the example on the graphic, both CAL.PAR and CAL.FOR have been loaded. They both
indicate that constraints need to be checked (STATUS = C). CAL.PAR is a parent table,
and CAL.FOR is its dependent. CAL.FOR also has a check constraint defined on one of its
columns. In this case, both tables are in Set Integrity Pending status because they have
both been loaded.
The SET INTEGRITY statement is executed to check the constraints on both tables.
CAL.FOR had two rows that did not have parent keys in CAL.PAR, so those rows were
moved to the FOREXP exception table. The CAL.PAR table did not have any check
constraint violations, so there are no rows added to the PAREXP table as a result of this
SET INTEGRITY statement.
To remove the Set Integrity Pending state, use the SET INTEGRITY statement. The SET
INTEGRITY statement checks a table for constraint violations, and takes the table out of
Set Integrity Pending state. If all the load operations are performed in INSERT mode, the
SET INTEGRITY statement can be used to incrementally process the constraints (that is, it
will check only the appended portion of the table for constraint violations). For example:
db2 load from infile1.ixf of ixf insert into table1
db2 set integrity for table1 immediate checked
Only the appended portion of TABLE1 is checked for constraint violations. Checking only
the appended portion for constraint violations is faster than checking the entire table,
especially in the case of a large table with small amounts of appended data.
If a table is loaded with the SET INTEGRITY PENDING CASCADE DEFERRED option
specified, and the SET INTEGRITY statement is used to check for integrity violations on
the parent table, the descendent tables will be placed in Set Integrity Pending state with no
access when the SET INTEGRITY statement is issued. To take the descendent tables out
of this state, you must issue an explicit SET INTEGRITY request on the descendent tables.
If the ALLOW READ ACCESS option is specified for a load operation, the table will remain
in read access state until the SET INTEGRITY statement is used to check for constraint
violations. Applications will be able to query the table for data that existed prior to the load
operation once it has been committed, but will not be able to view the newly loaded data
until the SET INTEGRITY statement has been issued.
Several load operations can take place on a table before checking for constraint violations.
If all of the load operations are completed in ALLOW READ ACCESS mode, only the data
that existed in the table prior to the first load operation will be available for queries.
One or more tables can be checked in a single invocation of this statement. If a dependent
table is to be checked on its own, the parent table cannot be in Set Integrity Pending state.
Otherwise, both the parent table and the dependent table must be checked at the same
time. In the case of a referential integrity cycle, all the tables involved in the cycle must be
included in a single invocation of the SET INTEGRITY statement. It might be convenient to
check the parent table for constraint violations while a dependent table is being loaded.
This can only occur if the two tables are not in the same table space.
Use the load exception table option to capture information about rows with constraint
violations.
The SET INTEGRITY statement does not activate any DELETE triggers as a result of
deleting rows that violate constraints, but once the table is removed from Set Integrity
Pending state, triggers are active. Thus, if we correct data and insert rows from the
exception table into the loaded table, any INSERT triggers defined on the table will be
activated. The implications of this should be considered. One option is to drop the INSERT
trigger, insert rows from the exception table, and then recreate the INSERT trigger.
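For example, both tables in the earlier visual could be checked in one invocation, capturing violating rows in their exception tables:

   db2 "SET INTEGRITY FOR cal.par, cal.for IMMEDIATE CHECKED FOR EXCEPTION IN cal.par USE parexp, IN cal.for USE forexp"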
Notes:
The visual reviews a sequence of steps that might be used in loading tables with the LOAD
utility.
Sorting the input data for a LOAD allows you to control the sequence of the data rows in the
table and reduces the need to reorganize the table after loading data.
The LOAD utility options PAGEFREESPACE, INDEXFREESPACE and
TOTALFREESPACE can be used in the MODIFIED BY clause to set allocations for free
space by the LOAD utility.
When using the LOAD utility with the REPLACE option, the STATISTICS option can be
used to request the collection of statistics during load processing. This can be used to
avoid needing to run a RUNSTATS command following the LOAD processing. By default,
LOAD will not collect new table statistics.
If you specify STATISTICS USE PROFILE, this instructs load to collect statistics during the
load according to the profile defined for this table. This profile must be created before load
is executed. The profile is created by the RUNSTATS command. If the profile does not exist
and load is instructed to collect statistics according to the profile, a warning is returned and
no statistics are collected.
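For example (table and file names are illustrative):

   db2 "RUNSTATS ON TABLE cal.staff WITH DISTRIBUTION AND INDEXES ALL SET PROFILE ONLY"
   db2 "LOAD FROM staff.ixf OF IXF REPLACE INTO cal.staff STATISTICS USE PROFILE"

The first command registers a statistics profile without collecting statistics; the LOAD then collects statistics according to that profile.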
db2move utility

Facilitates the moving/copying of large numbers of tables.

LOAD: The db2move.lst file is used to load the PC/IXF data files
created in the EXPORT step.
Notes:
This tool, when used in the EXPORT/IMPORT/LOAD mode, facilitates the movement of
large numbers of tables between DB2 databases located on workstations. The tool queries
the system catalog tables for a particular database and compiles a list of all user tables. It
then exports these tables in PC/IXF format. The PC/IXF files can be imported or loaded to
another local DB2 database on the same system, or can be transferred to another
workstation platform and imported or loaded to a DB2 database on that platform. Tables
with structured type columns are not moved when this tool is used. When used in the
COPY mode, this tool facilitates the duplication of a schema.
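For example, to move all user tables from one database to another (database names are illustrative):

   db2move dbsrc EXPORT
   db2move dbtgt LOAD

The EXPORT action creates the PC/IXF data files plus the db2move.lst file that the LOAD action consumes.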
Notes:
The db2move utility allows you to quickly make copies of a database schema. Once a
model schema is established, you can use it as a template for creating new versions.
Use the db2move utility with the -co COPY action to copy a single schema or multiple
schemas from a source database to a target database. Most database objects from the
source schema are copied to the target database under the new schema.
There are options of db2move that allow the creation of the objects, with or without the
data. The schema name can be the same or different in the target database. You can also
adjust the table spaces used to contain the copied objects in the target database.
The LOAD utility is used to move the object data from the source database to the target
database.
The db2move utility attempts to copy all allowable schema objects except for the following
types:
table hierarchy
staging tables (not supported by the load utility in multiple partition database
environments)
jars (Java routine archives)
nicknames
packages
view hierarchies
object privileges (All new objects are created with default authorizations)
statistics (New objects do not contain statistics information)
index extensions (user-defined structured type related)
user-defined structured types and their transform functions
Database dbsrc --> db2move --> Database dbtgt

Output files generated:
  COPYSCHEMA.msg
  COPYSCHEMA.err
  LOADTABLE.msg
  LOADTABLE.err
Notes:
Example 1:
To duplicate schema schema1 from the source database dbsrc to the target database
dbtgt, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt
USER myuser1 USING mypass1
Example 2:
To duplicate schema schema1 from the source database dbsrc to the target database
dbtgt and rename the schema to newschema1 on the target database, map the source
table space ts1 to ts2 on the target database, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt
USER myuser1 USING mypass1
SCHEMA_MAP ((schema1,newschema1))
TABLESPACE_MAP ((ts1,ts2), SYS_ANY)
Notes:
Using the ADMIN_MOVE_TABLE procedure, you can move tables by using an online or
offline move. Use an online table move instead of an offline table move if you value
availability more than cost, space, move performance, and transaction overhead.
The ADMIN_MOVE_TABLE procedure allows you to move the data in a table to a new
table object of the same name (but with possibly different storage characteristics) while the
data remains online and available for access.
This utility automates the process of moving table data to a new table object while allowing
the data to remain online for select, insert, update, and delete access.
You can also generate a compression dictionary when a table is moved and reorganize the
target table.
Figure: ADMIN_MOVE_TABLE phases. (1) Triggers, the target table, and a staging table are created. (2) COPY phase: rows (c1..cn) are copied from the source table to the target table, while the keys of rows changed by the online workload (INSERT, UPDATE, DELETE) are captured via triggers into the staging table. (3) REPLAY phase: rows whose keys are present in the staging table are re-copied from the source table to the target table. (4) SWAP phase: the tables are exchanged.
Notes:
When you call the SYSPROC.ADMIN_MOVE_TABLE procedure, a shadow copy of the
source table is created (Phase 1 in the visual).
During the Copy phase, changes to the source table (updates, insertions, or deletions) are
captured using triggers and placed in a staging table (Phase 2 in the visual).
After the Copy phase is completed, the changes captured in the staging table are replayed
to the shadow copy (Phase 3 in the visual).
Following that, the stored procedure briefly takes the source table offline and assigns the
source table name and index names to the shadow copy and its indexes (Phase 4 in the
visual). The shadow table is then brought online, replacing the source table. By default, the
source table is dropped, but you can use the KEEP option to retain it under a different
name.
Avoid performing online moves for tables without indexes, particularly unique indexes.
Performing an online move for a table without a unique index might result in deadlocks and
complex or expensive replay.
Notes:
There are two methods of calling ADMIN_MOVE_TABLE.
One method specifies how the target table is to be defined.
>>-ADMIN_MOVE_TABLE--(--tabschema--,--tabname--,---------------->
>--data_tbsp--,--index_tbsp--,--lob_tbsp--,--mdc_cols--,-------->
                                           .-,-------.
                                           V         |
>--partkey_cols--,--data_part--,--coldef--,----options-+--,----->

>--operation--)------------------------------------------------><
The second method allows a predefined table to be specified as the target for the move.
>>-ADMIN_MOVE_TABLE--(--tabschema--,--tabname--,---------------->
                    .-,-------.
                    V         |
>--target_tabname--,----options-+--,--operation--)-------------><
Notes:
The example shows a call to the ADMIN_MOVE_TABLE procedure to move a table to a
new set of table spaces. In this call there are no other changes requested for the table.
The CALL does include the option COPY_USE_LOAD, which tells the procedure to use
the LOAD utility in the COPY phase rather than using SQL INSERTS. The MOVE option
tells DB2 to complete all of the processing phases as quickly as possible.
The procedure output includes statistics and start and end times for each processing
phase.
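The call described above might look like the following sketch (schema, table, and table space names are illustrative; empty strings accept the defaults for parameters that are not being changed):

   db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('CAL','STAFF','DATA_TS2','INDEX_TS2','LOB_TS2','','','','','COPY_USE_LOAD','MOVE')"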
Notes:
The ingest utility (sometimes referred to as continuous data ingest, or CDI), which became
available with DB2 10.1, is a high-speed client-side DB2 utility that streams data from files
and pipes into DB2 target tables. Because the ingest utility can move large amounts of
real-time data without locking the target table, you do not need to choose between data
currency and availability.
The ingest utility ingests pre-processed data directly or from files output by ETL tools or
other means. It can run continually and thus it can process a continuous data stream
through pipes. The data is ingested at speeds that are high enough to populate even large
databases in partitioned database environments.
An INGEST command updates the target table with low latency in a single step. The ingest
utility uses row locking, so it has minimal interference with other user activities on the same
table.
With this utility, you can perform DML operations on a table using a SQL-like interface
without locking the target table. These ingest operations support the following SQL
statements: INSERT, UPDATE, MERGE, REPLACE, and DELETE.
Copyright IBM Corp. 1999, 2012
The ingest utility also supports the use of SQL expressions to build individual column
values from more than one data field.
Other important features of the ingest utility include:
Commit by time or number of rows. You can use the commit_count ingest configuration
parameter to have commit frequency determined by the number of rows written, or use
the default commit_period ingest configuration parameter to have commit frequency
determined by a specified period of time.
Support for copying rejected records to a file or table, or discarding them. You can
specify what the INGEST command does with rows rejected by the ingest utility (using
the DUMPFILE parameter) or by DB2 (using the EXCEPTION TABLE parameter).
Support for restart and recovery. By default, all INGEST commands are restartable from
the last commit point. In addition, the ingest utility attempts to recover from certain
errors if you have set the retry_count ingest configuration parameter.
The INGEST command supports the following input data formats:
Delimited text
Positional text and binary
Columns in various orders and formats
In addition to regular tables and nicknames, the INGEST command supports the following
table types:
multidimensional clustering (MDC) and insert time clustering (ITC) tables
range-partitioned tables
range-clustered tables (RCT)
materialized query tables (MQTs) that are defined as MAINTAINED BY USER, including
summary tables
temporal tables
updatable views (except typed views)
A single INGEST command goes through three major phases:
1. Transport
The transporters read from the data source and put records on the formatter
queues. For INSERT and MERGE operations, there is one transporter thread for
each input source (for example, one thread for each input file). For UPDATE and
DELETE operations, there is only one transporter thread.
2. Format
The formatters parse each record, convert the data into the format that DB2
database systems require, and put each formatted record on one of the flusher
queues for that record's partition. The number of formatter threads is specified by
the num_formatters configuration parameter.
The default is (number of logical CPUs)/2.
3. Flush
The flushers issue the SQL statements to perform the operations on the DB2 tables.
The number of flushers for each partition is specified by the
num_flushers_per_partition configuration parameter.
The default is max( 1, ((number of logical CPUs)/2)/(number of partitions) ).
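As a quick illustration, the default thread counts described above can be computed as follows. This is an illustrative helper, not part of DB2; only the two formulas come from the text.

```python
def default_ingest_threads(logical_cpus: int, partitions: int) -> dict:
    """Illustrative: compute the documented default thread counts
    for the ingest utility's format and flush phases."""
    # num_formatters default: (number of logical CPUs) / 2
    formatters = logical_cpus // 2
    # num_flushers_per_partition default:
    # max(1, ((number of logical CPUs) / 2) / (number of partitions))
    flushers = max(1, (logical_cpus // 2) // partitions)
    return {"num_formatters": formatters,
            "num_flushers_per_partition": flushers}
```

For example, on an 8-CPU client ingesting into a 2-partition database, the defaults would work out to 4 formatter threads and 2 flushers per partition.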
Notes:
The visual shows an example of an INGEST command that inserts data from a delimited
text file with fields separated by a comma (the default).
The fields in the file correspond to the table columns.
INGEST FROM FILE my_file.txt
FORMAT DELIMITED
(
$field1 INTEGER EXTERNAL,
$field2 DATE 'mm/dd/yyyy',
$field3 CHAR(32)
)
INSERT INTO my_table
VALUES($field1, $field2, $field3);
Notes:
The visual shows an example of an INGEST command that could be used to update the
rows of a table whose primary key matches the corresponding fields in the input file.
INGEST FROM FILE my_file.txt
FORMAT DELIMITED
(
$key1 INTEGER EXTERNAL,
$key2 INTEGER EXTERNAL,
$data1 CHAR(8),
$data2 CHAR(32),
$data3 DECIMAL(5,2) EXTERNAL
)
UPDATE my_table
SET (data1, data2, data3) = ($data1, $data2, $data3)
WHERE (key1 = $key1) AND (key2 = $key2);
Notes:
The INGEST LIST command lists basic information about INGEST commands that are
being run by the authorization ID that is connected to the database. The ingest utility
maintains statistics for a maximum of 128 ingest jobs running at a time.
Important
A separate CLP session is required to successfully invoke this command. It must be run on
the same machine that the INGEST command is running on.
The INGEST GET STATS command displays statistics about one or more INGEST
commands that are being run by the authorization ID that is connected to the database.
The ingest utility maintains statistics for a maximum of 128 ingest jobs running at a time.
Notes:
There are several requirements that may make using the INGEST utility preferable to using
the LOAD utility.
One key requirement would be to avoid the read-only or exclusive table-level locking
used by the LOAD utility.
The INGEST command supports a variety of SQL operations, including insert, update,
merge, replace, and delete. In addition, you can use SQL expressions to build individual
column values from more than one data field.
By default, failed INGEST commands are restartable from the last commit point;
however, you must first create a restart table; otherwise, you receive an error message
notifying you that the command you issued is not restartable. The ingest utility uses this
table to store the information needed to resume an incomplete INGEST command from the
last commit point.
The ingest utility allows the input records to contain extra fields between the fields that
correspond to columns.
Notes:
You may choose to use a LOAD utility when the following conditions are present:
You can add new data to a table at a time when no applications need to update the
table.
The input data being loaded includes XML or LOB columns.
You want to define the input using a SQL SELECT statement and use a CURSOR-based
load operation.
The input file is in the IXF format, which is not supported by INGEST.
You want data in the input file to be used to populate columns defined as GENERATED
ALWAYS or SYSTEM_TIME.
Unit summary
Having completed this unit, you should be able to:
Discuss using the INSERT SQL statement to populate tables
Explain the differences between IMPORT and LOAD processing
Explain the EXPORT, IMPORT, and LOAD command options
Create and use Exception Tables and Dump-Files
Check table status using LOAD QUERY
Describe Load Pending and Set Integrity Pending status for a table
Use the SET INTEGRITY command
Discuss the db2move and db2look commands
Use the ADMIN_MOVE_TABLE procedure to move a table to different
table spaces
List some of the features of the Ingest utility for continuous data ingest
Student exercise
References
Data Recovery and High Availability Guide and Reference
Unit objectives
After completing this unit, you should be able to:
Describe the major principles and methods for backup and
recovery
State the three types of recovery used by DB2
Explain the importance of logging for backup and recovery
Describe how data logging takes place, including circular
logging and archival logging
Use the BACKUP, RESTORE, ROLLFORWARD and
RECOVER commands
Perform a table space backup and recovery
Restore a database to the end of logs or to a point-in-time
Discuss the configuration parameters and the recovery history
file and use these to handle various backup and recovery
scenarios
Copyright IBM Corporation 2012
Notes:
These are the objectives for this unit.
[Figure: a DB2 database with its table spaces, log files (Log 1 through Log 6), and a
database backup image, illustrating the three recovery types: 1: Crash Recovery,
2: Version Recovery, 3: Roll Forward Recovery. Version recovery restores the database
from the backup image (the database at 3 PM); roll forward recovery then reapplies the logs.]
Notes:
You need to know the strategies available to you to help when there are problems with the
database. Typically, you will deal with media and storage problems, power interruptions,
and application failures. You need to know that you can back up your database, or
individual table spaces, and then rebuild them should they be damaged or corrupted. The
rebuilding of these objects is called recovery. There are three types or methods of
recovery:
- Crash
- Version (or Restore)
- Roll Forward
Crash Recovery
Crash Recovery uses the logs to recover from power interrupts or application abends.
This type of recovery is normally automated by using the default value for the
configuration parameter autorestart. If autorestart is not used, manual intervention
would be necessary to restart a database requiring crash recovery.
Introduction to logging
[Figure: requests for reads and writes flow through the buffer pool; insert, update, and
delete activity writes old and new row images to the log buffer, which is externalized to
the log files when a commit occurs or the log buffer fills.]
Notes:
Transaction logging is the process that records each change to a database to permit
recovery.
The database manager maintains a log of recent changes made to a database so that the
recovery process can restore the database to a consistent state. The log records are
written in a log buffer to reflect the activity occurring against data in the buffer pool. The
data that is logged includes all column data on an insert or delete, but only from
first-changed-column to last-changed-column on an update.
At commit, the log records MUST be written from the log buffer to the log files on disk in
order to guarantee recoverability. The application issuing the commit will not receive
confirmation that the commit has successfully completed until the log buffer has been
written.
It is possible that log records will be written to disk before a commit, for example when the
log buffer fills up. However, this does not impact the integrity of the system, because the
execution of a commit itself is logged. A unit of work that has started and is externalized to
the log file is not considered complete until the commit associated with that unit of work is
also externalized to the log file.
The pages in the buffer pool changed by a unit of work that has been committed do not
need to be written to disk at commit. The log records contain the information required to
recover such changes in the event of a recovery situation. It is desirable to leave pages in
the buffer pool if the data they contain is accessed frequently.
Log Write Ahead (LWA) is another reason to externalize the content of the log buffer.
The LWA protocol makes sure that crash recovery can rollback changes made by
uncommitted transactions even if the non-committed table data had been written back to
disk (which is possible even if infrequent), because it guarantees that in that case the log
records needed for the rollback will have been externalized.
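The commit and log-write-ahead rules above can be sketched as a toy log buffer. This is purely illustrative (names and structure are ours, and real DB2 logging is far more involved): records collect in memory and are forced out when the buffer fills or a transaction commits.

```python
class ToyLogBuffer:
    """Toy model of the log buffer described above: log records
    collect in memory and are externalized to 'disk' when the
    buffer fills or when a transaction commits."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity   # stand-in for LOGBUFSZ
        self.buffer = []           # in-memory log records
        self.disk = []             # externalized log records

    def log(self, record: str) -> None:
        self.buffer.append(record)
        if len(self.buffer) >= self.capacity:   # log buffer full
            self.flush()

    def commit(self, txid: str) -> None:
        # The commit itself is logged; the buffer is then forced to
        # disk before the application is told the commit succeeded.
        self.log(f"COMMIT {txid}")
        self.flush()

    def flush(self) -> None:
        self.disk.extend(self.buffer)
        self.buffer.clear()
```

In this sketch an uncommitted change may sit in memory only, but by the time `commit` returns, both the change record and the commit record are on "disk", which is what guarantees recoverability.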
If you are logging large object (LOB) data, you have to consider the impact to performance.
If you turn logging on for LOB data, your application's performance will deteriorate and you
might encounter problems related to the increased size of the log file. If you turn the logging
off, your application's performance improves, however its recoverability is sacrificed.
Unlike regular table data, updated LOBs are forced (externalized) at commit.
There are several reasons for this (LOB data is not buffered), but among other things, it
helps explain why non-logged LOBs do not cause a problem for crash recovery; also, when
a LOB is updated, the old copy is not overlaid and is kept around until commit, so rolling
back LOBs simply reinstates the old copy. In summary, log records are never needed to
roll back, or to perform crash recovery of, LOB data (they are needed for normal rollforward
processing).
[Figure: circular logging using "n" primary log files, with "n" secondary log files
allocated as needed.]
Notes:
Circular logging uses the number of primary log files specified via a configuration
parameter. The logs retain the information needed for in-process transactions. The log files are
used in sequence. A log file can be reused when all units of work contained within it are
committed or rolled back, and the committed changes are reflected on the disks supporting
the database.
If the database manager requests the next log in the sequence and it is not available for
reuse, a secondary log file will be allocated. After it is full, the next primary log file is
checked for reuse again. If it is still not available, another secondary log file is allocated.
This process continues until the primary log file becomes available for reuse or the number
of secondary log files permitted for allocation is exceeded.
Primary log files are allocated when the database is created, while secondary log files are
allocated as needed. Secondary log files are deallocated once the database manager
determines that they are no longer needed. Therefore, the database administrator might
elect to use the primary log files for typical processing, but permit the allocation of
secondary log files to permit periodic applications that have large units of work. For
example, submission of an IMPORT utility with a large commit count might require the use
of secondary log files. Supporting such applications via primary logs would be wasteful of
space, since all primary logs, whether used or not, are allocated when the database is
activated.
If DB2 cannot continue logging due to a log full condition, database activity will halt until a
log file becomes available.
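The allocation behavior described above amounts to a simple decision each time the current log file fills. The sketch below is illustrative only; the names are ours, not DB2's.

```python
def allocate_next_log(primary_reusable: bool,
                      secondaries_in_use: int,
                      logsecond: int) -> str:
    """Illustrative: what circular logging does when the current
    log file fills and the next one in sequence is needed."""
    if primary_reusable:
        # all units of work in the next primary have completed
        return "reuse primary"
    if secondaries_in_use < logsecond:
        # allocate a secondary and check the primary again later
        return "allocate secondary"
    # neither a primary nor another secondary is available
    return "log full"
```

The "log full" branch corresponds to the condition above where database activity halts until a log file becomes available.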
The logpath should be able to contain the sum of the primary and secondary logs.
LOGPRIMARY + LOGSECOND must be less than or equal to 256. LOGFILSIZ has a
maximum of 1 million 4 KB pages. The total active log file size limit is 1024 GB.
The number of primary and secondary log files must comply with the following:
If logsecond has a value of -1, logprimary <= 256.
If logsecond does not have a value of -1, (logprimary + logsecond) <= 256.
Important
Circular logging provides support for crash and version/restore recovery, but does NOT
support roll forward recovery.
[Figure: log files numbered in sequence, grouped into three categories:]
ACTIVE: contains information for noncommitted or nonexternalized transactions.
ONLINE ARCHIVE: contains information for committed and externalized transactions;
stored in the ACTIVE log subdirectory.
OFFLINE ARCHIVE: archived logs moved from the ACTIVE log subdirectory (may also be
on other media).
Notes:
The second type of logging supported by DB2 is archival logging (log retention logging),
where log files are not reused.
When a log becomes full, another log file is allocated. Usually, the database administrator
will configure several primary log files so that a log file being allocated is not immediately
needed for logging. (Allocation is done ahead of the need for the file.) The number of log
files allocated when the database is created is specified by the number of primary logs.
This type of logging is enabled through configuration parameters, LOGARCHMETH1 and
LOGARCHMETH2, highlighted later in this unit.
If DB2 allocates the sum of primary and secondary logs as defined in the database
configuration file and requires additional space for logging, a log full condition will result.
This situation can be caused by an application that attempts to process too large a unit of
work. It can also be caused by a configuration that allocates too few log files, or log files
that are too small to handle the work load.
The logpath (or newlogpath) should be able to contain two times the sum of the primary
and secondary logs plus 1. The maximum total log file size limit is 1024 GB (that is, the
number of log files (LOGPRIMARY + LOGSECOND) multiplied by the size of each logfile in
bytes (LOGFILSIZ * 4096) must be less than 1024 GB). LOGPRIMARY + LOGSECOND
must be less than or equal to 256. LOGFILSIZ has a maximum of 1,048,572 4 KB pages.
The number of primary and secondary log files must comply with the following:
If logsecond has a value of -1, logprimary <= 256.
If logsecond does not have a value of -1, (logprimary + logsecond) <= 256.
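The limits above can be collected into a small checking helper. This is illustrative only (the function is ours, not a DB2 API); logfilsiz is expressed in 4 KB pages, as in the database configuration.

```python
def valid_log_configuration(logprimary: int, logsecond: int,
                            logfilsiz: int) -> bool:
    """Illustrative check of the documented active-log limits."""
    if logsecond == -1:                       # infinite logging
        return logprimary <= 256
    if logprimary + logsecond > 256:
        return False
    if logfilsiz > 1_048_572:                 # max 4 KB pages per file
        return False
    # total active log space must be less than 1024 GB
    total_bytes = (logprimary + logsecond) * logfilsiz * 4096
    return total_bytes < 1024 * 1024 ** 3
```

For example, 20 primary plus 40 secondary files of 4096 pages each is well within the limits, while 200 primary plus 100 secondary files violates the 256-file rule.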
There are three types of log files associated with log retention logging:
1. Active: These files contain information related to transactions that have not yet
committed (or rolled back) work. They also contain information for transactions that
have been committed, but whose changes have not yet been written to the database
files. (The changes could be in the buffer pool.)
The active log supports crash recovery.
2. Online Archive: These files contain information related to completed transactions that
no longer require crash recovery protection. They are termed online because they
reside in the same subdirectory as the active log files.
3. Offline archive: These files have been moved from the active log file subdirectory. The
method of moving these files could be a manual process or a process invoked through
a user exit.
Note
When using archival (log retention) logging, DB2 will truncate and close the last
log file written to free up space when the last application disconnects from the
database and the database deactivates. This is a positive feature when the
database is to be inactive for some period of time. However, if an installation has a
low level of activity and there are short periods where no application will be
connected to the database, it will be costly to truncate the last active log and then
reallocate primary log files when a new application connects.
The database administrator should consider using the ACTIVATE DATABASE command.
This command will keep the database active and prevent log file truncation. However, the
administrator must remain aware of the impact on recovery. If the database is truly not
being used for an extended period of time, preventing log file truncation will also prevent
the most recent log information from being archived. This will make the recovery point for
the database to be less than the most recent unit of work if a failure on the log disk occurs.
The DEACTIVATE DATABASE command can be specified to allow log file truncation to
occur for databases on which the ACTIVATE command has been used.
[Figure: the default log location NODE0000/SQL0000n/LOGSTREAM0000 holding
S0000000.LOG through S0000002.LOG, a newlogpath directory (for example
/2nd/choice/LOGSTREAM0000) holding S0000003.LOG through S0000005.LOG, and a
mirrorlogpath directory holding a second copy of the log files.]
Notes:
By default, log files are located in LOGSTREAM0000, which is a subdirectory of the
database directory. It is not generally good practice to store log files on the same physical
device as the database files for which they provide recovery support.
The location of log files currently in use is identified by the informational configuration
parameter Path to log files.
The NEWLOGPATH database configuration parameter allows the administrator to redirect
logging support to a specified path. The new path does not become active until the
database becomes inactive and the database is in a consistent state. (A database might be
in an inconsistent state due to an incomplete recovery process. This simply means that all
units of work are not complete. The informational database configuration parameter
Database is consistent contains this status.) When the database becomes active again, the
value in NEWLOGPATH will be used to identify the new location of the log files.
The situation illustrated is not generally desirable. In the illustration, a database was
created and used before the log path was changed. Therefore, log files exist in the default
directory LOGSTREAM0000. At some point in time, while S0000002.LOG was the active
log file, the database administrator updated the configuration file to indicate a
NEWLOGPATH of /usr/your/choice. Assuming all applications completed units of work and
disconnected from the database before allocation of S0000003.LOG, the logpath was
changed to /usr/your/choice/LOGSTREAM0000 at the time a new application connected to
the database. The file S0000003.LOG was allocated in the new location. This scenario can
cause subsequent recovery processes to be more complex than necessary. If a log path
change is desired, change the log path BEFORE using the database in order to direct the
logs to a device that does not contain database files.
If a change to the log path is required after a database has been used, create a database
backup after changing the log path.
A recovery strategy involving a change to log path during the recovery process is not
recommended.
DB2 supports log mirroring at the database level. Mirroring log files helps protect a
database from:
Accidental deletion of an active log
Data corruption caused by hardware failure
If you are concerned that your active logs might be damaged (as a result of a disk crash),
you should consider using the DB2 configuration parameter, MIRRORLOGPATH, to specify
a secondary path for the database to manage copies of the active log, mirroring the
volumes on which the logs are stored.
The MIRRORLOGPATH configuration parameter allows the database to write an identical
second copy of log files to a different path. It is recommended that you place the secondary
log path on a physically separate disk (preferably one that is also on a different disk
controller). That way, the disk controller cannot be a single point of failure.
When MIRRORLOGPATH is first enabled, it will not actually be used until the next
database startup. This is similar to the NEWLOGPATH configuration parameter.
Mirrored logs
Dual or mirrored logging provides a way to maintain mirror copies of both primary and
secondary logs; this capability was added in DB2 Version 8. If either the primary or the
secondary log becomes corrupt (but not both), or if the device where either logs is stored
becomes unavailable, the database can still be accessed.
Mirrored logs should be kept on different disk drives and preferably on different disk
controllers (and, certainly, different RAID controllers). Since log files are written serially,
dedicated drives should be preferred as other disk action on the same drive can reduce
performance.
Mirrored logging is enabled by setting the MIRRORLOGPATH database configuration
parameter to a path where the mirror logs are to be located.
If there is an error writing to either the active log path or the mirror log path, the database
will mark the failing path as bad, write a message to the administration notification log, and
write subsequent log records to the remaining good log path only. DB2 will not attempt to
use the bad path again until the current log file is completed. When DB2 needs to open the
next log file, it will verify that this path is valid, and if so, will begin to use it. If not, DB2 will
not attempt to use the path again until the next log file is accessed for the first time. There
is no attempt to synchronize the log paths, but DB2 keeps information about access errors
that occur, so that the correct paths are used when log files are archived. If a failure occurs
while writing to the remaining good path, the database shuts down.
[Figure: database logging configuration parameters — log file size (LOGFILSIZ), log
buffer (LOGBUFSZ), primary log archive method (LOGARCHMETH1), and secondary log
archive method (LOGARCHMETH2).]
Notes:
You can use the Information Center or the product documentation to review the detailed
options for database logging. Here are some of the basic options:
Log archive method 1 (logarchmeth1), log archive method 2 (logarchmeth2)
These parameters cause the database manager to archive log files to a location that is
not the active log path. If you specify both of these parameters, each log file from the
active log path that is set by the logpath configuration parameter is archived twice. This
means that you will have two identical copies of archived log files from the log path in
two different destinations. If you specify mirror logging by using the mirrorlogpath
configuration parameter, the logarchmeth2 configuration parameter archives log files
from the mirror log path instead of archiving additional copies of the log files in the
active log path. This means that you have two separate copies of the log files archived
in two different destinations: one copy from the log path and one copy from the mirror
log path.
Log Buffer (logbufsz)
This parameter allows you to specify the amount of memory to use as a buffer for log
records before writing these records to disk. The log records are written to disk when
any one of the following events occurs:
- A transaction commits
- The log buffer becomes full
- Some other internal database manager event occurs.
Increasing the log buffer size can result in more efficient input/output (I/O) activity
associated with logging, because the log records are written to disk less frequently and
more records are written each time. However, recovery can take longer with a larger log
buffer size value. You may also be able to use a higher logbufsz setting to reduce the
number of reads from the log disk. (To determine whether your system would benefit from
this, use the log_reads monitor element to check whether reading from the log disk is
significant.)
If the primary log files become full, secondary log files (of size logfilsiz) are allocated,
one at a time as needed, up to the maximum number specified by the logsecond
parameter. If logsecond is set to -1, the database is configured with infinite active log space. There is
no limit on the size or number of in-flight transactions running on the database. Infinite
active logging is useful in environments that must accommodate large jobs requiring
more log space than you would normally allocate to the primary logs.
Notes:
Recovery History file
A recovery history file is created with each database and is automatically updated
whenever operations such as a database or table space backup, restore, or roll forward
are performed.
Notes:
Use the BACKUP DATABASE command to take a copy of the database data and store it
on a different medium. This backup data can then be used in the case of a failure or
damage to the original data. You can back up an entire database, database partition, or
only selected table spaces.
You do not need to be connected to the database that is to be backed up: the backup
database utility automatically establishes a connection to the specified database, and this
connection is terminated at the completion of the backup operation. If you are connected to
a database that is to be backed up, you will be disconnected when the BACKUP
DATABASE command is issued and the backup operation will proceed.
The database can be local or remote. The backup image remains on the database server,
unless you are using a storage management product such as Tivoli Storage Manager
(TSM) or DB2 Advanced Copy Services (ACS).
If you are performing an offline backup and if you have activated the database by using the
ACTIVATE DATABASE command, you must deactivate the database before you run the
offline backup.
If there are active connections to the database, in order to deactivate the database
successfully, a user with SYSADM authority must connect to the database, and issue the
following commands:
CONNECT TO database-alias
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS;
UNQUIESCE DATABASE;
TERMINATE;
DEACTIVATE DATABASE database-alias
Tape images are not named, but internally contain the same
information in the backup header for verification purposes
Backup history provides key information in easy-to-use format
Copyright IBM Corporation 2012
Notes:
On all operating systems, file names for backup images created on disk consist of a
concatenation of several elements, separated by periods:
DB_alias.Type.Inst_name.DBPARTnnn.timestamp.Seq_num
For example:
STAFF.0.DB201.DBPART000.19950922120112.001
- Database alias - A 1- to 8-character database alias name that was specified when
the backup utility was invoked.
- Type - Type of backup operation, where: 0 represents a full database-level backup,
3 represents a table space-level backup, and 4 represents a backup image
generated by the LOAD COPY TO command.
- Instance name - A 1- to 8-character name of the current instance that is taken from
the DB2INSTANCE environment variable.
RESTORE DATABASE command syntax (simplified from the railroad diagram):

RESTORE DATABASE | DB database-alias
  [USER username [USING password]]
  [TABLESPACE [ONLINE] | TABLESPACE (tablespace-name, ...) [ONLINE] | HISTORY FILE [ONLINE]]
  [TAKEN AT date-time]
  [USE TSM [OPEN num-sessions SESSIONS] | FROM directory-or-device, ... | LOAD shared-library [OPEN num-sessions SESSIONS]]
  [TO target-directory] [INTO target-database-alias] [NEWLOGPATH directory]
  [WITH num-buffers BUFFERS] [BUFFER buffer-size]
  [REPLACE EXISTING] [REDIRECT]
  [WITHOUT ROLLING FORWARD] [WITHOUT PROMPTING]

RESTORE DATABASE | DB database-alias CONTINUE | ABORT
Notes:
The simplest form of the DB2 RESTORE DATABASE command requires only that you
specify the alias name of the database that you want to restore.
For example:
db2 restore db sample
In this example, because the SAMPLE database exists and will be replaced when the
RESTORE DATABASE command is issued, the following message is returned:
SQL2539W Warning! Restoring to an existing database that is the same as
the backup image database. The database files will be deleted.
Do you want to continue ? (y/n)
If you specify y, the restore operation should complete successfully.
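To suppress this prompt, for example in scripts, the restore can be run with the REPLACE EXISTING option (database name illustrative):
db2 restore db sample replace existing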
A database restore operation requires an exclusive connection: that is, no applications can
be running against the database when the operation starts, and the restore utility prevents
other applications from accessing the database until the restore operation completes
successfully. A table space restore operation, however, can be done online.
A table space is not usable until the restore operation (possibly followed by rollforward
recovery) completes successfully.
If you have tables that span more than one table space, you should back up and restore
the set of table spaces together.
When doing a partial or subset restore operation, you can use either a table space-level
backup image, or a full database-level backup image and choose one or more table spaces
from that image. All the log files associated with these table spaces from the time that the
backup image was created must exist.
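As a sketch, a table space-level restore followed by the required roll forward might look like this (the table space names are illustrative):
db2 restore db sample tablespace (tbsp1, tbsp2) online
db2 rollforward db sample to end of logs and stop tablespace (tbsp1, tbsp2) online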
You can restore a database from a backup image taken on a 32-bit level into a 64-bit level,
but not vice versa.
The DB2 backup and restore utilities should be used to back up and restore your
databases. Moving a fileset from one machine to another is not recommended, as this can
compromise the integrity of the database.
Under certain conditions, you can use transportable sets with the RESTORE DATABASE
command to move databases.
Information
In IBM Data Studio Version 3.1 or later, you can use the task assistant for restoring
database backups. Task assistants can guide you through the process of setting options,
reviewing the automatically generated commands to perform the task, and running these
commands. For more details, see Administering databases with task assistants.
Notes:
To use table space level backup and restore, the database must use archive logging.
A subset of table spaces can be restored from a database or table space backup image.
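For example, archive logging can be enabled and a table space backup taken as follows (the archive path is illustrative). Note that switching from circular logging places the database in backup pending state until a full offline backup is taken:
db2 update db cfg for sample using LOGARCHMETH1 DISK:/db2/archlogs
db2 backup db sample to /db2/backups
db2 backup db sample tablespace (tbsp1) online to /db2/backups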
One of the key reasons to use table space level recovery is to reduce the time of backup
and restore. This is accomplished by reducing the amount of data involved in these
processes. However, the database administrator should not strive to simply minimize the
amount of data in a backup. Although this could be accomplished by placing a single table
in its own table space, this would be likely to lead to a management problem concerning
backup and recovery.
If the table spaces associated with closely related tables are all contained in a single
backup, only the applications targeting the closely related tables are affected. In many
cases, such applications would be affected even if a single table in the group was
unavailable. This is especially true in the case of referentially constrained tables. You
should consider grouping the table spaces that support referential structures in a single
backup image.
The key is to reduce the amount of data involved in an eventual recovery while controlling
the impact on the management of the backup/restore strategy.
LOB data and long field data can be placed in a table space that is separate from the
regular data. This is often desirable from a recovery standpoint because the frequency of
taking backups of the LOB data can be much less than that of the regular data. The nature
of LOB data is that it tends not to be updated frequently and it is large. However, if a
REORG of such a table and its LOB data is one of the actions recorded in the log files
through which you are rolling forward, you must have all table spaces relating to the table in
the backup, defeating one of your reasons to separate the data. The solution is to establish
new backups of the table spaces associated with such tables AFTER a REORG that
includes the LOB data.
Point-in-time recovery during roll forward is supported.
If the catalog tables are involved in a recovery situation, access to the entire database is
impacted. Therefore, it is good practice to maintain table space level backups of your
catalog tables. If you need to recover the catalog, the duration of the process will be
reduced if you can restore a table space backup instead of a database backup.
Application tables considered critical to the business should also be considered prime
candidates for table space recovery. The major reason is the reduction in downtime
provided through the table space backup/restore strategy.
Notes:
Roll forward pending is a state used by DB2 to protect the integrity of the database. This
state indicates that a roll-forward process is necessary to ensure consistency of the data.
Roll forward pending can occur either at the database or at the table space level.
If the database is in a roll-forward pending state, no activity to the database is allowed. If a
table space is in a roll forward pending state, only that table space is unusable.
A database will be put into a roll-forward pending state when:
Restoring an OFFLINE DATABASE backup and omitting the option WITHOUT
ROLLING FORWARD. This applies only to a database using archive logging.
Restoring an ONLINE DATABASE backup.
A table space will be put into a roll forward pending state when a table space is restored.
Under some conditions, DB2 might detect a media failure and isolate it at the table space
level.
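The current roll forward status of a database, including any pending state, can be checked with:
db2 rollforward db sample query status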
Notes:
Authorization for this command requires SYSADM, SYSCTRL, or SYSMAINT.
ROLLFORWARD applies transactions recorded in the database log files.
The command needs to be invoked after a database or a table space backup has been
restored, or if any table spaces have been taken offline by the database due to a media
error.
Restore is the first phase of a complete roll forward recovery of a database or table space.
After a successful database restore, a database that was configured for roll forward
recovery at the time the backup was taken enters a roll forward pending state, and is not
usable until the ROLLFORWARD command has been run successfully. If the restore was
for a table space, the table spaces restored are placed in a roll forward pending state.
When the ROLLFORWARD DATABASE command is issued, if the database is in a
roll forward pending state, the database is rolled forward. If the database is not in a
roll forward pending state, all table spaces in the database in the roll forward pending state
are processed.
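For example, to roll forward a restored database through all available logs and make it usable again:
db2 rollforward db sample to end of logs and stop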
7-30 DB2 10 for LUW: Basic Admin for AIX
Another database RESTORE is not allowed when the roll forward process is running.
Note
If you restored from a full OFFLINE database backup image, you can bypass the
roll-forward pending state during the recovery process. The RESTORE
DATABASE command gives you the option to use the restored database
immediately WITHOUT ROLLING FORWARD the database.
You CANNOT bypass the roll-forward phase when recovering at the table space level or if
you restore from a backup image that was created using the ONLINE option of the
BACKUP DATABASE command.
Notes:
The database administrator can clear a ROLLFORWARD PENDING condition by issuing
the ROLLFORWARD command. The point in time to which the roll forward stage proceeds
is also controllable by the administrator: the roll forward can run to the end of logs or to a
specific point in time. A point in time specified in the command can be expressed in
Coordinated Universal Time (UTC) or in the local time of the server.
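For example (the timestamp is illustrative):
db2 rollforward db sample to 2012-09-22-12.01.12 using local time and stop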
The integrity of the database must be protected, therefore the earliest point in time at which
the roll forward stage can end is the end of the online backup image.
Table space point-in-time recovery must protect the integrity of the relationships that exist
between tables in the table spaces. These relationships include referentially constrained
tables and single tables that have objects contained in multiple table spaces. A minimum
roll forward time is kept at the table space level and can be displayed with the LIST
TABLESPACES command. A backup is required following a point-in-time recovery of a
table space.
Figure: The RECOVER DATABASE command combines information from the recovery history file, the archived logs, and the backup images.
Notes:
The RECOVER DATABASE command
The RECOVER DB command uses the information in the history file to determine the
backup image to use for the required point in time. The user does not need to specify a
particular backup image. If the required backup image is an incremental backup,
RECOVER invokes automatic incremental logic to perform the restore. If table spaces
are put into a restore pending state during the database rollforward, these table spaces
must be resolved through additional RESTORE and ROLLFORWARD commands.
If a PIT is requested, but the earliest backup image in the history file is later than the
requested PIT, the RECOVER command will return an error. Otherwise, the backup image in
the history file with the latest backup time prior to the requested PIT is used to restore the
database.
If END OF LOGS is requested, the most recent database backup image in the history file is
used.
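For example (the timestamp is illustrative):
db2 recover db sample to end of logs
db2 recover db sample to 2012-09-22-12.01.12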
The RECOVER command performs only database-level recovery; no table space recovery
options are included. Some options of RESTORE and ROLLFORWARD are not provided
by RECOVER. For example, the REDIRECT option of RESTORE is not included, so the
database must be recovered using the same containers defined in the backup image.
The default will be used for these options unless otherwise stated below.
Single Partition: If a point in time is specified, the PIT info must exist in the history file.
Multi-Partitions: If a point in time is specified, all nodes must have info for the required
PIT. If not, there will be no recover operation performed on any node.
Multi-Partitions: RECOVER must be issued from the catalog node. Any prompting from
either the RESTORE or ROLLFORWARD phase is returned to the catalog node, and the
prompt must be answered there. The existing RESTORE/ROLLFORWARD prompts
('c'/'d'/'t') are used (see below).
Note
WITHOUT PROMPTING is the default for the RESTORE phase when using RECOVER.
USING LOCAL TIME is the default for the ROLLFORWARD phase. This is different from
the ROLLFORWARD command itself, but was chosen as the default because it is the more
natural usage from a customer point of view. The user must specify USING UTC TIME to have the same
behavior as ROLLFORWARD.
If the RECOVER completes with no errors, the rollforward STOP/COMPLETE logic will be
performed. If rollforward is started but does not reach the desired end of log/PIT, then
STOP/COMPLETE processing will not be performed.
If the backup image selected for use is an incremental backup, DB2 will invoke automatic
incremental code to perform the database restore. If there is an error completing the
INCREMENTAL AUTO restore, DB2 will perform an internal "incremental abort" to end the
restore operation.
Using the REBUILD WITH option of RESTORE simplifies creating a full or partial
copy of a database using either database or table space backup images
Notes:
The term disaster recovery is used to describe the activities that need to be done to restore
the database in the event of a fire, earthquake, vandalism, or other catastrophic events. A
plan for disaster recovery can include one or more of the following:
A site to be used in the event of an emergency
A different machine on which to recover the database
Offsite storage of database backups and archived logs
If your plan for disaster recovery is to recover the entire database on another machine, you
require at least one full database backup and all the archived logs for the database. You
might choose to keep a standby database up to date by applying the logs to it as they are
archived. Or, you might choose to keep the database backup and log archives in the
standby site, and perform restore and roll forward operations only after a disaster has
occurred. (In this case, a recent database backup is clearly desirable.) With a disaster,
however, it is generally not possible to recover all of the transactions up to the time of the
disaster.
The usefulness of a table space backup for disaster recovery depends on the scope of the
failure. Typically, disaster recovery requires that you restore the entire database; therefore,
a full database backup should be kept at a standby site. Even if you have a separate
backup image of every table space, you cannot use them to recover the database. If the
disaster is a damaged disk, a table space backup of each table space on that disk can be
used to recover. If you have lost access to a container because of a disk failure (or for any
other reason), you can restore the container to a different location.
Both table space backups and full database backups can have a role to play in any disaster
recovery plan. The DB2 facilities available for backing up, restoring, and rolling data
forward provide a foundation for a disaster recovery plan. You should ensure that you have
tested recovery procedures in place to protect your business.
Figure: HADR configuration - a primary DB2 database server and a standby DB2 database server connected by a direct database-to-database TCP/IP link; the primary copy of the DB2 database is replicated to the standby copy.
Notes:
The DB2 Data Server High Availability Disaster Recovery (HADR) feature is a database
replication feature that provides a high availability solution for both partial and complete site
failures. HADR protects against data loss by replicating data changes from a source
database, called the Primary, to a target database, called the Standby.
HADR might be your best option if most or all of your database requires protection, or if you
perform DDL operations that must be automatically replicated on the standby database.
Applications can only access the current primary database. Updates to the standby
database occur by rolling forward log data that is generated on the primary database and
shipped to the standby database.
A partial site failure can be caused by a hardware, network, or software (DB2 database
system or operating system) failure. Without HADR, a partial site failure requires
restarting the database management system (DBMS) server that contains the
database. The length of time it takes to restart the database and the server where it
resides is unpredictable. It can take several minutes before the database is brought
back to a consistent state and made available. With HADR, the standby database can
take over in seconds. Further, you can redirect the clients that were using the original
primary database to the standby database (new primary database) by using automatic
client reroute or retry logic in the application.
A complete site failure can occur when a disaster, such as a fire, causes the entire site
to be destroyed. Because HADR uses TCP/IP for communication between the primary
and standby databases, they can be situated in different locations. For example, your
primary database might be located at your head office in one city, while your standby
database is located at your sales office in another city. If a disaster occurs at the
primary site, data availability is maintained by having the remote standby database take
over as the primary database with full DB2 functionality. After a takeover operation
occurs, you can bring the original primary database back up and return it to its primary
database status; this is known as failback.
With HADR, you can choose the level of protection you want from potential loss of data by
specifying one of three synchronization modes: synchronous, near synchronous, or
asynchronous.
Figure: HADR multiple standby mode - the primary connects to one principal standby using any synchronization mode, and to auxiliary standby databases.
HADR feature in multiple standby mode allows up to three standby databases to be configured
One Standby is designated the principal HADR standby database
Any additional standby database is an auxiliary HADR standby database
Both types of HADR standbys:
Are synchronized with the HADR primary database through a direct TCP/IP connection
Support reads on standby
Can issue a forced or non-forced takeover
Other HADR enhancements included in DB2 10.1
Log spooling on the Standby database
Delayed replay for a Standby database
Figure 7-19. DB2 10.1 support for multiple active standby databases
Notes:
Starting with DB2 10.1, the HADR function supports multiple standby databases.
In an HADR with multiple standbys environment, all of the standby databases are directly
connected to the primary. The databases are not daisy chained/cascading from each other.
Each of the standbys in a multiple standby environment supports the Reads on Standby
feature.
A takeover (whether it be forced or non-forced) is supported from any standby. In other
words, any of the standbys can become the primary through a takeover. After the takeover
occurs, the database configuration parameters on the other standbys
(HADR_REMOTE_HOST, HADR_REMOTE_SVC, and HADR_REMOTE_INST) will be
automatically updated to point to the new primary.
Several other HADR enhancements were made available with DB2 10.1. These can be
used in either single or multiple standby modes.
HADR log spooling
The high availability disaster recovery (HADR) log spooling feature allows transactions
on primary to make progress without having to wait for the log replay on the standby.
When this feature is enabled, log data sent by the primary is spooled, or written, to disk
on the standby, and that log data is later read by log replay.
Log spooling, which is enabled by setting the hadr_spool_limit database configuration
parameter, is an improvement to the HADR feature. When replay is slow, it is possible
that new transactions on the primary can be blocked because it is not able to send log
data to the standby system if there is no room in the buffer to receive the data. The log
spooling feature means that the standby is not limited by the size of its buffer. When
there is an increase in data received that cannot be contained in the buffer, the log
replay reads the data from disk. This allows the system to better tolerate either a spike
in transaction volume on the primary, or a slow down of log replay (due to the replay of
particular type of log records) on the standby.
This feature could potentially lead to a larger gap between the log position on the
primary and the log replay on standby, which can lead to longer takeover time. You
should consider your spool limit setting carefully because the standby cannot start up
as the new primary and receive transactions until the replay of the spooled logs has
finished.
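As an illustrative sketch, log spooling might be enabled with a limit of 100 GB; the value is specified in 4 KB pages, and the figure chosen here is an assumption, not a recommendation:
db2 update db cfg for sample using HADR_SPOOL_LIMIT 25600000
A value of -1 allows unlimited spooling, and 0 disables the feature.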
HADR delayed replay
HADR delayed replay helps prevent data loss due to errant transactions. To implement
HADR delayed replay, set the hadr_replay_delay database configuration parameter on
the HADR standby database.
Delayed replay intentionally keeps the standby database at a point in time that is earlier
than that of the primary database by delaying replay of logs on that standby. If an errant
transaction is executed on the primary, you have until the configured time delay has
elapsed to take action to prevent the errant transaction from being replayed on the
standby. To recover the lost data, you can either copy this data back to the primary, or
you can have the standby take over as the new primary database.
Delayed replay works by comparing timestamps in the log stream, which is generated
on the primary, and the current time of the standby. As a result, it is important to
synchronize the clocks of the primary and standby databases. Transaction commit is
replayed on the standby according to the following equation:
(current time on the standby - value of the hadr_replay_delay configuration
parameter) >= timestamp of the committed log record
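For example, to keep the standby database one hour behind the primary (the parameter value is in seconds):
db2 update db cfg for sample using HADR_REPLAY_DELAY 3600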
standby mode, you should not enable IBM Tivoli System Automation for Multiplatforms
because the takeover will fail.
NO SCRIPTING REQUIRED!
One set of embedded scripts that are used by all cluster managers.
Notes:
The DB2 High Availability (HA) Feature enables integration between IBM Data Server and
cluster managing software.
When you stop a database manager instance in a clustered environment, you must make
your cluster manager aware that the instance is stopped. If the cluster manager is not
aware that the instance is stopped, the cluster manager might attempt an operation such
as failover on the stopped instance. The DB2 High Availability (HA) Feature provides
infrastructure for enabling the database manager to communicate with your cluster
manager when instance configuration changes, such as stopping a database manager
instance, require cluster changes.
The DB2 HA Feature is composed of the following elements:
IBM Tivoli System Automation for Multiplatforms (SA MP or TSA) is bundled with IBM
Data Server on AIX and Linux as part of the DB2 High Availability (HA) Feature, and
integrated with the DB2 installer. You can install, upgrade, or uninstall SA MP using
either the DB2 installer or the installSAM and uninstallSAM scripts that are included in
the IBM Data Server install media.
Notes:
Configuring and administering the database instances and the cluster manager manually is
complex, time-consuming, and prone to error. The DB2 High Availability (HA) Feature
provides infrastructure for enabling the database manager to communicate with your
cluster manager when instance configuration changes, such as stopping a database
manager instance, require cluster changes.
Procedure:
1. Install cluster managing software.
SA MP is integrated with DB2 Enterprise Server Edition, DB2 Workgroup Server
Edition, DB2 Connect Enterprise Server Edition and DB2 Connect Application Server
Edition on AIX, Linux, and Solaris SPARC operating systems. It is also integrated with
DB2 Express-C Fixed Term License (FTL) and the DB2 High Availability Feature for
Express Edition on Linux operating systems. On Windows operating systems, the SA
MP is bundled with all of these DB2 database products and features, but it is not
integrated with the DB2 installer.
2. Configure IBM Data Server database manager instances for your cluster manager, and
configure your cluster manager for IBM Data Server.
DB2 High Availability Instance Configuration Utility (db2haicu) is a text based utility that
you can use to configure and administer your highly available databases in a clustered
environment. db2haicu collects information about your database instance, your cluster
environment, and your cluster manager by querying your system. You supply more
information through parameters to the db2haicu call, an input file, or at runtime by
providing information at db2haicu prompts.
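For example, db2haicu can be run interactively or driven by a prepared XML input file (the file name here is illustrative):
db2haicu
db2haicu -f db2haicu-input.xml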
Over time, as your database needs change and you need to modify your database
configuration within the clustered environment, continue to keep the database manager
instance configuration and the cluster manager configuration synchronized.
The DB2 High Availability (HA) Feature provides infrastructure for enabling the database
manager to communicate with your cluster manager when instance configuration changes,
such as stopping a database manager instance, require cluster changes.
Whether you use db2haicu with SA MP, or you use another cluster manager that supports
the DB2 cluster manager API, administering your clustered environment with the DB2 HA
Feature is easier than maintaining the database manager configuration and the cluster
configuration separately.
Notes:
Some other database recovery facilities include:
On-demand log archiving - the ARCHIVE LOG command
Infinite active logs - setting LOGSECOND to -1 to allow unlimited secondary logs
Block transactions on log directory full - the blk_log_dsk_ful database configuration
option.
Split mirror database copies:
- SET WRITE SUSPEND/RESUME commands
- db2inidb command modes:
SNAPSHOT: Database copy for testing or reporting
STANDBY: Database copy to create standby database for quick recovery
MIRROR: Use split mirror database copy instead of RESTORE
Incremental and delta database and table space backups
Relocating a database or a table space
- RESTORE UTILITY with REDIRECT option
- db2relocatedb command
Full and partial database REBUILD support
Integrated Cluster Failover support - simple configuration of DB2 databases for a highly
available cluster using the IBM TSA for multi-platforms product.
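A few of these facilities can be sketched with commands; the database name is illustrative, and the storage-level split is assumed to be done outside DB2:
db2 archive log for db sample
db2 update db cfg for sample using LOGSECOND -1
db2 connect to sample
db2 set write suspend for database
(split the disk mirror at the storage level)
db2 set write resume for database
On the machine holding the split mirror copy, the copy is then initialized, for example:
db2inidb sample as snapshot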
Notes:
You, or other people in your organization, might desire additional details. Please visit
www.ibm.com/services/learning.
DB2 for LUW Advanced Database Recovery (CL492)
Overview:
Gain a deeper understanding of the advanced recovery features of DB2 for Linux,
UNIX, and Windows environments with multiple partition databases. Get practical
experience in the planning and utilization of a wide variety of DB2 recovery facilities, in
a series of database recovery scenarios you complete during exercises using DB2
Enterprise Server Edition.
Skills taught:
Gain a better understanding of the unique recovery planning requirements for DB2
ESE.
Explore the DB2 recovery facilities and database configuration options.
Plan the implementation of a user exit for archival of database logs.
Unit summary
Having completed this unit, you should be able to:
Describe the major principles and methods for backup and
recovery
State the three types of recovery used by DB2
Explain the importance of logging for backup and recovery
Describe how data logging takes place, including circular
logging and archival logging
Use the BACKUP, RESTORE, ROLLFORWARD and
RECOVER commands
Perform a table space backup and recovery
Restore a database to the end of logs or to a point-in-time
Discuss the configuration parameters and the recovery history
file and use these to handle various backup and recovery
scenarios
Notes:
Student exercise
Notes:
References
Troubleshooting and Tuning Database Performance
Command Reference
Database Administration Concepts and Configuration Reference
Unit objectives
After completing this unit, you should be able to:
- Plan the use of RUNSTATS, REORGCHK and REORG utilities for maintaining database efficiency
- Configure the DB2 instance to set the location for diagnostic data and message severity levels for basic problem analysis
- Describe the methods that can be used for monitoring database and application activity, including db2pd commands, Event Monitors and using SQL statements to access statistics
- Describe the function of EXPLAIN and use this facility to assist basic analysis
- Use the db2advis command to analyze a workload for potential performance improvements
- Use the db2fodc command to collect diagnostic data for a system hang
Notes:
Here are the objectives for this lecture unit.
Notes:
Accurate table and index statistics are very important input to the DB2 optimizer that
selects efficient access plans for processing application requests. The RUNSTATS
command can be used to collect new table and index statistics. If a table is growing in size
or a new index is added, it is important to update the statistics to reflect these changes. A
DB2 database can be configured to automatically select table and indexes that need new
statistics.
The REORG utility can be used to reorganize tables and indexes to improve the efficiency
of the storage and also to reduce access costs. For example, if many rows are deleted from
a table, a large number of pages in the table may contain few or no data rows. The
REORG utility can rebuild the table to utilize fewer pages, which saves disk space and
allows the table to be scanned with fewer I/O operations. The REORGCHK command can
be used to check a series of indicators and recommend which tables or indexes would
benefit from reorganization. The REORG utility can be used to implement compression for
existing tables and indexes and also to rebuild the compression dictionary to reflect the
current table data contents.
Collect table statistics on 1.5 percent of the data pages and index
statistics on 2.5 percent of the index pages. Both table data pages and
index pages are sampled
RUNSTATS ON TABLE employee AND INDEXES ALL TABLESAMPLE SYSTEM(1.5)
INDEXSAMPLE SYSTEM(2.5)
Notes:
The runstats utility collects the following information about tables and indexes:
- The number of pages that contain rows
- The number of pages that are in use
- The number of rows in the table (the cardinality)
- The number of rows that overflow
- For multidimensional clustering (MDC) and insert time clustering (ITC) tables, the number of blocks that contain data
- For partitioned tables, the degree of data clustering within a single data partition
- Data distribution statistics, which are used by the optimizer to estimate efficient access plans for tables and statistical views whose data is not evenly distributed and whose columns have a significant number of duplicate values
- Detailed index statistics, which are used by the optimizer to determine how efficient it is to access table data through an index
- Subelement statistics for LIKE predicates, especially those that search for patterns within strings (for example, LIKE %disk%), which are also used by the optimizer
The visual shows a few examples of RUNSTATS commands that can collect table and
index statistics.
To collect basic statistics on the table and all indexes using sampling for the detailed index
statistics collection:
RUNSTATS ON TABLE employee AND SAMPLED DETAILED INDEXES ALL
For large tables, you may decide to use sampling to collect statistics rather than processing the full set of data. For example, to collect table statistics on 1.5 percent of the data pages and index statistics on 2.5 percent of the index pages, the following RUNSTATS command could be used (both table data pages and index pages are sampled):
RUNSTATS ON TABLE employee AND INDEXES ALL TABLESAMPLE SYSTEM(1.5)
INDEXSAMPLE SYSTEM(2.5)
In some cases, distribution statistics can be collected for tables that have a non-uniform distribution of data values. For example, the following command collects statistics on a table, with distribution statistics on the columns empid, empname, and empdept and on the two indexes Xempid and Xempname. Distribution statistics limits are set individually for empdept, while the other two columns use a common default:
RUNSTATS ON TABLE employee
WITH DISTRIBUTION ON COLUMNS (empid, empname, empdept NUM_FREQVALUES
50 NUM_QUANTILES 100)
DEFAULT NUM_FREQVALUES 5 NUM_QUANTILES 10
AND INDEXES Xempid, Xempname
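The clause ordering in these RUNSTATS variations can be assembled programmatically. Below is a minimal sketch of a hypothetical helper (not part of DB2; the function name and defaults are invented for illustration) that builds the command strings shown above:

```python
def build_runstats(table, tablesample=None, indexsample=None,
                   distribution_columns=None, indexes="ALL"):
    """Assemble a DB2 RUNSTATS command string (illustrative helper only)."""
    parts = [f"RUNSTATS ON TABLE {table}"]
    if distribution_columns:
        # The WITH DISTRIBUTION clause precedes the AND INDEXES clause
        parts.append("WITH DISTRIBUTION ON COLUMNS (%s)"
                     % ", ".join(distribution_columns))
    if indexes == "ALL":
        parts.append("AND INDEXES ALL")
    elif indexes:
        parts.append("AND INDEXES " + ", ".join(indexes))
    if tablesample is not None:
        parts.append(f"TABLESAMPLE SYSTEM({tablesample})")
    if indexsample is not None:
        parts.append(f"INDEXSAMPLE SYSTEM({indexsample})")
    return " ".join(parts)

# Reproduces the sampled command from the visual
print(build_runstats("employee", tablesample=1.5, indexsample=2.5))
```

Check the Command Reference for the exact RUNSTATS syntax before relying on any generated command.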
Figure 8-4. Using REORGCHK to find tables that would benefit from reorganization
Notes:
The REORGCHK command returns statistical information about data organization and can
advise you about whether particular tables or indexes need to be reorganized.
The REORGCHK command can collect new statistics or use the current catalog statistics
to produce the report. The report can include one table, a schema of objects or all tables.
REORGCHK calculates statistics obtained from eight different formulas to determine if
performance has deteriorated or can be improved by reorganizing a table or its indexes.
The visual shows the table statistics portion of a REORGCHK report for a schema with three tables. Three formulas are calculated for each table; any table that exceeds the recommended value is marked with a * to indicate some possible benefit from reorganization.
In the sample report, all three tables are below the target value for the F2 calculation, which indicates that these tables have more pages than are needed to hold the current number of data rows.
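REORGCHK's actual formulas are documented in the Command Reference. As a rough illustration of what an F2-style space indicator measures, the sketch below flags a table whose allocated pages are well above the minimum needed for its rows. This is illustrative only, not DB2's formula; the function name and threshold are invented:

```python
import math

def pages_look_wasteful(card, avg_row_bytes, fpages, page_bytes=4096,
                        threshold=0.70):
    """Illustrative check in the spirit of REORGCHK's F2 indicator:
    flag a table whose allocated pages (fpages) are well above the
    minimum needed for its rows. Not DB2's exact formula."""
    if fpages == 0:
        return False
    min_pages = math.ceil(card * avg_row_bytes / page_bytes)
    utilization = min_pages / fpages      # 1.0 means perfectly packed
    return utilization < threshold        # flagged: REORG may help
```

For example, a table holding 100 rows of 40 bytes each needs only one 4 KB page, so if it occupies 10 pages it would be flagged.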
Figure 8-5. Using REORGCHK to find indexes that would benefit from reorganization
Notes:
The visual shows the index based portion of a sample REORGCHK report. It includes
statistics and calculations for each index of the selected tables. There are five calculations
performed for each index and any index that exceeds the recommended limit for a formula
is marked with a matching *.
All four indexes in the sample report have at least one indication that the index or table may
require reorganization. The F4 formula shows how well each index is clustered based on
the current sequence of the data rows. To improve the cluster ratio for an index, the table
data must be reorganized with the REORG utility, selecting that index to recluster the
table. It is common, when a table has several indexes, for one or
more of the indexes to be flagged in the REORGCHK report as unclustered, but as long as
the index that has been selected to cluster the table is efficiently clustered, no
reorganization of the table is necessary.
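The idea behind a cluster ratio can be illustrated with a toy calculation (again, not DB2's exact F4 formula): walk the rows in index-key order and measure how often the next row lives on the same or the following data page.

```python
def cluster_ratio(pages_in_index_order):
    """Toy cluster-ratio estimate in the spirit of REORGCHK's F4 (not DB2's
    exact formula): the percentage of consecutive index entries whose rows
    sit on the same or the next data page."""
    if len(pages_in_index_order) < 2:
        return 100.0
    pairs = zip(pages_in_index_order, pages_in_index_order[1:])
    good = sum(1 for a, b in pairs if 0 <= b - a <= 1)
    return 100.0 * good / (len(pages_in_index_order) - 1)
```

A well-clustered index visits data pages in ascending order (ratio near 100); a badly clustered one jumps around (ratio near 0).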
Notes:
Table reorganization
After many changes to table data, logically sequential data might reside on nonsequential
data pages, so that the database manager might need to perform additional read
operations to access data. If many rows have been deleted, additional read
operations are also required. In this case, you might consider reorganizing the table to
match the index and to reclaim space.
You can also reorganize the system catalog tables.
Because reorganizing a table usually takes more time than updating statistics, you could
execute the RUNSTATS command to refresh the current statistics for your data, and then
rebind your applications. If refreshed statistics do not improve performance, reorganization
might help.
The following factors can indicate a need for table reorganization:
- There has been a high volume of insert, update, and delete activity against tables that are accessed by queries.
- There have been significant changes in the performance of queries that use an index with a high cluster ratio.
- Executing the RUNSTATS command to refresh table statistics does not improve performance.
- Output from the REORGCHK command indicates a need for table reorganization.
The REORG utility has many options. Tables can be reorganized online or offline. The
indexes for a table can also be reorganized to improve the efficiency of the index objects.
The first example shown will reorganize the table rgcomp.history1 offline, which will rebuild
all of the indexes defined for the table. The RESETDICTIONARY option is used to force the
REORG to create a new compression dictionary during its processing rather than keeping
the original dictionary. This would be done if the data in the table has changed significantly
and a new compression dictionary would produce better compression results.
The second example reorganizes the table rg.hist2 using the index rg.hist2ix1 to recluster
the data. This is an online or inplace reorganization that allows applications to read and
write the table during REORG processing.
The third REORG utility example shows a reorganization for a range partitioned table
named parttab.historypart, where a single range is being reorganized based on the ON
DATA PARTITION option. Without the ON DATA PARTITION option all of the data ranges
in the range partitioned table would be reorganized, one at a time.
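The three examples discussed above share a common command shape. The sketch below is a hypothetical helper that assembles REORG TABLE command strings with those options; clause order and spelling are illustrative, so check the Command Reference for the exact syntax:

```python
def build_reorg(table, index=None, inplace=False,
                resetdictionary=False, data_partition=None):
    """Assemble a DB2 REORG TABLE command string (illustrative helper;
    see the Command Reference for the authoritative clause syntax)."""
    parts = [f"REORG TABLE {table}"]
    if index:
        parts.append(f"INDEX {index}")            # recluster using this index
    if inplace:
        parts.append("INPLACE ALLOW WRITE ACCESS")  # online reorganization
    if resetdictionary:
        parts.append("RESETDICTIONARY")           # rebuild compression dictionary
    if data_partition:
        parts.append(f"ON DATA PARTITION {data_partition}")  # one range only
    return " ".join(parts)
```

For instance, the second example above corresponds to an online reorganization that reclusters rg.hist2 by the index rg.hist2ix1.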
Autonomic utilities
DB2 databases can be configured to automatically perform
database maintenance:
Periodic RUNSTATS (database configuration parameter auto_runstats, default ON)
Notes:
The DB2 autonomic computing environment is self-configuring, self-healing,
self-optimizing, and self-protecting. By sensing and responding to situations that occur,
autonomic computing shifts the burden of managing a computing environment from
database administrators to technology.
Some autonomic database-level configuration parameters include:
auto_runstats
auto_stmt_stats
auto_reorg
A sample of the automatic maintenance section of the database configuration:
 Automatic maintenance                      (AUTO_MAINT) = ON
   Automatic database backup            (AUTO_DB_BACKUP) = OFF
   Automatic table maintenance          (AUTO_TBL_MAINT) = ON
     Automatic runstats                  (AUTO_RUNSTATS) = ON
       Automatic statistics profiling  (AUTO_STATS_PROF) = OFF
       Automatic profile updates         (AUTO_PROF_UPD) = OFF
     Automatic reorganization               (AUTO_REORG) = ON
You can disable both the Auto Runstats and Auto Reorg features temporarily by setting auto_tbl_maint to OFF. Both features can be enabled later by setting auto_tbl_maint back to ON. You do not need to issue db2stop or db2start commands to have the changes take effect. By default, this parameter is set to ON.
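The relationship among these switches is hierarchical: auto_runstats only takes effect when its parents auto_maint and auto_tbl_maint are also ON. A small sketch of that hierarchy, as a hypothetical helper over values like those returned by GET DB CFG:

```python
def effective_auto_runstats(cfg):
    """Automatic statistics collection is effective only when AUTO_MAINT,
    its child AUTO_TBL_MAINT, and AUTO_RUNSTATS are all ON.
    cfg maps parameter names to 'ON'/'OFF' (hypothetical helper)."""
    chain = ("AUTO_MAINT", "AUTO_TBL_MAINT", "AUTO_RUNSTATS")
    return all(cfg.get(p, "OFF") == "ON" for p in chain)
```

Turning auto_tbl_maint OFF therefore disables automatic RUNSTATS (and REORG) even though their own switches remain ON.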
Notes:
The term database monitoring refers to the tasks associated with examining the
operational status of your database.
Database monitoring is a vital activity for the maintenance of the performance and health of
your database management system. To facilitate monitoring, DB2 collects information from
the database manager, its databases, and any connected applications. With this
information you can perform the following types of tasks, and more:
- Forecast hardware requirements based on database usage patterns.
- Analyze the performance of individual applications or SQL queries.
- Track the usage of indexes and tables.
- Pinpoint the cause of poor system performance.
- Assess the impact of optimization activities (for example, altering database manager configuration parameters, adding indexes, or modifying SQL queries).
There are two ways to monitor operations in your database. You can view information that
shows the state of various aspects of the database at a specific point in time. Or, you can
set up event monitors to capture historical information as specific types of database events
take place.
You can monitor your database operations in real-time using monitoring table functions.
For example, you can use a monitoring table function to examine the total amount of space
used in a table space. These table functions let you examine monitor elements and metrics
that report on virtually all aspects of database operations using SQL. The monitoring table
functions use the newer, lightweight, high-speed monitoring infrastructure that was
introduced in Version 9.7. In addition to the table functions, snapshot monitoring routines
are also available. The snapshot monitoring facilities in DB2 use monitoring infrastructure
that existed before Version 9.7. Generally speaking, snapshot monitoring facilities are no
longer being enhanced in the product; where possible, use the monitoring table functions to
retrieve the data you want to see.
Event monitors capture information about database operations over time, as specific types
of events occur. For example, you can create an event monitor to capture information about
locks and deadlocks as they occur in the system. Or you might create an event monitor to
record when a threshold that you specify (for example the total processor time used by an
application or workload) is exceeded. Event monitors generate output in different formats;
all of them can write event data to regular tables; some event monitors have additional
output options.
Information
IBM InfoSphere Optim Performance Manager provides a Web interface that you can use to
isolate and analyze typical database performance problems. You can also view a summary
of the health of your databases and drill down.
Some of the db2pd report options shown on the visual include:
-agents, -dynamic, -dbcfg, -reopt, -thresholds, -transactions, -static, -catalogcache, -osinfo, -serviceclasses, -bufferpools, -fcm, -sysplex, -hadr, -ha, -logs, -locks, -mempools, -memsets, -tcbstats, -reorg, -utilities, -workloads, -statisticscache, -wlocks
Sample db2pd -mempools report (columns abbreviated):
PoolName     Id   Overhead   LogSz      LogUpBnd   LogHWM     CfgParm
utilh        5    0          2120       20512768   2120       UTIL_HEAP_SZ
pckcacheh    7    113568     243113     Unlimited  243113     PCKCACHESZ
xmlcacheh    93   50944      80008      20971520   80008      n/a
catcacheh    8    0          67536      Unlimited  67536      CATALOGCACHE_SZ
bph          16   32         16760384   Unlimited  16760384   n/a
bph          16   64         42418432   Unlimited  42418432   n/a
bph          16   32         782592     Unlimited  782592     n/a
bph          16   32         520448     Unlimited  520448     n/a
bph          16   32         389376     Unlimited  389376     n/a
bph          16   32         323840     Unlimited  323840     n/a
shsorth      18   0          0          40960000   0          SHEAPTHRES_SHR
lockh        4    32         328192     458752     328192     LOCKLIST
dbh          2    381824     12346744   24379392   12346768   DBHEAP
apph         1    0          11104      1048576    27862      APPLHEAPSZ
apph         1    0          7303       1048576    8411       APPLHEAPSZ
apph         1    0          7347       1048576    7347       APPLHEAPSZ
apph         1    0          7347       1048576    8759       APPLHEAPSZ
apph         1    0          7367       1048576    7503       APPLHEAPSZ
appshrh      20   2304       62980      20480000   62980      application shared
Notes:
The db2pd command is used for troubleshooting because it can return quick and
immediate information from the DB2 memory sets.
To use the db2pd command, one of the following instance-level authorities is needed:
The SYSADM authority level.
The SYSCTRL authority level.
The SYSMAINT authority level.
The SYSMON authority level.
The tool collects information without acquiring any latches or using any engine resources. It
is therefore possible (and expected) to retrieve information that is changing while db2pd is
collecting information; hence the data might not be completely accurate. If changing
memory pointers are encountered, a signal handler is used to prevent db2pd from ending
abnormally. This can result in messages such as "Changing data structure forced
command termination" to appear in the output. Nonetheless, the tool can be helpful for problem determination.
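Because db2pd output is plain fixed-width text, it is easy to post-process with a script. A minimal sketch, assuming a simplified report in which every field is a single word (real reports carry more columns and multi-word values):

```python
def parse_mempools(report):
    """Parse a simplified fixed-width section of a db2pd -mempools report
    into one dict per memory pool. Naive whitespace split; assumes
    single-word field values (illustrative only)."""
    lines = [ln for ln in report.strip().splitlines() if ln.strip()]
    header = lines[0].split()                    # first line names the columns
    return [dict(zip(header, ln.split())) for ln in lines[1:]]
```

A parser like this makes it simple to, say, compare LogHWM against LogUpBnd for every pool across repeated db2pd snapshots.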
Sample db2pd -wlocks output (abbreviated):
ApplHandl    TranHdl  Lockname                    Type     Mode  Sts  CoorEDU  AppName  AuthID  AppID
[000-00095]  3        090004001B0001000000000052  RowLock  ..X   G    95       db2bp    INST28  *LOCAL.inst28.120507120320
[000-00111]  11       090004001B0001000000000052  RowLock  .NS   W    111      db2bp    USER28  *LOCAL.inst28.120507120949
Notes:
The db2pd command -wlocks report displays the owner and waiter information for each
lock being waited on.
In the Sample output of the db2pd -wlocks command, the lock status (Sts) value of G
designates the owner of the lock, while a Sts value of W designates the waiter of that lock.
For the -wlocks parameter, the following information is returned:
ApplHandl - The application handle, including the node and the index.
TranHdl - The transaction handle that is requesting the lock.
LockName - The name of the lock.
Type - The type of lock.
Mode - The lock mode. The possible values are:
- IS
- IX
- S
- SIX
- X
- IN
- Z
- U
- NS
- NW
Conv - The lock mode to which the lock will be converted after the lock wait ends.
Sts - The lock status. The possible values are:
- G (granted)
- C (converting)
- W (waiting)
CoorEDU - The EDU ID of the coordinator agent for the application.
AppName - The name of the application.
AuthID - The authorization identifier.
AppID - The application ID. This value is the same as the appl_id monitor element data.
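The Sts field makes it straightforward to pair each lock's owner with its waiters in a script. A sketch, assuming the -wlocks rows have already been parsed into dicts (hypothetical helper):

```python
def lock_waits(rows):
    """Pair each lock's owner (Sts 'G') with its waiters (Sts 'W').
    rows: dicts parsed from db2pd -wlocks output (hypothetical helper)."""
    by_lock = {}
    for row in rows:
        by_lock.setdefault(row["LockName"], []).append(row)
    paired = {}
    for name, entries in by_lock.items():
        owner = next((e for e in entries if e["Sts"] == "G"), None)
        waiters = [e for e in entries if e["Sts"] == "W"]
        paired[name] = (owner, waiters)
    return paired
```

In the sample output above, this would report that the row lock granted to INST28 is being waited on by USER28.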
Figure 8-11. Using DB2 table functions in SQL statements for Monitoring
Notes:
Table functions for monitoring
Starting with DB2 Version 9.7, you can access monitor data through a light-weight
alternative to the traditional system monitor. Use monitor table functions to collect and view
data for systems, activities, or data objects.
Data for monitored elements are continually accumulated in memory and available for
querying. You can choose to receive data for a single object (for example, service class A
or table TABLE1) or for all objects.
When using these table functions in a database partitioned environment, you can choose
to receive data for a single partition or for all partitions. If you choose to receive data for all
partitions, the table functions return one row for each partition. Using SQL, you can sum
the values across partitions to obtain the value of a monitor element across partitions.
Monitoring system information using table functions
The system monitoring perspective encompasses the complete volume of work and
effort expended by the data server to process application requests. From this
perspective, you can determine what the data server is doing as a whole as well as for
particular subsets of application requests.
Monitoring activities using table functions
The activity monitoring perspective focuses on the subset of data server processing
related to executing activities. In the context of SQL statements, the term activity refers
to the execution of the section for a SQL statement.
Monitoring data objects using table functions
The data object monitoring perspective provides information about operations
performed on data objects, that is, tables, indexes, buffer pools, table spaces, and
containers.
Monitoring locking using table functions
You can retrieve information about locks using table functions. Unlike request, activity
or data object monitor elements, information about locks is always available from the
database manager. You do not need to enable the collection of this information.
Monitoring system memory using table functions
You can retrieve information about system memory usage using table functions.
Other monitoring table functions
Besides table functions that return information about the system, activities, locks, or
data objects there are also table functions that return various types of miscellaneous
information. These functions include ones that return information related to the fast
communications manager (FCM), and about the status of table space extent
movement.
SCHEMA     ROWS_READ   TABLE_SCANS   ROWS_INSERTED   ROWS_UPDATED
---------- ----------- ------------- --------------- ------------
INST411        1168373             5            3365              0
INST411         650174             4               0           3365
INST411           9730             0               0           3365
INST411           3365             0               0           3365
4 record(s) selected.
Figure 8-12. Using the MON_GET_TABLE function for table performance statistics
Notes:
The visual shows an example of an SQL query that uses the MON_GET_TABLE function to
return selected table statistics. This function can be used to get current statistics for one
table, a schema of tables or all tables that have been accessed by a database.
With DB2 10.1 the statistics available for each table include information about lock waits
and the number of logical and physical pages read for data, index and XML pages.
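Once rows like these are fetched through your SQL driver of choice, they can be aggregated client-side. An illustrative sketch, with the result rows represented as dicts:

```python
def totals_by_schema(rows):
    """Sum selected MON_GET_TABLE metrics per schema. rows: dicts as a SQL
    driver might return them (illustrative client-side aggregation)."""
    totals = {}
    for row in rows:
        t = totals.setdefault(row["SCHEMA"], {"ROWS_READ": 0, "TABLE_SCANS": 0})
        t["ROWS_READ"] += row["ROWS_READ"]
        t["TABLE_SCANS"] += row["TABLE_SCANS"]
    return totals
```

Applied to the four sample rows above, this yields 1,831,642 rows read and 9 table scans for the INST411 schema.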
Sample LIST APPLICATIONS SHOW DETAIL output (selected fields):
Auth Id                  = INST00
Application Name         = db2bp_32
Appl. Handle             = 78
Application Id           = *LOCAL.inst00.000327220543
Seq#                     = 0001
Number of Agents         = 1
Coordinating Node Number = 0
Coordinating pid/thread  = 230
Status                   = UOW Waiting
Status Change Time       = Not Collected
Node                     = 0
DBName                   = MUSICDB
DB Path                  = /home/inst00/inst00/NODE0000/SQL00001/
Notes:
The LIST APPLICATIONS command displays an entry for each application connected to a
specific database or to all databases within the instance (the default).
The output shows the application program name, authorization ID (user name), application
handle, application ID, and the database name for each database application.
If the SHOW DETAIL parameter is used, the output will also display the application's
sequence number, status, status change time, node, and database path.
One of the following authorities is needed to run the LIST APPLICATIONS command:
SYSADM
SYSCTRL
SYSMAINT
SYSMON
Notes:
The FORCE APPLICATION command can be used to break the database connection
between an application and its agent (db2agent). The FORCE APPLICATION command
can be used to break the connections for ALL applications or specific applications.
When specifying specific applications, use the application handle number from the LIST
APPLICATIONS command.
If an application is forced, an uncommitted Unit of Work will be rolled back. The force takes
effect immediately. However, the command runs asynchronously and the user might regain
control before all applications have been forced.
The FORCE command breaks the connection at the database server and does not
terminate the database application. Do NOT use operating system commands to stop or kill
a database agent. SYSADM, SYSCTRL, or SYSMON authority level is required to issue
the FORCE APPLICATION command.
QUIESCE command
The QUIESCE command forces all users off either the specified instance or database
across all members and puts them into a quiesced mode.
While the instance or database is in quiesced mode, you can perform administrative
tasks on it. After administrative tasks are complete, use the UNQUIESCE command to
activate the instance or database and allow other users to connect to the database.
In this mode, only users with authority in this restricted mode are allowed to attach or
connect to the instance or database. Users with SYSADM, SYSMAINT, and SYSCTRL
authority always have access to an instance while it is quiesced, and users with
SYSADM and DBADM authority always have access to a database while it is quiesced.
Scope:
QUIESCE DATABASE results in all objects in the database being in the quiesced
mode. Only the allowed user or group and SYSADM, SYSMAINT, DBADM, or
SYSCTRL will be able to access the database or its objects.
QUIESCE INSTANCE instance-name means the instance and the databases in the
instance instance-name will be in quiesced mode. The instance will be accessible
only to SYSADM, SYSMAINT, and SYSCTRL and the allowed user or group.
If an instance is in quiesced mode, a database in the instance cannot be put in
quiesced mode.
If a database is in the SUSPEND_WRITE state, it cannot be put in quiesced mode.
Notes:
Monitoring table functions and snapshot routines return the values of monitor elements at
the specific point in time the routine is run, which is useful when you want to check the
current state of your system. However, there are many times when you need to capture
information about the state of your system at exactly the time that a specific event occurs.
Event monitors serve this purpose.
Event monitors can be created to capture point-in-time information related to different kinds
of events that take place in your system. For example, you can create an event monitor to
capture information when a specific threshold that you define is exceeded. The information
captured includes such things as the ID of the application that was running when the
threshold was exceeded. Or, you might create an event monitor to determine what
statement was running when a lock event occurred.
The CREATE EVENT MONITOR statement defines a monitor that will record certain
events that occur when using the database. The definition of each event monitor also
specifies where the database should record the events.
Several different types of event monitors can be created using this statement including the
following types:
- Activities. The event monitor will record activity events that occur when using the database.
- Locking. The event monitor will record lock-related events that occur when using the database.
- Package cache. The event monitor will record events related to the package cache statement.
- Statistics. The event monitor will record statistics events that occur when using the database.
- Threshold violations. The event monitor will record threshold violation events that occur when using the database.
- Unit of work. The event monitor will record events when a unit of work completes.
Notes:
Event monitors are database objects that need to be created.
The example shows an event monitor named wlmactivity that is created to capture activities, such as SQL statement processing. This event monitor is defined as a WRITE TO TABLE event monitor, with specific table names and table spaces for the set of tables associated with it.
The SET EVENT MONITOR statement can be used to start and stop collection of data using an event monitor.
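A sketch of the statement shapes involved, assembled as strings by a hypothetical helper; the real CREATE EVENT MONITOR statement supports many more clauses (per-table names, PCTDEACTIVATE, and so on), so treat this as illustrative:

```python
def create_activity_event_monitor(name, tablespace=None):
    """Assemble a CREATE EVENT MONITOR statement for activities written to
    tables (illustrative; the real statement supports many more clauses)."""
    stmt = f"CREATE EVENT MONITOR {name} FOR ACTIVITIES WRITE TO TABLE"
    if tablespace:
        stmt += f" ACTIVITY (IN {tablespace})"
    return stmt

def set_event_monitor(name, state):
    """STATE 1 starts collection; STATE 0 stops it."""
    return f"SET EVENT MONITOR {name} STATE {state}"
```

The SET EVENT MONITOR statement is what you would run before and after the workload window you want to capture.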
Notes:
The sample query uses information from two of the tables associated with one ACTIVITIES
event monitor.
The query shows the number of rows estimated to be returned by the DB2 optimizer before
execution, the number of rows actually returned, some additional statistics like sort
operations and pages referenced as well as a portion of the SQL statement text.
DB2 Optimizer: SQL statements and statistics from the DB2 catalog are input to the optimizer, which produces a package containing the access path.
Notes:
When the query compiler optimizes query plans, its decisions are heavily influenced by
statistical information about the size of the database tables, indexes, and statistical views.
This information is stored in system catalog tables.
The optimizer also uses information about the distribution of data in specific columns of
tables, indexes, and statistical views if these columns are used to select rows or to join
tables. The optimizer uses this information to estimate the costs of alternative access plans
for each query.
Statistical information about the cluster ratio of indexes, the number of leaf pages in
indexes, the number of table rows that overflow their original pages, and the number of
filled and empty pages in a table can also be collected. You can use this information to
decide when to reorganize tables or indexes.
When it compiles an SQL or XQuery statement, the query optimizer estimates the
execution cost of different ways of satisfying the query.
Based on these estimates, the optimizer selects an optimal access plan. An access plan
specifies the order of operations that are required to resolve an SQL or XQuery statement.
When an application program is bound, a package is created. This package contains
access plans for all of the static SQL and XQuery statements in that application program.
Access plans for dynamic SQL and XQuery statements are created at run time.
There are three ways to access data in a table:
- By scanning the entire table sequentially
- By accessing an index on the table to locate specific rows
- By scan sharing
Rows might be filtered according to conditions that are defined in predicates, which are
usually stated in a WHERE clause. The selected rows in accessed tables are joined to
produce the result set, and this data might be further processed by grouping or sorting of
the output.
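The cost comparison the optimizer makes between access methods can be illustrated with a deliberately tiny model; this is nothing like the optimizer's real cost formulas, which weigh I/O, CPU, buffer pool hit ratios, and more:

```python
def choose_access(card, selectivity, pages, index_levels=3):
    """Deliberately tiny cost model (nothing like the optimizer's real
    formulas): a table scan reads every page; an index scan traverses the
    index levels and then fetches roughly one page per qualifying row."""
    scan_cost = pages
    index_cost = index_levels + card * selectivity
    return "index scan" if index_cost < scan_cost else "table scan"
```

Even this toy model shows why a highly selective predicate favors an index, while a predicate matching half the table favors a sequential scan.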
Design Advisor
Assists in finding the right indexes, MQTs, or MDC:
- Based on workload
- Virtual indexes
- Materialized Query Tables (MQT)
- Multidimensional Clustering (MDC) tables
Notes:
The DB2 Design Advisor is a tool that can help you significantly improve your workload
performance. The task of selecting which indexes, MQTs, clustering dimensions, or
partitions to create for a complex workload can be daunting. The Design Advisor identifies
all of the objects needed to improve the performance of a particular workload (which can
include SELECT, INSERT, UPDATE, and/or DELETE statements). Given a set of SQL statements,
it will generate recommendations for:
New indexes
New materialized query tables (MQTs)
Conversion to multidimensional clustering tables (MDC)
Repartitioning of tables
Deletion of indexes and MQTs unused by the specified workload
You can decide to implement some or all of the recommendations immediately or schedule
them for a later time.
The Design Advisor can also help you to migrate from a single-partition database to a
multi-partition environment.
8-32 DB2 10 for LUW: Basic Admin for AIX
For example, over a one-month period your database manager might have to
process 1,000 INSERTs, 10,000 UPDATEs, 10,000 SELECTs, and 1,000 DELETEs. The
information in the workload is concerned with the type and frequency of the SQL
statements over a given period of time. The advising engine uses this workload information
in conjunction with the database information to recommend indexes. The goal of the
advising engine is to minimize the total workload cost. This information is written to the
ADVISE_WORKLOAD table. With sufficient information/constraints, the Design Advisor is
able to suggest the appropriate actions to take for your tables.
You can execute the db2advis command from the command line; the output is printed
to stdout by default and saved in the ADVISE_TABLE and ADVISE_INDEX tables.
Partitioning strategies can be found in the ADVISE_PARTITION table. The RUN_ID value
in all these tables corresponds to the START_TIME value in the ADVISE_INSTANCE table
for each execution of the Design Advisor.
To create the ADVISE_WORKLOAD and ADVISE_INDEX tables, run the EXPLAIN.DDL
script found in the misc subdirectory of the sqllib subdirectory.
Notes:
The example shows the db2advis command line tool being used to evaluate indexes for a
set of SQL statements stored in a file. The disk limit option is used to restrict the
amount of disk space available for any new indexes.
The output shows that adding one new index can reduce the query cost by about 39%. The
DDL to create the new index and to run the RUNSTATS utility is saved in a file for editing.
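A typical invocation, sketched here with illustrative file names, reads the workload from a file and limits the disk space available for new indexes:

db2advis -d sample -i workload.sql -l 50 -o advise.ddl

The -i option names the input file of SQL statements, -l limits the disk space (in MB) for recommended indexes, and -o saves the recommended DDL to a file.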
(Slide: a table comparing the DB2 Explain tools against their requirements and
characteristics — Visual Explain GUI tools, SQL access to the Explain tables, db2exfmt,
and db2expln — covering GUI interface versus text output, quick static SQL analysis,
static and dynamic SQL support, CLI application support, whether the Explain tables are
required, the level of optimizer detail, suitability for analyzing multiple statements,
and whether the information is available from within an application.)
Notes:
The table summarizes the different tools available within the DB2 EXPLAIN facility and
their individual characteristics. Use this table to select the tool most suitable for your
environment and needs.
Visual Explain tools allow for the analysis of access plan and optimizer information from
the EXPLAIN tables through a graphical interface. Tools like Optim Database
Administrator and IBM Data Studio can be used to view the access plans for SQL
statements in a visual format.
The EXPLAIN Tables are accessible on all supported platforms and can contain
information for both static and dynamic SQL statements. You can access the EXPLAIN
tables using SQL statements, which allow for easy manipulation of the output and
comparisons of the same query over time.
db2exfmt allows you to obtain reports from the Explain tables in a predefined format.
db2expln will allow you to see the access plan information that is stored in the system
catalog as part of a static package. It provides a text-based report on the strategy DB2
will use to execute the statements. The command also supports generating the explain
report from one or more dynamic SQL statements.
Copyright IBM Corp. 1999, 2012
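For example, db2expln can report the access plan for a single dynamic statement directly at the terminal (the database and predicate shown are illustrative):

db2expln -database sample -statement "SELECT * FROM test.history WHERE acct_id = 5" -terminal

For static SQL, the -schema and -package options identify the package whose stored access plans should be reported.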
Notes:
The slide shows an example of the Visual Explain view provided by tools like Optim
Database Administrator or IBM Data Studio.
Visual Explain allows for the analysis of access plan and optimizer information from the
Explain tables through a graphical interface. Dynamic SQL statements can be analyzed
using the tool.
The Visual Explain view provides an overview of the processing steps that will be used to
produce the result of the SQL statement. You can quickly see the method used to access
each table, if indexes will be used and how tables will be joined.
The flow of processing in the example starts at the bottom and moves upward. The number
shown on each operation is the cumulative estimated cost in timerons.
              Rows
             RETURN
             (   1)
              Cost
               I/O
               |
            162.025
             FETCH
             (   2)
            33.3962
            4.89539
          /---+----\
     162.025      200000
     IXSCAN    TABLE: TEST
     (   3)       HISTORY
     13.7123        Q1
        2
        |
     200000
 INDEX: TEST
     HISTIX
        Q1
Notes:
The db2exfmt command produces a detailed explain report based on data in explain
tables.
The visual shows several sections from the detailed explain report generated using
the db2exfmt command.
The report includes the original SQL statement. The access plan report shows the various
operations that will be used to execute the SQL statement. The report includes various cost
and cardinality estimates.
The sample report results show that an estimated 162 rows will be returned from a table
TEST.HISTORY with 200,000 rows of data. In this case the index TEST.HISTIX will be
scanned to access the table.
The Cumulative Total Cost of 33.3962 in the sample report is expressed in timerons, a unit
that combines the various resource costs (CPU, I/O, and network communication) into a
single measure.
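One way to produce a report like this, sketched here with an illustrative statement and output file name, is to explain a dynamic statement and then format the explain tables:

db2 SET CURRENT EXPLAIN MODE EXPLAIN
db2 "SELECT * FROM test.history WHERE acct_id = 5"
db2 SET CURRENT EXPLAIN MODE NO
db2exfmt -d sample -1 -o explain.txt

The -1 option selects defaults for the most recently explained statement.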
Notes:
The Explain tables capture access plans when the Explain facility is activated. The Explain
tables must be created before Explain can be invoked. You can create them using one of
the following methods:
Call the SYSPROC.SYSINSTALLOBJECTS procedure:
db2 CONNECT TO database-name
db2 CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C',
CAST (NULL AS VARCHAR(128)), CAST (NULL AS VARCHAR(128)))
This call creates the explain tables under the SYSTOOLS schema. To create them
under a different schema, specify a schema name as the last parameter in the call.
The visual shows an example of using the M option to migrate an existing set of
explain tables to match the currently installed product.
Run the EXPLAIN.DDL DB2 command file:
db2 CONNECT TO database-name
db2 -tf EXPLAIN.DDL
This command file creates explain tables under the current schema. It is located in
the DB2PATH\misc directory on Windows operating systems, and in the
INSTHOME/sqllib/misc directory on Linux and UNIX operating systems, where DB2PATH
is the location where your DB2 copy is installed and INSTHOME is the instance
home directory.
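Based on the migrate option mentioned above, a call of the following form can bring an existing set of explain tables up to the level of the currently installed product (the schema argument is left NULL, as in the create example):

db2 CONNECT TO database-name
db2 "CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'M', CAST(NULL AS VARCHAR(128)), CAST(NULL AS VARCHAR(128)))"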
(Slide: database shared memory layout before and after self-tuning — DB Heap (log
buffer), Utility Heap, Package Cache, Catalog Cache, Sort Memory, Locklist, and Buffer
Pools BP 1 through BP 4 — showing how available memory is redistributed among these
consumers.)
Notes:
Self-tuning memory simplifies the task of memory configuration by automatically setting
values for memory configuration parameters and sizing buffer pools. When enabled, the
memory tuner dynamically distributes available memory resources among the following
memory consumers: buffer pools, locking memory, package cache, and sort memory.
Self-tuning memory is enabled through the self_tuning_mem database configuration
parameter.
The following memory-related database options can be automatically tuned:
database_memory - Database shared memory size
locklist - Maximum storage for lock list
maxlocks - Maximum percent of lock list before escalation
pckcachesz - Package cache size
sheapthres_shr - Sort heap threshold for shared sorts
sortheap - Sort heap size
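For example, self-tuning can be enabled for a database and selected consumers set to AUTOMATIC with commands of this form (the database name is illustrative):

db2 UPDATE DB CFG FOR sample USING SELF_TUNING_MEM ON
db2 UPDATE DB CFG FOR sample USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
db2 UPDATE DB CFG FOR sample USING PCKCACHESZ AUTOMATIC SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC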
POOL_TYPE             MEMORY_POOL_USED     MEMORY_POOL_USED_HWM
--------------------  -------------------  --------------------
UTILITY                            65536                  65536
PACKAGE_CACHE                     524288                 917504
XMLCACHE                          131072                 131072
CAT_CACHE                         393216                 393216
BP                              16908288               16908288
BP                              52166656               52166656
BP                                851968                 851968
BP                                589824                 589824
BP                                458752                 458752
BP                                393216                 393216
SHARED_SORT                       196608                 262144
LOCK_MGR                         2228224                2228224
DATABASE                        60489728               60489728
Figure 8-26. Monitoring Database memory usage using the table function MON_GET_MEMORY_POOL
Notes:
The MON_GET_MEMORY_POOL table function retrieves metrics from the memory pools
contained within a memory set.
The visual shows a sample report generated using MON_GET_MEMORY_POOL. The
function can be used to check current memory allocations and also to see the peak usage
of memory for each pool while the database was active.
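A report similar to the sample can be produced with a query of this form (the database name is illustrative; -2 requests data from all members):

db2 "SELECT VARCHAR(MEMORY_POOL_TYPE,20) AS POOL_TYPE,
            MEMORY_POOL_USED,
            MEMORY_POOL_USED_HWM
     FROM TABLE(MON_GET_MEMORY_POOL(NULL, 'SAMPLE', -2)) AS T"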
(Slide: diagnostic data destinations — the notification file, error log, dump files, and
trap files — with severity settings ranging from no administrative notification messages
captured, through all errors, to important information requiring no immediate action, and
informational messages.)
Notes:
The visual shows some of the Database Manager (DBM) configuration options that control
the location and severity levels for diagnostic messages and data for a DB2 instance.
NOTIFYLEVEL
This parameter specifies the type of administration notification messages that are written to
the administration notification log.
This applies to the Database server (DBM) with local and remote clients.
This configurable parameter is immediately changed (default value is 3).
On Linux and UNIX platforms, the administration notification log is a text file called
instance.nfy. On Windows, all administration notification messages are written to the
Event Log. The errors can be written by DB2, the Health Monitor, the Capture and Apply
programs, and user applications.
Valid values for this parameter range from 0 (no administrative notification messages
captured) to 4 (informational messages).
DIAGLEVEL
This parameter specifies the type of diagnostic errors that will be recorded in the
db2diag.log file.
This applies to the Database server (DBM) with local and remote clients.
This configurable parameter is immediately changed (default value is 3).
Valid values for this parameter are:
0: No diagnostic data captured.
1: Severe errors only.
2: All errors.
3: All errors and warnings.
4: All errors, warnings and informational messages.
The diagpath configuration parameter is used to specify the directory that will contain the
error file, alert log file, and any dump files that might be generated, based on the value of
the DIAGLEVEL parameter.
If this parameter is null, the diagnostic information will be written to a default diagnostic
path directory string in one of the following directories or folders:
In Windows environments: The default location of user data files, for example, files
under instance directories, varies from edition to edition of the Windows family of
operating systems. Use the DB2SET DB2INSTPROF command to get the location of
the instance directory. The file is in the instance subdirectory of the directory specified
by the DB2INSTPROF registry variable.
In Linux and UNIX environments: Information is written to INSTHOME/sqllib/db2dump/ ,
where INSTHOME is the home directory of the instance.
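These instance-level settings can be changed with UPDATE DBM CFG; for example (the diagnostic path shown is illustrative):

db2 UPDATE DBM CFG USING DIAGLEVEL 4
db2 UPDATE DBM CFG USING NOTIFYLEVEL 3
db2 UPDATE DBM CFG USING DIAGPATH /database/diagnostics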
Notes:
The visual shows two example messages from a db2diag.log file.
The messages are formatted to include many standard data elements as well as some
sections that vary by message.
These messages include:
The DB2 instance name
The DB2 database name
A time stamp when the message was created
The message level, indicating the severity of the condition that generated the message
The application id
The EDU (Engine dispatchable unit) name and id
The DB2 internal function
Notes:
Analyzing db2diag log files using db2diag tool
The primary log file intended for use by database and system administrators is the
administration notification log. The db2diag log files are intended for use by IBM Software
Support for troubleshooting purposes.
Administration notification log messages are also logged to the db2diag log files using a
standardized message format.
The db2diag tool serves to filter and format the volume of information available in the
db2diag log files. Filtering db2diag log file records can reduce the time required to locate
the records needed when troubleshooting problems.
Example: Filtering the db2diag log files by database name
If there are several databases in the instance, and you want to only see those
messages which pertain to the database "SAMPLE", you can filter the db2diag log files
as follows:
db2diag -g db=SAMPLE
Thus you would only see db2diag log file records that contained "DB: SAMPLE", such
as:
2006-02-15-19.31.36.114000-300 E21432H406         LEVEL: Error
PID     : 940                  TID  : 660         PROC : db2syscs.exe
INSTANCE: DB2                  NODE : 000
APPHDL  : 0-1056               APPID: *LOCAL.DB2.060216003103
DB      : SAMPLE
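Other useful filters can be combined with the database filter shown above; these examples assume the default db2diag log file location:

db2diag -g db=SAMPLE,level=Error      (errors for one database only)
db2diag -H 1d                         (records from the last 24 hours)
db2diag -A                            (archive the current db2diag log file)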
Notes:
First occurrence data capture information
First occurrence data capture (FODC) collects diagnostic information about a DB2
instance, host or member when a problem occurs. FODC reduces the need to reproduce a
problem to obtain diagnostic information, because diagnostic information can be collected
as the problem occurs.
FODC can be invoked manually with the db2fodc command when you observe a problem
or invoked automatically whenever a predetermined scenario or symptom is detected. After
the diagnostic information has been collected, it is used to help determine the potential
causes of the problem. In some cases, you might be able to determine the problem cause
yourself; in other cases, involvement from IBM Support personnel will be required.
Once execution of the db2fodc command has finished, the db2support tool must be
executed to collect the resulting diagnostic files and prepare the FODC package to be
submitted to IBM Support. The db2support command will collect the contents of all FODC
package directories found or specified with the -fodcpath parameter. This is done to avoid
additional requests from IBM Support for diagnostic information.
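For example, a db2support collection that also gathers FODC packages from a non-default location might look like this (the paths and database name are illustrative):

db2support /tmp/collect -d SAMPLE -fodcpath /database/fodc

The resulting db2support.zip archive can then be sent to IBM Support.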
Notes:
The db2fodc utility captures symptom-based data about the DB2 instance to help in
problem determination situations. It is intended to collect information about potential hangs,
severe performance issues, and various types of errors.
The db2fodc command can be used for manual first occurrence data collection (FODC) on
problems that cannot trigger automatic FODC, such as hangs or severe performance
problems. It can also be used to collect data about index errors. The db2fodc tool
captures data to be included in the FODC package and places it inside an FODC package
directory, created either in the default diagnostic path or in an FODC directory path you
specify using the -fodcpath parameter.
The db2fodc tool supports additional manual collection types and supports triggering
automatic diagnostic data collection when a user-defined threshold condition is exceeded.
To collect data during a potential hang without stopping the database manager:
db2fodc hang -alldbs
Default DB2FODC registry variables and parameters are used.
A new directory prefixed with FODC_Hang_ is created under the current diagnostic
path (an error is generated if it already exists). The db2cos_hang script is executed to
collect manual FODC data into one or more files, deposited into the directory.
To collect data from a specific database:
db2fodc db SAMPLE -hang
Data collection is restricted to database SAMPLE. A new directory prefixed with
FODC_Hang_ is automatically created under the current diagnostic path. The
db2cos_hang script is executed to collect manual FODC data into the FODC
package stored in the directory.
To collect data during a performance issue from a specific database using the full collection
script:
db2fodc db SAMPLE -perf full
Data collection is restricted to database SAMPLE. A new directory prefixed with
FODC_Perf_ is created under the current diagnostic path. The db2cos_perf script is
executed to collect manual FODC data into one or more files, deposited into the
directory.
Notes:
The db2trc command controls the trace facility provided with DB2. The trace facility records
information about operations and formats this information into a readable form.
Keep in mind that there is additional processor usage when a trace is running, so enabling
the trace facility might impact your system's performance.
In general, IBM Software Support and development teams use DB2 traces for
troubleshooting. You might run a trace to gain information about a problem that you are
investigating, but its use is rather limited without knowledge of the DB2 source code.
Nonetheless, it is important to know how to correctly turn on tracing and how to dump trace
files, just in case you are asked to obtain them.
Note: You will need one of SYSADM, SYSCTRL, or SYSMAINT authority to use db2trc.
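A typical trace capture, run while the problem is reproduced, follows this pattern (file names are illustrative):

db2trc on -l 8m
... reproduce the problem ...
db2trc dump trace.dmp
db2trc off
db2trc flw trace.dmp trace.flw
db2trc fmt trace.dmp trace.fmt

The flw and fmt steps convert the binary dump into flow and formatted reports that can be sent to IBM Support.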
Notes:
DB2 for LUW Advanced Database Administration for Experts: CL462
Perform advanced database administration tasks using DB2. These tasks include
advanced load and utility issues, parallelism and Symmetric Multiprocessor (SMP)
exploitation, DB2 Governor, job scheduling, job log, scripting, distributed data
management, remote administration, advanced monitoring using the snapshot table
functions, and federated database exploitation.
DB2 for Linux, UNIX, and Windows Performance Tuning and Monitoring Workshop:
CL412
Learn how to tune the IBM DB2 for Linux, UNIX, and Windows relational database
management system and associated applications written for this environment for optimum
performance. Learn about DB2 for Linux, UNIX, and Windows on a serial processor, in a
non-partitioned database environment. Explore performance issues affecting the design
of the database and applications using the database, the major database performance
parameters, and the different tools that assist in performance monitoring and tuning.
Unit summary
Having completed this unit, you should be able to:
Plan the use of RUNSTATS, REORGCHK and REORG
utilities for maintaining database efficiency
Configure the DB2 instance to set the location for diagnostic
data and message severity levels for basic problem analysis
Describe the methods that can be used for monitoring
database and application activity including db2pd commands,
Event Monitors and using SQL statements to access statistics
Describe the function of EXPLAIN and use this facility to
assist basic analysis
Use the db2advis command to analyze a workload for
potential performance improvements
Use the db2fodc command to collect diagnostic data for a
system hang
Copyright IBM Corporation 2012
Notes:
Student exercise
Notes:
References
Troubleshooting and Tuning Database Performance
Command Reference
Database Administration Concepts and Configuration Reference
Unit objectives
After completing this unit, you should be able to:
Explain why locking is needed
List objects that can be locked
Describe and discuss the various lock modes and their
compatibility
Explain four different levels of data protection
Set the isolation level and lock timeout for the current activity
Explain lock conversion and escalation
Describe the situation that causes deadlocks
Create a LOCKING EVENT monitor to collect lock related
diagnostics
Set database configuration options to control locking event
capture
Notes:
These are the objectives for this lecture unit.
Description
Dirty Write
Dirty Read
Fuzzy Read
Non-repeatable Read
Phantom Read
DB2 applications request an isolation level based on the need to avoid these anomalies
Notes:
Because many users access and change data in a relational database, the database
manager must be able both to allow users to make these changes and to ensure that data
integrity is preserved. Concurrency refers to the sharing of resources by multiple interactive
users or application programs at the same time.
The primary reasons why locks are needed are:
Ensure data integrity. Stop one application from accessing or changing a record while
another application has the record locked for its use.
Access to uncommitted data. Application A might update a value in the database,
and application B might read that value before it was committed. If the value of A is not
later committed, but backed out, calculations performed by B are based on
uncommitted (and presumably invalid) data. Of course, you might want to read even
uncommitted data, for example to get a rough count of the number of records of a
particular type without the guarantee of instantaneous precision. You can use the
Uncommitted Read (UR) isolation level to do this; we will see more about this later.
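For example, the isolation level and lock timeout can be set for the current session with the following statements:

db2 SET CURRENT ISOLATION = UR
db2 SET CURRENT LOCK TIMEOUT 30

The first statement requests Uncommitted Read for subsequent dynamic SQL; the second waits up to 30 seconds for a lock before returning an error instead of waiting indefinitely.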
The database manager controls this access to prevent undesirable effects, such as:
Lost updates. Two applications, A and B, might both read the same row from the
database and both calculate new values for one of its columns based on the data these
applications read. If A updates the row with its new value and B then also updates the
row, the update performed by A is lost.
Nonrepeatable reads. Some applications involve the following sequence of events:
application A reads a row from the database, then goes on to process other SQL
requests. In the meantime, application B either modifies or deletes the row and commits
the change. Later, if application A attempts to read the original row again, it receives the
modified row or discovers that the original row has been deleted.
Phantom Read Phenomenon. The phantom read phenomenon occurs when:
1. Your application executes a query that reads a set of rows based on some search
criterion.
2. Another application inserts new data or updates existing data that would satisfy your
application's query.
3. Your application repeats the query from Step 1 (within the same unit of work).
4. Some additional (phantom) rows are returned as part of the result set that were not
returned when the query was initially executed (Step 1).
(Slide: lock hierarchies by table type — a standard table uses a table lock and row
locks; an MDC table uses a table lock, block locks, and row locks; a range-partitioned
table uses a table lock, data partition locks, and row locks.)
Notes:
DB2 will acquire locks on database objects based on the type of access and the isolation
level that is in effect for the connection or statement that accesses data.
Table space level locks are not always held by applications when they access DB2 data.
These are used primarily by DB2 utilities to make sure that two incompatible operations
would not be performed at the same time. For example, an online BACKUP will not run at
the same time a LOAD utility is processing a table in the same table space.
For standard DB2 tables, DB2 will acquire a lock at the table level based on the type of
access. A SELECT statement would need to acquire a lock that allows read access. An
UPDATE statement would acquire a table lock that permits write access.
In most cases, DB2 will acquire read and write locks at the row level, to allow many
applications to share access to the table. In some cases, based on the isolation level and
the amount of data that would be accessed, DB2 might determine that it is more efficient to
acquire a single table level lock and bypass row locking.
For Multidimensional Clustered (MDC) tables, data is stored and indexed at the block
(extent) level. DB2 can utilize block level locks for these tables to reduce the number of
locks that would be needed to protect an application access. For example, an MDC table
might have dimensions based on date and region columns. If the predicates for the SQL
statement indicate that all of the rows for a date range and selected products will be
retrieved, DB2 can use locks at the block level to avoid building a long list of row locks.
For range-partitioned tables, DB2 can acquire locks at the data partition level to
supplement the table and row locks that might also be used. These data partition locks are
also used for controlling access to the table when a new range is attached or an existing
range is detached.
IN: Intent None
IS: Intention Share
IX: Intention eXclusive
SIX: Share with Intention eXclusive
S: Share
U: Update
X: eXclusive
Z: Super eXclusive
Notes:
The lock modes listed above are used by DB2 at the table level and are defined:
IN Intent None: The lock owner can read any data in the table including uncommitted
data, but cannot update any of it. Other concurrent applications can read or update the
table. No row locks are acquired by the lock owner. Both table spaces and tables can be
locked in this mode.
IS Intention Share: The lock owner can read any data in the locked table if an S lock
can be obtained on the target rows. The lock owner cannot update the data in the table.
Other applications can read or update the table, as long as they are not updating rows
on which the lock owner has an S lock. Both table spaces and tables can be locked in
this mode.
IX Intention Exclusive: The lock owner can read and update data provided that an X
lock can be obtained on rows to be changed, and that a U or S lock can be obtained on
rows to be read. Other concurrent applications can both read and update the table, as
long as they are not reading or updating rows on which the lock owner has an X lock.
Both table spaces and tables can be locked in this mode.
SIX Share with Intention Exclusive: The lock owner can read any data in the table
and change rows in the table provided that it can obtain an X lock on the target rows for
change. Row locks are not obtained for reading. Other concurrent applications can read
the table. Only a table object can be locked in this mode. The SIX table lock is a special
case. It is obtained if an application possesses an IX lock on a table and requests an S
lock, or vice versa. The result of lock conversion in these cases is the SIX lock.
S Share: The lock owner and all concurrent applications can read but not update any
data in the table and will not obtain row locks. Tables can be locked in this mode.
U Update: The lock owner can read any data in the table and can change data if an X
lock on the table can be obtained. No row locks are obtained. This type of lock might be
obtained if an application issues a SELECT statement with a FOR UPDATE clause. Other units of work can read
the data in the locked object, but cannot attempt to update it. Tables can be locked in
this mode.
X Exclusive: The lock owner can read or update any data in the table. Row locks are
not obtained. Only uncommitted read applications can access the locked object. Tables
can be locked in this mode.
Z Super Exclusive: This lock is acquired on a table in certain conditions, such as
when the table is altered or dropped, or for some types of table reorganization. No other
concurrent application can read or update the table. Tables and table spaces can be
locked in this mode. No row locks are obtained.
The modes IS, IX, and SIX are used at the table level to SUPPORT row locks. They permit
row-level locking while preventing more exclusive locks on the table by other applications.
The following examples are used to further clarify the lock modes of IS, IX, and SIX:
An application obtains an IS lock on a table. That application might acquire a lock on a
row for read only. Other applications can also READ the same row. In addition, other
applications can CHANGE data on other rows in the table.
An application obtains an IX lock on a table. That application might acquire a lock on a
row for change. Other applications can READ/CHANGE data on other* rows in the
table.
An application obtains an SIX lock on a table. That application might acquire a lock on a
row for change. Other applications can ONLY READ other* rows in the table.
The modes of S, U, X, and Z are used at the table level to enforce the strict table locking
strategy. No row-level locking is used by applications that possess one of these modes.
The following examples are used to further clarify the lock modes of S, U, X, and Z:
An application obtains an S lock on a table. That application can read any data in that
table. It will allow other applications to obtain locks that support read-only requests for
any data in the entire table. No application can CHANGE any data in the table until the
S lock is released.
An application obtains a U lock on a table. That application can read any data in that
table, and might eventually change data in that table by obtaining an X lock. Other
applications can only READ data in the table.
9-8
CL213G
Srinivas Kondaveeti
8/24/15
skondaveeti@massmutual.com
8/27/15
TP-051240
Virtual Eastern
TR-214053
99999
V5.4
Student Notebook
Uempty
An application obtains an X lock on a table. That application can read and change any
or all of the data in the table. No other application can access data in the entire table for
READ* or CHANGE.
An application obtains a Z lock on a table. That application can read and change any or
all of the data in the table. No other application can access data in the entire table for
READ or CHANGE.
The mode of IN is used at the table to permit the concept of Uncommitted Read. An
application using this lock will not obtain row-level locks.
* Denotes an exception to a given application scenario. Applications that use Uncommitted
Read can read rows that have been changed. More details regarding Uncommitted Read
are provided later in this unit.
Note
Some of the lock modes discussed are also available at the table space level. For
example, an IS lock at the table space level supports an IS or S lock at the table level.
However, further details regarding table space locking are not the focus of this unit.
9-9
CL213G
Srinivas Kondaveeti
8/24/15
skondaveeti@massmutual.com
8/27/15
TP-051240
Virtual Eastern
TR-214053
99999
Student Notebook
Row lock mode                      Minimum table lock required*
S  (Share)                         IS
U  (Update)                        IX
X  (eXclusive)                     IX
W  (Weak exclusive)                IX
NS (Next key Share)                IS
NW (Next key Weak exclusive)       IX

No row locks are used with the strict table locks S, U, X, or Z.
Notes:
The above modes are for row locks. The definitions are similar to the definitions for the
corresponding table locks, except that the object of the lock is a row.
S Share: The row is being READ by one application and is available for READ ONLY
by other applications.
U Update: The row is being READ by one application but is possibly to be changed
by that application. The row is available for READ ONLY by other applications. The
major difference between the U lock and the S lock is the INTENT TO UPDATE. The U
lock will support cursors that are opened with the FOR UPDATE OF clause. Only one
application can possess a U lock on a row.
X Exclusive: The row is being changed by one application and is not available for
other applications, except those that permit Uncommitted Read.
W Weak Exclusive: This lock is acquired on the row when a row is inserted into a
non-catalog table and a duplicate key for a unique index is encountered. The lock
owner can change the locked row. This lock is similar to an X lock except that it is
compatible with the NW lock.
NS Next Key Share: The lock owner and all concurrent applications can read, but not
change, the locked row. Only individual rows can be locked in NS mode. This lock is
acquired in place of a share (S) lock on data that is read with the RS or CS isolation
levels.
NW Next Key Weak Exclusive: This lock is acquired on the next row when a row is
inserted into the index of a non-catalog table. The lock owner can read, but not change,
the locked row. This is similar to X and NX locks, except that it is compatible with the W
and NS locks.
Row locks are only requested by applications that have supporting locks at the table level.
These supporting locks are the INTENT locks: IS, IX, and SIX.
* Denotes the least restrictive lock necessary. However, this does not imply that the table
lock listed is the only table lock that supports the row lock listed. For example, an
application that possesses an IX table lock could possess S, U, or X locks on rows.
Likewise, an application that possesses a SIX table lock could possess X locks on rows.
Table Locks

LOCK A                       MODE OF LOCK B
MODE      IN    IS    S     IX    SIX   U     X     Z
IN        YES   YES   YES   YES   YES   YES   YES   NO
IS        YES   YES   YES   YES   YES   YES   NO    NO
S         YES   YES   YES   NO    NO    YES   NO    NO
IX        YES   YES   NO    YES   NO    NO    NO    NO
SIX       YES   YES   NO    NO    NO    NO    NO    NO
U         YES   YES   YES   NO    NO    NO    NO    NO
X         YES   NO    NO    NO    NO    NO    NO    NO
Z         NO    NO    NO    NO    NO    NO    NO    NO

Row Locks

LOCK A              MODE OF LOCK B
MODE      S     U     X     W     NS    NW
S         YES   YES   NO    NO    YES   NO
U         YES   NO    NO    NO    YES   NO
X         NO    NO    NO    NO    NO    NO
W         NO    NO    NO    NO    NO    YES
NS        YES   YES   NO    NO    YES   YES
NW        NO    NO    NO    YES   YES   NO
Notes:
The symbols A and B in the above diagrams are used to represent two different
applications. The chart regarding table locks can be used to determine if the two
applications can run concurrently if they are requesting access to the same table with a
given lock mode.
For example, if application A obtains an IS lock against a given table, application B could
obtain an IN, IS, S, IX, SIX, or U lock against the same table at the same time. However, an
X or Z lock would not be permitted at the same time.
This particular example illustrates the concept of the IS lock acting as a supporting lock for
a lower level of locking. The only table locks that are not compatible are the X and Z locks,
which would require exclusive use of the table. The presence of the IS lock indicates that a
lower level of locking is required for this table, and the X or Z lock request is not given.
Study of the chart simply reinforces the definitions of table and row lock modes presented
on the previous two pages. Review the row for IX under application A. Assume that
application A obtains an IX lock on the table Y. This lock indicates that the application
intends to obtain locks to support change at the row level. The application will allow other
rows to be read and updated, but will prevent access to the target rows (with the exception
of Uncommitted Read applications). Examine each of the possible competing table locks
that application B might request:
IN: No row lock intention. This lock is compatible. There will be no contention since
application B is requesting Uncommitted Read. Even rows changed and not committed
by application A are available. (The Z lock is the only mode that is not compatible with
IN.)
IS: Intent to lock for read only at the row level. This lock is compatible. There might be
contention at the row level if application A is changing the same row that application B
wants to read. The Row Locks table would need to be examined: if application A has
acquired an X or a W lock on the row that application B is attempting to read, then
application B will need to wait. Otherwise, the two applications can proceed with
concurrency.
S: Share lock at the table level. This lock is NOT compatible, since the S lock states
that the entire table is available for READ ONLY by the application possessing the lock
and all other applications. The IX lock states an intent to change data at the row level,
which contradicts the requirement for READ ONLY. Therefore, application B could not
obtain the S lock.
IX: Intent to lock for change at the row level. This lock is compatible. There might be
contention at the row level if application A is changing the same row that application B
wants to change. The Row Locks table would need to be examined: if application A has
acquired an X or a W lock on the row that application B is attempting to change, then
application B will need to wait. Otherwise, the two applications can proceed with
concurrency.
SIX: The SIX lock states that a lock request for changing data might be required at the
row level for the application possessing the lock. In addition, the rest of the table is
available to READ ONLY applications. The IX lock implies change at the row level as
well. Application B could not obtain the SIX lock on the table because of the S
characteristic of the SIX lock, which is not compatible with the IX lock already held by
application A.
U: Read with intent to update. This table level lock states that the application
possessing the lock might read any data, and might potentially exchange the U lock for
an X lock. However, until this exchange is done, other applications can obtain locks
supporting READ ONLY. Application B would NOT be able to obtain the U lock at the
same time that application A possessed an IX lock on the same table.
X: The application possessing this mode of lock on the table requires exclusive use of
the table. No other access, with the exception of Uncommitted Read applications, is
permitted. The IX lock possessed by application A would prevent application B from
obtaining an X lock.
Z: The application possessing this mode of lock excludes all other access to the table,
including Uncommitted Read applications. Since application A has obtained an
incompatible lock (IX), application B would not be able to obtain the Z lock at the same
time.
The same type of statements could be logically derived for the other rows in the chart.
Copyright IBM Corp. 1999, 2012
Many different applications could have compatible locks on the same object. For example,
ten transactions might have IS locks on a table, and five different transactions might have
IX locks on the same table. There is no concurrency problem at the table level in such a
scenario. However, there might be lock contention at the row level:
The basic concept of the row lock matrix is that rows being READ by an application can
be READ by other applications, and that rows being changed by an application are not
available to other applications that use row locking.
Note that the U row lock is not compatible with another U row lock. Only one application
can read a row with the INTENT TO UPDATE. This U lock reduces the number of
deadlocks that occur when applications perform updates and deletes via cursors. When a
row is FETCHED using a cursor declared ...FOR UPDATE OF..., the U row lock is used.
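As a sketch of this behavior, a cursor declared with the FOR UPDATE OF clause acquires a U row lock on each fetched row; the table, columns, and host variables below are hypothetical and only illustrate the pattern:

```sql
-- Hypothetical EMP table; embedded-SQL style host variables shown for illustration.
DECLARE c1 CURSOR FOR
  SELECT empno, salary
  FROM emp
  FOR UPDATE OF salary;          -- signals intent to update: U row locks on FETCH

OPEN c1;
FETCH c1 INTO :empno, :salary;   -- U row lock acquired on the fetched row
UPDATE emp SET salary = salary * 1.05
  WHERE CURRENT OF c1;           -- U lock converted to X for the change
```

Because only one application at a time can hold the U lock on a given row, two such cursors cannot both read the row "with intent to update" and then deadlock trying to upgrade to X.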
Notes:
The isolation level that is associated with an application process determines the degree to
which the data that is being accessed by that process is locked or isolated from other
concurrently executing processes. The isolation level is in effect for the duration of a unit of
work.
The isolation level of an application process therefore specifies:
The degree to which rows that are read or updated by the application are available to
other concurrently executing application processes
The degree to which the update activity of other concurrently executing application
processes can affect the application
The isolation level for static SQL statements is specified as an attribute of a package
and applies to the application processes that use that package. The isolation level is
specified during the program preparation process by setting the ISOLATION bind or
precompile option.
For dynamic SQL statements, the default isolation level is the isolation level that was
specified for the package preparing the statement. Use the SET CURRENT
ISOLATION statement to specify a different isolation level for dynamic SQL statements
that are issued within a session. For more information, see "CURRENT ISOLATION
special register".
For both static SQL statements and dynamic SQL statements, the isolation-clause in a
select-statement overrides both the special register (if set) and the bind option value.
For more information, see "Select-statement".
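As a hedged sketch of these three levels of control (the table name is hypothetical):

```sql
-- Bind/precompile option for static SQL, e.g. from the CLP:
--   db2 bind program.bnd ISOLATION RS

-- Special register for dynamic SQL in the current session:
SET CURRENT ISOLATION = RS;

-- isolation-clause on an individual statement overrides both:
SELECT COUNT(*) FROM sales WITH UR;
```

The statement-level clause is the most specific and wins; the special register applies only to dynamic SQL; the bind option is the default for everything in the package.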
Isolation levels are enforced by locks, and the type of lock that is used limits or prevents
access to the data by concurrent application processes.
Declared temporary tables and their rows cannot be locked because they are only
accessible to the application that declared them.
Beginning with DB2 9.7, the database configuration option CUR_COMMIT can be used to
specify the type of locking performed when read-only access is required with the Cursor
Stability isolation level. The traditional DB2 locking for CS isolation is to acquire a single
row-level read lock on the current row being accessed. With the currently committed
method, these row locks are not acquired. If DB2 detects that a row it needs to access
contains an uncommitted change, the row data from before the change is retrieved and
returned instead.
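The option is changed with an UPDATE DATABASE CONFIGURATION command; the database name below is a placeholder:

```
db2 update db cfg for sample using CUR_COMMIT ON
db2 get db cfg for sample
```

The valid settings for CUR_COMMIT are ON, AVAILABLE, and DISABLED.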
DB2 Isolation            ANSI Isolation               Dirty      Dirty      Fuzzy      Phantom
                                                      Write      Read       Read       Read
Uncommitted Read (UR)    Read Uncommitted (Level 0)   Prevented  Allowed    Allowed    Allowed
Cursor Stability (CS)    Read Committed (Level 1)     Prevented  Prevented  Allowed    Allowed
Read Stability (RS)      Repeatable Read (Level 2)    Prevented  Prevented  Prevented  Allowed
Repeatable Read (RR)     Serializable (Level 3)       Prevented  Prevented  Prevented  Prevented

Figure 9-8. DB2 and ANSI isolation levels: How anomalies are allowed or prevented
Notes:
The DB2 isolation levels can be used to control the anomalies that an application could
experience.
Dirty Write: Lost updates. Since exclusive locks are used for all updates, regardless of
isolation level, DB2 will always prevent a second application from changing a row that
contains an uncommitted change.
Dirty Read: Access to uncommitted data. Only the UR isolation level allows applications to
access an uncommitted change in the database. All other isolation levels prevent dirty
reads.
Non-repeatable reads: Both RS and RR isolation levels hold read locks on all rows
retrieved until the transaction ends, so non-repeatable reads would be prevented.
Phantom Read Phenomenon: Only RR isolation level acquires the locks necessary to
prevent phantom reads from occurring.
Isolation level                    Locking used for read-only access                Access allowed to
                                                                                   uncommitted changes
Uncommitted Read                   IN table lock                                    Yes
                                   No row locks for read-only access
                                   Uncommitted rows accessed from buffer pool

Cursor Stability                   IS table lock                                    No
(using Currently Committed)        No row locks for read-only access
                                   Old version of rows with uncommitted changes
                                   read from log records

Cursor Stability                   IS table lock                                    No
(not using Currently Committed)    NS row lock held on current row in result set
                                   Lock wait used to delay access to uncommitted
                                   changes in buffer pool

Read Stability                     IS table lock                                    No
                                   NS row locks held on result set until commit
                                   Lock wait used to delay access to uncommitted
                                   changes in buffer pool

Repeatable Read                    IS table lock                                    No
                                   S row locks held on all rows accessed until
                                   commit
                                   Lock wait used to delay access to uncommitted
                                   changes in buffer pool
Notes:
The types of table and row locks used for read-only access will vary depending on the
isolation level that is in effect for the statement.
For Uncommitted Read (UR), DB2 will acquire the Intent None (IN) lock for the table and
will not acquire any row-level locks. In this mode, an uncommitted change may be read by
the application.
For Cursor Stability (CS), the locking performed will depend on whether the currently
committed mode is being used.
With currently committed on, DB2 will acquire the Intent Share (IS) lock for the table
and will not acquire any row level locks. If DB2 finds a row that has an uncommitted
change, the previous data will be read and returned.
With currently committed off, DB2 will acquire the Intent Share (IS) lock for the table
and will acquire an NS row lock for the current row in the result set. The lock is released
when the next row is accessed. If DB2 finds a row that has an uncommitted change, a
lock wait condition will occur.
For Read Stability (RS), DB2 will acquire the Intent Share (IS) lock for the table and will
acquire an NS row lock on each row in the result set. These row locks will not be released
until the transaction is committed or rolled back. If DB2 finds a row that has an
uncommitted change, a lock wait condition will occur.
For Repeatable Read (RR), DB2 will acquire the Intent Share (IS) lock for the table and will
acquire an S row lock on any row that is accessed to produce the result. In some cases
DB2 might process a very large number of rows to produce a relatively small result, and in
this mode a large number of row locks would be needed. These row locks will not be
released until the transaction is committed or rolled back. If DB2 finds a row that has an
uncommitted change, a lock wait condition will occur.
[Figure: Currently committed data access. Application A has read row 100 and made
uncommitted updates to rows 200 and 201; the new versions are in the buffer pool, and the
old versions are recorded in log records in the log buffer and log files. Application B, a
read-only application, reads committed rows normally and uses a log reader to retrieve the
committed versions of rows with uncommitted changes when needed. Page cleaners and
I/O servers move pages between the buffer pools and the table space containers, while the
log writer writes log records to disk.]
Notes:
When an application makes changes to a row, that change is reflected immediately in the
data page that is in the buffer pool. The change is recorded in log records that are placed in
the log buffer in memory and then written to a log file.
With the currently committed option ON, DB2 handles the locking and data access for
read-only requests using cursor stability isolation differently.
In the visual, two applications are accessing a DB2 database.
Application A has performed some reads (row 100) and has also made several changes
(rows 200 and 201), and has not committed those changes. The log record containing the
change for row 200 is still in the log buffer, but the change to row 201 has already been
written to a log file on disk.
Application B is running under cursor stability isolation and the currently committed option
is ON. Application B needs to read and return the data from rows 101, 200, 201, and 300.
The two rows 101 and 300 are currently in the buffer pool and contain no uncommitted
change, so those can be returned without using any row lock. Since the versions of the rows
200 and 201 that are in the buffer pool contain changes that are not yet committed, DB2
cannot allow application B to see those changes under cursor stability rules. Rather than
wait for the changes to be committed or rolled back, DB2 will access the previous version
of the row written in the log records and return that data to application B. No row locks will
be needed for these rows either.
The advantage is the performance gain from avoiding a lock wait and also from reducing
the need for locking memory.
This mode does require that the full previous version of a row is included in the log record
which might increase the amount of information logged. In some cases the increased
logging could impact overall database performance.
For supporting the currently committed locking option, DB2 will only access information in
the log buffer or from an active log file on disk. DB2 will not retrieve any archived log files to
support an application using currently committed mode and will switch to acquire locks
instead.
Database Manager determined strategy: IS -> S, IX -> X

Application issues:
  LOCK TABLE name IN SHARE MODE
or
  LOCK TABLE name IN EXCLUSIVE MODE

Administrator alters table:
  ALTER TABLE tbl LOCKSIZE {TABLE | ROW}

Copyright IBM Corporation 2012
Notes:
Isolation level and access strategy are factors that affect the database manager when it
determines the locking strategy to use when reading or manipulating data. Generally,
intent locks at the table level, and row locking, are used to support transaction-oriented
applications.
However, the use of intent locking might not be appropriate for a given application.
The LOCK TABLE statement provides the application programmer with the flexibility to lock
a table at a more restrictive mode than requested by the database manager. Only
applications with a need for a more restrictive mode of lock should issue the LOCK TABLE
statement. Such applications could include report programs that must show snapshots of
the data at a given point in time, or data modifying programs that normally do not make
changes to significant portions of a table except during certain periods such as for
month-end processing.
SHARE MODE allows other processes to SELECT data in the TABLE, but does not allow
INSERT, UPDATE, or DELETE operations.
EXCLUSIVE MODE prevents any other processes from performing any operation on the
table, with the exception of Uncommitted Read applications.
Locks obtained via the LOCK TABLE statement are acquired when the statement is
executed. These locks are released by commit or rollback.
Note
The visual is not intended to imply that an application can only request a more restrictive
table lock of the same nature (IS to S / IX to X), although this would be the typical case.
The table can also be altered to indicate the size (granularity) of locks used when the table
is accessed. If LOCKSIZE TABLE is indicated, then the appropriate share or exclusive lock
is acquired on the table, and intent locks (except intent none) are not used. Use of this
value might improve the performance of queries by limiting the number of locks that need
to be acquired. However, concurrency is also reduced since all locks are held over the
complete table.
Even though the intent lock strategy is common for typical transaction-oriented
applications, there are situations when strict table locking will be selected by the database
manager. For example, the isolation level of Repeatable Read combined with an access
strategy of TABLE SCAN or INDEX SCAN with no WHERE clause will be supported with a
strict table lock. If the strict table locking that results causes unacceptable concurrency
problems, the applications using Repeatable Read should be examined to determine if a
different access strategy can be used or if the isolation level can be changed. Repeatable
Read can be logically simulated, although the application code required to do so might
carry a high cost for development, maintenance, or both.
Strict table locking cannot be avoided when issuing DDL against a table or index. When
possible, the database administrator should restrict submission of such statements to
periods of low activity.
In any case, strict table locks determined during the optimization process are externalized
by the Explain function.
Lock escalation

[Figure: Lock escalation. The escalating application holds an IX table lock and many X
row locks. When its row locks exceed the maxlocks percentage of the locklist, or when the
locklist is full because of the locks held by all applications, the application's row locks
are exchanged for a single X table lock.]
Notes:
In order to service as many applications as possible, the database manager provides the
function of lock escalation. This process entails obtaining a table lock and releasing row
locks. The desired effect of the process is to reduce the overall storage requirement for
locks by the database manager. This will enable other applications to obtain locks
requested.
Two database configuration parameters have a direct impact on the process of lock
escalation:
locklist: The number of 4 KB pages allocated in the database global memory for lock
storage. This parameter is configurable online.
The default value for both locklist and maxlocks is AUTOMATIC, so the self-tuning
memory management routines can adjust the size of the locklist and the maxlocks
percentage to match the demands of the current workload.
maxlocks: The percentage of the total locklist permitted by a single application. This
parameter is configurable online.
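Both parameters can be set explicitly or returned to self-tuning with UPDATE DATABASE CONFIGURATION; the database name and values below are placeholders:

```
db2 update db cfg for sample using LOCKLIST 8192 MAXLOCKS 10
db2 update db cfg for sample using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
```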
[Figure: Lock wait. A "lock hog" application holds many row locks while another
application waits for just one of them. The locktimeout parameter limits how long an
application waits for a lock; the default value of -1 means applications wait indefinitely.]
Notes:
Lock waits and timeouts
Lock timeout detection is a database manager feature that prevents applications from
waiting indefinitely for a lock to be released.
For example, a transaction might be waiting for a lock that is held by another user's
application, but the other user has left the workstation without allowing the application to
commit the transaction, which would release the lock. To avoid stalling an application in
such a case, set the locktimeout database configuration parameter to the maximum time
that any application should have to wait to obtain a lock.
Setting this parameter helps to avoid global deadlocks, especially in distributed unit of work
(DUOW) applications. If the time during which a lock request is pending is greater than the
locktimeout value, an error is returned to the requesting application and its transaction is
rolled back. For example, if APPL1 tries to acquire a lock that is already held by APPL2,
APPL1 receives SQLCODE -911 (SQLSTATE 40001) with reason code 68 if the timeout
period expires. The default value for locktimeout is -1, which means that lock timeout
detection is disabled.
For table, row, data partition, and multidimensional clustering (MDC) block locks, an
application can override the locktimeout value by changing the value of the CURRENT
LOCK TIMEOUT special register.
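A sketch of both mechanisms (the database name is hypothetical):

```sql
-- Database-wide maximum wait of 30 seconds (CLP command):
--   db2 update db cfg for sample using LOCKTIMEOUT 30

-- Per-application override for the current connection:
SET CURRENT LOCK TIMEOUT 10;        -- wait at most 10 seconds
SET CURRENT LOCK TIMEOUT NOT WAIT;  -- return an error immediately
SET CURRENT LOCK TIMEOUT NULL;      -- revert to the locktimeout value
```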
Information about lock waits and timeouts can be collected using the CREATE EVENT
MONITOR FOR LOCKING statement.
To log more information about lock-request timeouts in the db2diag log files, set the value
of the diaglevel database manager configuration parameter to 4. The logged information
includes the name of the locked object, the lock mode, and the application that is holding
the lock. The current dynamic SQL or XQuery statement or static package name might also
be logged. A dynamic SQL or XQuery statement is logged only at diaglevel 4.
You can get additional information about lock waits and lock timeouts from the lock wait
information system monitor elements, or from the db.apps_waiting_locks health indicator.
The database configuration option mon_lck_msg_lvl controls the logging of messages to
the administration notification log when lock timeout, deadlock, and lock escalation events
occur.
With the occurrence of lock timeout, deadlock, and lock escalation events, messages
can be logged to the administration notification log by setting this database
configuration parameter to a value appropriate for the level of notification that you want.
The following list outlines the levels of notification that can be set:
0 - Level 0: No notification of lock escalations, deadlocks, and lock timeouts is provided
1 - Level 1: Notification of lock escalations
2 - Level 2: Notification of lock escalations and deadlocks
3 - Level 3: Notification of lock escalations, deadlocks, and lock timeouts
The default level of notification setting for this database configuration parameter is 1.
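For example, to log notifications for lock escalations, deadlocks, and lock timeouts (level 3), with the database name as a placeholder:

```
db2 update db cfg for sample using MON_LCK_MSG_LVL 3
```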
[Figure: Deadlock. Application A holds a lock on the cereal and waits for the milk;
Application B holds a lock on the milk and waits for the cereal. The deadlock checker
wakes at the interval set by dlchktime, 10,000 milliseconds by default.]
Notes:
A deadlock occurs when applications cannot complete a unit of work due to conflicting lock
requirements that cannot be resolved until the unit of work is completed.
The visual illustrates the concept of deadlocks. The unit of work that both application A and
application B need to complete before committing is to get a bowl of cereal with milk. For
the sake of simplicity, assume there is only enough milk and cereal left for a single bowl.
(Another way for the scenario to work is to assume the milk represents a single row and the
cereal represents a single row.)
1. Application A obtains an X lock on the cereal.
2. Application B obtains an X lock on the milk.
3. Application A wants an X lock on the milk but cannot obtain it until application B
commits.
4. Application B wants an X lock on the cereal but cannot obtain it until application A
commits.
Neither application can proceed to a commit point.
Deadlock Detector
Deadlocks are handled by a background process called the deadlock detector. If a
deadlock is detected, a victim is selected, then the victim is automatically rolled back and
returned a negative SQL code (-911) and reason code 2. Rolling back the victim releases
locks and should allow other processes to continue.
The deadlock check interval (DLCHKTIME) defines the frequency at which the database
manager checks for deadlocks among all the applications connected to a database.
- Time_interval_for_checking_deadlock = dlchktime
- Default [Range]: 10,000 (10 seconds) [1,000 - 600,000]
- Unit of measure: milliseconds
dlchktime: Configuration parameter that sets the deadlock check interval. This value is
designated in milliseconds and determines the interval for the asynchronous deadlock
checker to wake up for the database. The valid range of values is 1000 to 600,000
milliseconds. Setting this value high will increase the time that applications will wait before
a deadlock is discovered, but the cost of executing the deadlock checker is saved. If the
value is set low, deadlocks are detected quickly, but a decrease in run-time performance
could be experienced due to checking. The default value corresponds to 10 seconds. This
parameter is configurable online.
select
   substr(lw.hld_application_name,1,10) as "Hold App",
   substr(lw.hld_userid,1,10) as "Holder",
   substr(lw.req_application_name,1,10) as "Wait App",
   substr(lw.req_userid,1,10) as "Waiter",
   lw.lock_mode,
   lw.lock_object_type,
   substr(lw.tabname,1,10) as "TabName",
   substr(lw.tabschema,1,10) as "Schema",
   lw.lock_wait_elapsed_time as "waiting (s)"
from
   sysibmadm.mon_lockwaits lw ;

Hold App   Holder     Wait App   Waiter     LOCK_MODE LOCK_OBJECT_TYPE   TabName    Schema     waiting (s)
---------- ---------- ---------- ---------- --------- ------------------ ---------- ---------- -----------
db2bp      INST461    db2bp      INST461    X         TABLE              HIST1      CLPM               61
Notes:
The MON_LOCKWAITS administrative view returns information about agents working on
behalf of applications that are waiting to obtain locks in the currently connected database. It
is a useful query for identifying locking problems.
The sample query shows the names of the application holding the lock and the application
waiting for it. The table associated with the lock and the type and mode of the lock
causing the wait are shown. The query also shows how long the application has been
waiting for the lock.
APPLICATION  AUTHID    # Locks  Escalations  Lock Timeouts  Deadlocks  Lock Wait Time
-----------  --------  -------  -----------  -------------  ---------  --------------
db2bp        INST461   2        0            0              0          0
db2bp        INST461   2        1            0              0          0
db2bp        INST461   3        0            0              0          209

3 record(s) selected.
Figure 9-16. Using SQL to monitor Lock escalations, deadlocks and timeouts for active connections
Notes:
The MON_GET_CONNECTION table function can be used to monitor current database
connections for locking related issues.
The example query and output include:
The number of locks currently held by the connection
The number of lock escalations performed by the application since its connection
The number of lock timeouts experienced by the application since its connection
The number of deadlocks experienced by the application since its connection
The total lock wait time for each connection
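A query along these lines could produce the output shown above. This is a sketch: the MON_GET_CONNECTION table function takes an application handle and a member number as arguments (NULL and -1 return all connections on all members), and the column names below are taken from its documented result set.

```sql
SELECT VARCHAR(application_name, 10) AS application,
       VARCHAR(system_auth_id, 10)  AS authid,
       num_locks_held,
       lock_escals,
       lock_timeouts,
       deadlocks,
       lock_wait_time
FROM TABLE(MON_GET_CONNECTION(CAST(NULL AS BIGINT), -1)) AS conn;
```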
Lock timeouts
mon_locktimeout - Controls the generation of lock timeout events at the
database level for the Lock Event Monitor.
Collection options for lock events can be set based on DB2 WLM
workload definitions
Notes:
Beginning with DB2 9.7, a LOCKING event monitor can be used to capture descriptive
information about lock events at the time that they occur. The information captured
identifies the key applications involved in the lock contention that resulted in the lock event.
Information is captured for both the lock requestor (the application that received the
deadlock or lock timeout error, or waited for a lock for more than the specified amount of
time) and the current lock owner.
The information collected by the LOCKING Event Monitor can be written in binary format to
an unformatted event table or to a set of standard tables in the database.
The Lock Event Monitor replaces the deprecated deadlock event monitors (CREATE
EVENT MONITOR FOR DEADLOCKS statement and DB2DETAILDEADLOCK) and the
deprecated lock timeout reporting feature (DB2_CAPTURE_LOCKTIMEOUT registry
variable) with a simplified and consistent interface for gathering locking event data, and
adds the ability to capture data on lock waits.
Two steps are required to enable the capturing of lock event data using the Lock Event
Monitor:
9-32 DB2 10 for LUW: Basic Admin for AIX
1. You must create a LOCKING EVENT monitor using the CREATE EVENT MONITOR
FOR LOCKING statement. You provide a name for the monitor.
2. You can collect data at the database level, affecting all DB2 workloads, by setting the
appropriate database configuration parameters:
- mon_lockwait: This parameter controls the generation of lock wait events. Best
practice is to enable lock wait data collection at the workload level.
This can be set to NONE, WITHOUT_HIST, WITH_HISTORY or
HIST_AND_VALUES.
The default is NONE.
- mon_lw_thresh: This parameter controls the amount of time spent in lock wait
before an event for mon_lockwait is generated.
The value is set in microseconds, the default is 5000000 (5 seconds).
- mon_locktimeout: This parameter controls the generation of lock timeout events.
Best practice is to enable lock timeout data collection at the database level if they
are unexpected by the application. Otherwise enable at workload level.
This can be set to NONE, WITHOUT_HIST, WITH_HISTORY or
HIST_AND_VALUES.
The default is NONE.
- mon_deadlock: This parameter controls the generation of deadlock events. Best
practice is to enable deadlock data collection at the database level.
This can be set to NONE, WITHOUT_HIST, WITH_HISTORY or
HIST_AND_VALUES.
The default is WITHOUT_HIST.
The capturing of SQL statement history and input values incurs additional overhead, but
this level of detail is often needed to successfully debug a locking problem.
Notes:
The visual shows a CREATE EVENT MONITOR statement that could be used to define a
new Locking Event Monitor named mon_locks that would write lock event data to a set of
DB2 tables.
The following statement could be used:
create event monitor mon_locks for locking write to table autostart
This same Locking Event Monitor can also be used to collect deadlocks and lock timeouts.
In order to collect information on any application that waits longer than three seconds for a
lock the database configuration options mon_lockwait and mon_lw_thresh could be set.
These can be configured online using the following statements.
db2 update db cfg for salesdb using mon_lockwait with_history
db2 update db cfg for salesdb using mon_lw_thresh 3000000
The visual shows how the SET EVENT MONITOR statement can be used to start and stop
the event monitor data collection.
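For reference, the SET EVENT MONITOR STATE statements referred to by the visual take this form (using the mon_locks monitor defined above):

```sql
SET EVENT MONITOR mon_locks STATE 1;  -- start collecting lock event data
SET EVENT MONITOR mon_locks STATE 0;  -- stop collecting lock event data
```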
Notes:
The data collected using a LOCKING event monitor can be reviewed any time after the
locking event is recorded. This is very useful since many lock related problems occur
sporadically over an extended period of time.
The query uses one of the set of DB2 tables associated with the locking event monitor to
show information about each application involved in lock timeout events.
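As a sketch: for a monitor that writes to tables, DB2 generates target tables whose names combine the logical data group with the monitor name. Assuming the default generated name LOCK_EVENT_MON_LOCKS for the mon_locks monitor, lock timeout events could be listed with a query such as:

```sql
-- table name is an assumption based on the default naming convention
SELECT event_id,
       event_type,
       event_timestamp
FROM lock_event_mon_locks
WHERE event_type = 'LOCKTIMEOUT'
ORDER BY event_timestamp;
```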
Unit summary
Having completed this unit, you should be able to:
Explain why locking is needed
List objects that can be locked
Describe and discuss the various lock modes and their
compatibility
Explain four different levels of data protection
Set isolation level and lock timeout for current activity
Explain lock conversion and escalation
Describe the situation that causes deadlocks
Create a LOCKING EVENT monitor to collect lock related
diagnostics
Set database configuration options to control locking event
capture
Copyright IBM Corporation 2012
Notes:
References
Database Security Guide
Unit objectives
After completing this unit, you should be able to:
Use DB2 access control mechanisms to implement security within the
database
Explain the tasks performed by the SYSADM user, the SECADM user and a
DBADM user
Compare the use of database roles to user groups for security
Describe privileges required for binding and executing an application
package
Describe the difference between explicit privileges and implicit privileges
Use CREATE PERMISSION and CREATE MASK statements to define
row and column access controls
List the methods for implementing encryption for database connections
List the advantages of creating a Trusted Context for a three-tier application
system
Notes:
CONNECT TO sample USER jon USING pwd
Authentication: Identify the user; check the entered user name and password
(is this the right password for JON?).
Authorization: Does JON have the authorization to perform SELECT from MYTABLE?
Notes:
Access to an instance or a database first requires that the user be authenticated. The
authentication type for each instance determines how and where a user will be verified.
The authentication type is stored in the database manager configuration file at the server. It
is initially set when the instance is created. There is one authentication type per instance,
which covers access to that database server and all the databases under its control.
The following authentication types are provided:
SERVER: Specifies that authentication occurs on the server using local operating system
security. If a user ID and password are specified during the connection or attachment
attempt, they are compared to the valid user ID and password combinations at the server
to determine if the user is permitted to access the instance. This is the default security
mechanism.
Note
The server code detects whether a connection is local or remote. For local connections,
when authentication is SERVER, a user ID and password are not required for
authentication to be successful.
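As a sketch, the instance authentication type can be displayed and changed from the CLP; the change takes effect after the instance is restarted:

```shell
# show the current authentication type in the DBM configuration
db2 get dbm cfg | grep -i authentication
# switch the instance to encrypted server authentication
db2 update dbm cfg using AUTHENTICATION SERVER_ENCRYPT
db2stop
db2start
```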
3. The client sends this service ticket to the server via the communication channel (which
might be, for example, TCP/IP).
4. The server validates the client's service ticket. If the client's service ticket is valid, then
the authentication is completed.
It is possible to catalog the databases on the client machine and explicitly specify the
Kerberos authentication type with the server's target principal name. In this way, the first
phase of the connection can be bypassed.
If a user ID and a password are specified, the client will request the ticket-granting ticket for
that user account and use it for authentication.
KRB_SERVER_ENCRYPT: Specifies that the server accepts KERBEROS authentication
or encrypted SERVER authentication schemes. If the client authentication is KERBEROS,
the client is authenticated using the Kerberos security system. If the client authentication is
not KERBEROS, or the Kerberos authentication service is not available, then the system
authentication type is equivalent to SERVER_ENCRYPT.
Note
The DB2 database system provides support for the Kerberos authentication protocol on
AIX, Solaris, Linux IA32 and AMD64, and Windows operating systems. Also, both client
and server machines must either belong to the same Windows domain or belong to
trusted domains. This authentication type should be used when the server supports
Kerberos and some, but not all, of the client machines support Kerberos authentication.
In all cases, DB2 still checks for the following error conditions:
Expired account
Locked account
Invalid user
The SYSADM user could Grant another user SECADM authority for a database
A SECADM user was required to implement Label Based Access Control (LBAC)
SECADM required to define and manage database roles (9.5)
Database level audits can be managed only by a SECADM user (9.5)
Trusted Contexts can be defined by SECADM (9.5)
DB2 9.7 extends SECADM use and limits SYSADM and DBADM:
SECADM user can Grant and Revoke database and object level privileges
The SYSADM user and DBADM user do not automatically have the ability to
Grant and Revoke database and object level privileges
The ACCESSCTRL authority can be granted by a SECADM user to add ability to
GRANT and REVOKE privileges within a database
Notes:
The methods for security management of DB2 LUW databases have changed and
expanded through a series of product releases. Prior to DB2 9, the SYSADM user or users
managed security for a DB2 instance with almost unlimited authority to access data, run
commands and grant privileges to other users. A DBADM authority could be granted for a
particular database, which gave a user broad access to data and the ability to grant
privileges to other users or groups. The DBADM users were not authorized to perform
system administration tasks like stopping the instance or creating databases and table
spaces. The general trend is a movement from instance level authorities with very broad
privileges to database level authorities with more specific task oriented privileges.
The SECADM user authority was introduced with DB2 9.1. The primary task for the
SECADM user was to implement Label based Access Control (LBAC), to provide an
extended level of security for access to sensitive data, independent from the SYSADM and
DBADM users.
The role of the SECADM user was expanded with DB2 9.5 to include the authority to
implement and manage database roles, trusted contexts and also to define and control
database level auditing. At this level the authority to grant database and object access
privileges was still the domain of the SYSADM and DBADM users. The SYSADM user was
the only authority that could grant SECADM authority to another user for a DB2 database.
Authorization for
Administrators before DB2 9.7: Part 1
System Administrators: SYSADM
Defined by SYSADM_GROUP in Instance (DBM) configuration
System-related functions:
Update Instance (DBM) and Database Configurations
Create or Drop a database
Create, alter or drop table spaces
Perform backup and recovery tasks
Monitor performance and handle problem determination (db2pd)
Start and Stop the instance
Full read-write access to all data
Ability to Grant and Revoke database and object level privileges
Required to Grant DBADM and SECADM to other users
SYSCTRL_GROUP and SYSMAINT_GROUP provide subsets of
SYSADM without data access or ability to grant privileges
Database Administrator: DBADM
Full read-write access to all data
Ability to Grant and Revoke database and object level privileges
Create and manage database event monitors
Ability to run data oriented utilities like REORG and RUNSTATS
Notes:
There are some significant changes in the authorizations for administrators implemented in
DB2 9.7.
Prior to DB2 9.7, system administrators, or SYSADM users held a number of critical
privileges at the instance level. The SYSADM user is defined using a named user group,
SYSADM_GROUP, in the instance configuration. A user included in the SYSADM group
could perform the following tasks:
Update the database and instance configurations
Create a new database or drop an existing database
Manage table spaces using the CREATE, ALTER or DROP statements
Run the database recovery commands like BACKUP, RESTORE, ROLLFORWARD
and RECOVER
Monitor instance level activity using GET SNAPSHOT commands and run the db2pd
command
Control the instance including starting (db2start) and stopping (db2stop) commands
Authority to Grant DBADM or SECADM to a user for a database
Having SYSADM authority allows the user to access any table or view. The SYSADM user
could also grant and revoke database or object level privileges for any database in the
instance.
The SYSCTRL_GROUP, SYSMAINT_GROUP and SYSMON_GROUP user groups were
available to permit specific subsets of the SYSADM command authority without allowing
the broad access to data.
The DBADM authority could be granted to a database user or group by the SYSADM user.
The DBADM user could access any data contained in the database and grant and revoke
privileges for that database. The DBADM user could run data oriented utilities like REORG
or RUNSTATS.
The SYSADM and DBADM users were not allowed to perform the tasks that were reserved
for a SECADM user, like managing LBAC security objects and creating database roles.
Label based access controls could be used to block access to secured tables by the
SYSADM and DBADM users.
Authorization for
Administrators before DB2 9.7: Part 2
Security Administrators: SECADM
Introduced with DB2 9.1
Granted by SYSADM user at the database level
Can only be granted to a user, not a role or group
Required to implement and manage new security features:
Label Based Access Control: Extends standard object security
using security labels and policies to restrict access to data rows
or columns of a table
Define and manage database roles
Database level audit management
Define Trusted Contexts for three tiered applications
Can transfer object ownership using TRANSFER OWNERSHIP
Can GRANT SETSESSIONUSER privilege
No implied access to data
No ability to Grant and Revoke database and object level
privileges
Notes:
The database level SECADM user authority, introduced with DB2 9.1 could only be granted
to a user by a SYSADM user.
The SECADM user authority was designed to implement and manage several new security
facilities including:
Label Based Access Control related objects
Creation and management of database roles
Definition and control of database level audit facilities
Trusted Context definitions
Authority to transfer object ownership and permit users to switch between user
identities using a SETSESSIONUSER statement
Prior to DB2 9.7, the SECADM user did not automatically have any authority to grant or
revoke database or object level privileges. So a SECADM could define a database role, but
could not grant access to tables or views to that role. The SECADM user authority did not
imply access to any database tables or views.
Some specialized authorities were available prior to DB2 9.7 like the LOAD authority that
allows a user to run the LOAD utility for a particular database. The SYSMON_GROUP in
the instance configuration could be used to allow a group of users to perform limited
monitor tasks for an instance without the broad authority of a SYSADM user.
Notes:
Beginning with DB2 9.7, system administrators, or SYSADM users, retain some but not all
of the privileges that were available in previous releases at the instance level. A SYSADM
user is defined using a named user group, SYSADM_GROUP, in the instance
configuration. A user included in the SYSADM group can perform the following system
administration tasks:
Update the database and instance configurations
Create a new database or drop an existing database
Manage table spaces using the CREATE, ALTER or DROP statements
Run the database recovery commands like BACKUP, RESTORE, ROLLFORWARD
and RECOVER
Monitor instance level activity using GET SNAPSHOT commands and run the db2pd
command
Control the instance including starting (db2start) and stopping (db2stop) commands
Having SYSADM authority does not automatically allow the user to access data based on
tables or views. The DATAACCESS privilege could be granted by a SECADM user to
permit data access that was inherent in previous releases.
The SYSADM user also does not automatically have the authority to grant and revoke
database or object level privileges for any database in the instance. The ACCESSCTRL
privilege could be granted by a SECADM user to permit granting or revoking privileges that
were inherent in previous releases.
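A minimal sketch of a security administrator restoring these authorities to a system administrator (the user name sysa1 is hypothetical):

```sql
-- issued by a user holding SECADM authority for the database
GRANT DATAACCESS ON DATABASE TO USER sysa1;
GRANT ACCESSCTRL ON DATABASE TO USER sysa1;
```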
With DB2 9.7, a SYSADM user does not have the authority to grant either DBADM or
SECADM authority to another user.
The SYSCTRL_GROUP, SYSMAINT_GROUP and SYSMON_GROUP user groups were
available to permit specific subsets of the SYSADM command authority.
In order to allow a system administrator to create a DB2 database and begin performing
any required task for that database, the SYSADM user that creates a new DB2 database is
initially granted SECADM and DBADM authorities for that database including the
ACCESSCTRL and DATAACCESS authorities.
Notes:
Beginning with DB2 9.7, DBADM authority can only be granted or revoked by the security
administrator (who holds SECADM authority) and can be granted to a user, a group, or a
role. PUBLIC cannot obtain the DBADM authority either directly or indirectly.
DBADM authority is an administrative authority for a specific database. The database
administrator possesses the privileges required to create objects and issue some database
commands. In addition, users with DBADM authority have SELECT privilege on the system
catalog tables and views, and can execute all system-defined DB2 routines, except audit
routines.
Holding the DBADM authority for a database allows a user to perform these actions on that
database:
Create, alter, and drop non-security related database objects
Read log files
Create, activate, and drop event monitors
Query the state of a table space
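A sketch of granting DBADM (user names hypothetical): since DB2 9.7 the DATAACCESS and ACCESSCTRL authorities are included by default when DBADM is granted, but they can be withheld.

```sql
-- issued by a user holding SECADM authority
GRANT DBADM ON DATABASE TO USER dba1;
-- an application DBA without data access or the ability to grant privileges
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL ON DATABASE TO USER dba2;
```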
Notes:
Beginning with DB2 9.7 the SECADM authority is the primary security administration
authority for a specific database. This authority allows you to create and manage
security-related database objects and to grant and revoke all database authorities and
privileges. Additionally, the security administrator can execute, and manage who else can
execute, the audit system routines.
SECADM authority has the ability to SELECT from the catalog tables and catalog views,
but cannot access data stored in user tables.
SECADM authority can be granted only by the security administrator (who holds SECADM
authority) and can be granted to a user, a group, or a role. PUBLIC cannot obtain the
SECADM authority directly or indirectly.
SECADM authority gives a user the ability to perform the following operations:
Create, alter, comment on, and drop:
- Audit policies
- Security label components
- Security policies
- Trusted contexts
- Manage Row and Column access controls
Create, comment on, and drop:
- Roles
- Security labels
Grant and revoke database privileges and authorities
Execute the following audit routines to perform the specified tasks:
- The SYSPROC.AUDIT_ARCHIVE stored procedure and table function archive audit
logs.
- The SYSPROC.AUDIT_LIST_LOGS table function allows you to locate logs of
interest.
- The SYSPROC.AUDIT_DELIM_EXTRACT stored procedure extracts data into
delimited files for analysis. Also, the security administrator can grant and revoke
EXECUTE privilege on these routines, therefore enabling the security administrator
to delegate these tasks, if desired. Only the security administrator can grant
EXECUTE privilege on these routines. EXECUTE privilege WITH GRANT OPTION
cannot be granted for these routines (SQLSTATE 42501).
Use of the AUDIT statement to associate an audit policy with a particular database or
database object at the server
Use of the TRANSFER OWNERSHIP statement to transfer objects not owned by the
authorization ID of the statement
Note
No other authority gives these abilities.
The instance owner does not have SECADM authority by default. The creator of a
database is the initial SECADM user for the new database.
Only the security administrator has the ability to grant other users, groups, or roles the
ACCESSCTRL, DATAACCESS, DBADM, and SECADM authorities.
DATAACCESS:
Authority that allows access to data within a specific database
It can be granted to a user, a group, or a role
Notes:
DB2 9.7 implemented several new administrator authorizations that allow selected
privileges to be granted. These can be used to allow different individuals or groups to
perform certain administrative roles without having additional unnecessary privileges.
ACCESSCTRL authority is the authority required to grant and revoke privileges on objects
within a specific database.
ACCESSCTRL authority has no inherent privilege to access data stored in tables, except
the catalog tables and views.
ACCESSCTRL authority can only be granted by the security administrator (who holds
SECADM authority).
It can be granted to a user, a group, or a role. PUBLIC cannot obtain the ACCESSCTRL
authority either directly or indirectly.
ACCESSCTRL authority gives a user the ability to perform the following operations:
Grant and revoke the following administrative authorities: EXPLAIN, SQLADM and
WLMADM
Grant and revoke privileges on the following database objects:
Global Variable
Index
Nickname
Package
Routine (except audit routines)
Schema
Sequence
Server
Table
Table Space
View
XSR Objects
SELECT privilege on the system catalog tables and views
SQLADM authority gives a user the ability to perform the following functions:
Execution of the following SQL statements:
One
SYSADM
User
Database creator also
Holds SECADM and DBADM
Individual Users with
Distinct Authority
A SYSADM user manages the System
A DBADM user manages the data
A SECADM user manages database security
Groups of Users with Distinct Authorities
Multiple Users defined in SYSADM_GROUP
SECADM granted to multiple users or a role
DBADM granted to multiple users or a role
WITHOUT ACCESSCTRL
Notes:
There are many ways to administer DB2 security for a database.
For simple environments, a single user logon could hold all of the privileges needed to
perform any database related task. That user could be a SYSADM user, performing all
system level tasks. If that user creates a database, the userid will also be automatically
granted SECADM authority for the database and DBADM authority. This would allow the
one user to manage the database system, grant and revoke privileges and access all of the
data in the database.
In some cases, there might be a need to implement separate privileges for individual users.
One user might perform the system-level functions as SYSADM, but not have access to
data or the ability to grant database permissions.
Another user might manage security as the SECADM for that database, granting any
privilege needed to support the applications, but not having access to data.
A third user might work as an application DBA using the DBADM authority to run utilities
and access data but lack the authority to grant privileges or perform system level
commands.
For larger environments, the system administrator (SYSADM), security administrator
(SECADM) and DBADM authorities could be set up using groups or roles, allowing
individuals to be easily assigned and reassigned authorities based on current projects and
responsibilities.
User SECA1 can now revoke the extra privileges from SYSA1
REVOKE SECADM on database from user SYSA1
REVOKE ACCESSCTRL,DATAACCESS on database
from user SYSA1
Notes:
It is possible to separate the database security privileges in a way that allows each person
to perform a portion of the administrative work without having unnecessary privileges.
Assume that a SYSADM user with a logon id of SYSA1 creates a new DB2 database
named TEST1. During database creation DB2 would automatically grant SECADM and
DBADM with ACCESSCTRL and DATAACCESS to the SYSA1 user logon.
If all security administration work for the TEST1 database needs to be performed by
another user SECA1, the user SYSA1, as the initial SECADM user for the database can
grant SECADM to the SECA1 user. At this point either SYSA1 or SECA1 could grant or
revoke any privilege for the TEST1 database.
The user SECA1 can now revoke the database authorities from SYSA1 that are not
needed to perform the system tasks, which could include SECADM, ACCESSCTRL and
DATAACCESS.
The following statements could be used:
REVOKE SECADM ON DATABASE FROM USER SYSA1
REVOKE ACCESSCTRL, DATAACCESS ON DATABASE FROM USER SYSA1
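The grant step described above can be sketched as follows (performed while connected to TEST1 as SYSA1, the initial SECADM for the database):

```sql
-- SYSA1 delegates security administration to SECA1
GRANT SECADM ON DATABASE TO USER seca1;
```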
Copyright IBM Corp. 1999, 2012
Table and view privileges: ALTER, SELECT, INSERT, UPDATE, DELETE, INDEX, REFERENCES
Database privileges: BINDADD, CONNECT, CREATETAB, CREATE_EXTERNAL_ROUTINE, CREATE_NOT_FENCED_ROUTINE, IMPLICIT_SCHEMA, QUIESCE_CONNECT, CREATE_SECURE_OBJECTS
Package privileges: CONTROL, BIND, EXECUTE
Routine, function or method privilege: EXECUTE
Table space privilege: USE
Schema level privileges: ALTERIN, CREATEIN, DROPIN
Notes:
The instance configuration allows groups of users to be designated for SYSADM,
SYSCTRL, SYSMAINT or SYSMON authority. These would apply to any database in the
instance.
The SECADM, ACCESSCTRL, DATAACCESS, DBADM, WLMADM, LOAD, SQLADM and
EXPLAIN privileges can be granted for each database.
The database privileges can also be granted:
BINDADD
CONNECT
CREATETAB
CREATE_EXTERNAL_ROUTINE
CREATE_NOT_FENCED_ROUTINE
IMPLICIT_SCHEMA
QUIESCE_CONNECT
The visual shows the privileges that can be granted for DB2 tables, views, schemas,
packages, routines, functions, methods and table spaces.
GRANT { ALL | ALTER | CONTROL | DELETE | INDEX | INSERT
      | REFERENCES [ ( column-name, ... ) ]
      | SELECT
      | UPDATE [ ( column-name, ... ) ] }
  ON [ TABLE ] table-name | view-name
  TO { [ USER | GROUP | ROLE ] authorization-name | PUBLIC }
Notes:
The visual shows the syntax for granting privileges for tables and views.
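As a sketch following that syntax (the table, column and user names are hypothetical):

```sql
-- column-level UPDATE combined with table-level SELECT for one user
GRANT SELECT, UPDATE (salary) ON TABLE payroll.emp TO USER john;
-- read access for everyone
GRANT SELECT ON payroll.emp_summary TO PUBLIC;
```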
Using
Database Roles compared to Groups for Security
Database Roles
System Groups
Notes:
Roles compared to groups
Privileges and authorities granted to groups are not considered when creating views,
materialized query tables (MQTs), SQL routines, triggers, and packages containing static
SQL. Avoid this restriction by using roles instead of groups.
Roles allow users to create database objects using their privileges acquired through roles,
which are controlled by the DB2 database system. Groups and users are controlled
externally from the DB2 database system, for example, by an operating system or an LDAP
server.
Notes:
The following example shows how a database role could be defined and utilized to manage
security privileges.
1. Mary, a SECADM user, creates a new role developer, using the following statement:
create role developer
2. Next, Mary grants the new developer role to two programmers, John and Carol:
grant role developer to user john, user carol;
3. Mary or a user with ACCESSCTRL can now grant access to some tables or views to the
developer role:
grant all on table dev.sales to role developer;
grant select on table dev.products to role developer;
4. If Carol moves to the software testing department, the security administrator, Mary, can
remove Carol from the developer role:
revoke role developer from user carol;
10-28 DB2 10 for LUW: Basic Admin for AIX
If Mary, who created the role but is not a member of it, tries to access database
objects based on privileges that had been granted to the developer role, those
statements would fail.
Notes:
A schema can be defined whenever it is desirable to group a set of objects. This grouping
might be based on objects for such things as an application, a group of people, or an
individual user. Other controlling mechanisms, such as Roles and Trusted Context will be
covered a bit later in the unit.
The schema is defined explicitly using the CREATE SCHEMA statement. An option
(AUTHORIZATION) on the CREATE SCHEMA statement allows owner specification at the
time the schema is created. A schema is defined to have an owner. The owner of the
schema is given the privilege to create objects using the schema name as the object
qualifier and to drop any objects that are defined in the schema (regardless of definer). The
schema owner also has the privilege to grant the privilege to create objects in the schema
or the privilege to drop objects from the schema to other users. This means that the
schema owner has the ability to manage objects in the schema and might grant similar
ability to manage those objects to others.
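For example, a minimal sketch of this (the schema and user names are hypothetical):

```sql
-- Create a schema explicitly, owned by user PAYADM,
-- who can then manage and drop objects in it
CREATE SCHEMA payroll AUTHORIZATION payadm

-- The owner can pass on the ability to create objects in the schema
GRANT CREATEIN ON SCHEMA payroll TO USER clerk1
```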
When a new database is created, PUBLIC is given IMPLICIT_SCHEMA database
authority. With this authority, any user can create a schema by creating an object and
Uempty
specifying a schema name that does not already exist. SYSIBM becomes the owner of the
implicitly created schema and PUBLIC is given the privilege to create objects in this
schema.
If control of who can implicitly create schema objects is required for the database,
IMPLICIT_SCHEMA database authority should be revoked from PUBLIC. Once this is
done, there are only three ways that a schema object is created:
Any user can create a schema using their own authorization name on a CREATE
SCHEMA statement.
Any user with DBADM authority can explicitly create any schema which does not
already exist, and can optionally specify another user as the owner of the schema.
Any user with DBADM authority has IMPLICIT_SCHEMA database authority
(independent of PUBLIC) so that they can implicitly create a schema with any name at
the time they are creating other database objects.
SYSIBM becomes the owner of the implicitly created schema and PUBLIC has the
privilege to create objects in the schema.
A user always has the ability to explicitly create their own schema using their own
authorization name.
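The revocation described above could be issued as:

```sql
REVOKE IMPLICIT_SCHEMA ON DATABASE FROM PUBLIC
```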
[Figure: A program PGM XX containing the embedded statement EXEC SQL DELETE FROM
PAYROLL WHERE ID = :HOSTV is promoted through unit test, function test, and
promotion into production. A user, BOB, who holds EXECUTE on the package is
contrasted with issuing DELETE FROM PAYROLL directly from the CLP.]
Notes:
An individual can EXECUTE a package without having the authority for the underlying SQL
statements.
Programs can be coded, tested, and installed to perform database tasks on behalf of the
program executor.
The authority to EXECUTE a program which performs some SQL statement does not imply
authority to perform the same SQL in an ad hoc environment, such as CLP or using tools
like IBM Data Studio.
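For example, a user could be allowed to run a packaged program without holding privileges on the underlying table (the package name here is hypothetical):

```sql
-- BOB can run the program's package, but cannot issue the same
-- DELETE statement ad hoc from the CLP
GRANT EXECUTE ON PACKAGE payroll.pgmxx TO USER bob
```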
Notes:
Default privileges granted on creating a database
When you create a database, default database level authorities and default object level
privileges are granted to you within that database.
The authorities and privileges that you are granted are listed according to the system
catalog views where they are recorded:
SYSCAT.DBAUTH
The database creator is granted the following authorities:
ACCESSCTRL
DATAACCESS
DBADM
SECADM
In a non-restrictive database, the special group PUBLIC is granted the following authorities:
CREATETAB
BINDADD
CONNECT
IMPLICIT_SCHEMA
EXECUTE with GRANT on all functions and procedures in schemas SYSPROC and
SQLJ
BIND on all packages created in the NULLID schema
EXECUTE on all packages created in the NULLID schema
CREATEIN on schema SQLJ and NULLID
USE on table space USERSPACE1
SELECT access to the SYSIBM,SYSCAT and SYSSTAT catalog tables and views
UPDATE access to the SYSSTAT catalog views
USAGE privilege on SYSDEFAULTUSERWORKLOAD for workload management
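These recorded authorities can be inspected in the catalog. For example, a query along these lines could be used (the column list is a sketch of SYSCAT.DBAUTH, not an exhaustive one):

```sql
SELECT grantee, granteetype, dbadmauth, securityadmauth,
       dataaccessauth, accessctrlauth
FROM syscat.dbauth
```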
Notes:
Beginning with DB2 9.7, the DB2 authorization model has been updated to clearly separate
the duties of the system administrator, the database administrator, and the security
administrator. As part of this enhancement, the abilities given by the DBADM authority
have changed.
In releases prior to DB2 9.7, DBADM authority automatically included the ability to access
data and to grant and revoke privileges for a database. In DB2 9.7, these abilities are given
by the new authorities, DATAACCESS and ACCESSCTRL.
Also, in releases prior to Version 9.7, granting DBADM authority automatically granted the
following authorities:
BINDADD
CONNECT
CREATETAB
CREATE_EXTERNAL_ROUTINE
CREATE_NOT_FENCED_ROUTINE
IMPLICIT_SCHEMA
QUIESCE_CONNECT
LOAD
Before DB2 9.7, when DBADM authority was revoked, these authorities were not revoked.
These authorities are now part of DBADM authority.
Now, when DBADM authority is revoked, these authorities are lost.
When a user creates a table, DB2 automatically grants the CONTROL privilege to the creator.
When a user creates a view, DB2 grants only those privileges on the view that the creator
holds on the base tables. If a user holds only the SELECT privilege on a table and creates a
view based on that table, the user is still limited to SELECT access through that view.
Notes:
The visual lists the system catalog views that contain information about privileges granted
on various database objects.
If you do not want any user to be able to know what objects other users have access to,
you should consider restricting access to these catalog views.
Because the system catalog views describe every object in the database, you might want to
restrict access to them if you have sensitive data.
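For example, read access to an individual catalog view could be restricted with a statement along these lines:

```sql
-- Remove the default PUBLIC read access from one catalog view
REVOKE SELECT ON SYSCAT.DBAUTH FROM PUBLIC
```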
The following authorities have SELECT privilege on all catalog tables:
ACCESSCTRL
DATAACCESS
DBADM
SECADM
SQLADM
In addition, the following instance level authorities have the ability to select from
SYSCAT.BUFFERPOOLS, SYSCAT.DBPARTITIONGROUPS,
SYSCAT.DBPARTITIONGROUPDEF, SYSCAT.PACKAGES, and SYSCAT.TABLES:
SYSADM
SYSCTRL
SYSMAINT
SYSMON
Column mask
A column mask is a database object that expresses a column access control rule
for a specific column in a table.
A column access control rule is an SQL CASE expression that describes what
column values a user is permitted to see and under what conditions.
Notes:
Row and column access control (RCAC) overview
DB2 Version 10.1 introduces row and column access control (RCAC), as an additional
layer of data security. Row and column access control is sometimes referred to as
fine-grained access control or FGAC. RCAC controls access to a table at the row level,
column level, or both. RCAC can be used to complement the table privileges model.
To comply with various government regulations, you might implement procedures and
methods to ensure that information is adequately protected. Individuals in your
organization are permitted access to only the subset of data that is required to perform their
job tasks. For example, government regulations in your area might state that a doctor is
authorized to view the medical records of their own patients, but not of other patients. The
same regulations might also state that, unless a patient gives their consent, a healthcare
provider is not permitted access to patient personal information, such as the patient's home
phone number.
You can use row and column access control to ensure that your users have access to only
the data that is required for their work. For example, a hospital system running DB2 for
Linux, UNIX, and Windows can use RCAC to filter patient information and data to include only
that data which a particular doctor requires. Other patients do not exist as far as the doctor
is concerned. Similarly, when a patient service representative queries the patient table at
the same hospital, they are able to view the patient name and telephone number columns,
but the medical history column is masked for them. If data is masked, a NULL or an
alternate value is displayed instead of the actual medical history.
CREATE PERMISSION permission-name ON table-name [ correlation-name ]
  FOR ROWS WHERE search-condition
  ENFORCED FOR ALL ACCESS
  [ DISABLE | ENABLE ]        (DISABLE is the default)

A row permission specifies a search condition under which rows of the table can be
accessed.
A row that does not match a permission will be excluded from the results.
Permissions can be enabled or disabled using ALTER PERMISSION statements.
For a PERMISSION to take effect, you must use ALTER TABLE with the ACTIVATE
ROW ACCESS CONTROL clause.
Notes:
Authorization needed to create a PERMISSION object
The privileges held by the authorization ID of the statement must include SECADM
authority. SECADM authority can create a row permission in any schema. Additional
privileges are not needed to reference other objects in the permission definition. For
example, the SELECT privilege is not needed to retrieve from a table, and the EXECUTE
privilege is not needed to call a user-defined function.
The permission includes a search condition that specifies a condition that can be true or
false for a row of the table. This follows the same rules used by the search condition in a
WHERE clause of a subselect query. In addition, the search condition must not reference
any of the following objects or elements (SQLSTATE 428HB):
A created global temporary table or a declared global temporary table.
A nickname.
A table function.
A method.
Notes:
We will describe implementing row security for a Health-care scenario. We will have four
categories of individuals for which we will want to enforce certain access rules.
We will create the three access rules that are listed in the visual:
1. Patients will have access to their own personal information.
2. Physicians will have access to information for their assigned patients.
3. Some hospital workers (membership officers, the accounting department, and drug
researchers) need access to information about all patients.
Once we implement PERMISSION objects to provide these three types of access, all other
database users will be blocked from access to the table(s).
Notes:
These are the required definitions for the three rules described on the previous page for our
Health-care scenario. Note that we can define all the rules in a single CREATE statement.
In each case we verify the user role; in the case of the patient, the data must match the
session user ID of the patient. For a physician, the comparison allows the physician to see
only their own patients. The third section allows access for the three roles defined for the
other hospital workers.
The PERMISSION object will affect ALL DML statements, not just SELECT statements.
The last step in implementing the row access control is to ALTER the table to ACTIVATE
the row access control.
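The visual itself is not reproduced here, but the definition could look something like the following sketch (the table, column, permission, and role names are assumptions based on this scenario, not the course's exact SQL):

```sql
CREATE PERMISSION row_access ON patient
  FOR ROWS WHERE
        (VERIFY_ROLE_FOR_USER(SESSION_USER, 'PATIENT') = 1
         AND patient.userid = SESSION_USER)
     OR (VERIFY_ROLE_FOR_USER(SESSION_USER, 'PCP') = 1
         AND patient.pcp_id = SESSION_USER)
     OR VERIFY_ROLE_FOR_USER(SESSION_USER, 'MEMBERSHIP') = 1
     OR VERIFY_ROLE_FOR_USER(SESSION_USER, 'ACCOUNTING') = 1
     OR VERIFY_ROLE_FOR_USER(SESSION_USER, 'DRUG_RESEARCH') = 1
  ENFORCED FOR ALL ACCESS
  ENABLE

-- The permission has no effect until row access control is activated
ALTER TABLE patient ACTIVATE ROW ACCESS CONTROL
```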
SIN  USERID  NAME  ADDRESS    PHARMACY      ACCT_BALANCE  PCP_ID
...  MAX     Max   First St.  hypertension  89.70         LEE
...  MIKE    Mike  Long St.   diabetics     8.30          JAMES
...  SAM     Sam   Big St.    codeine       12.50         LEE
...  DOUG    Doug  Good St.   influenza     7.68          JAMES
...  BOB     Bob              hypertension  9.00          LEE
Notes:
Using the previous definition, we will allow Dr. Lee to update data in the table for patients
who are his patients.
Sam is a patient of Dr. Lee; thus, Dr. Lee can access Sam's data. In this case he can
update column data in the row associated with patient SAM, so the UPDATE statement
succeeds.
Notes:
Since Doug is not a patient of Dr. Lee, the PERMISSION we defined does not allow Dr. Lee
to access Doug's data.
Note that in this case the result implies to Dr. Lee that there is no row for patient DOUG.
This is consistent with Dr. Lee being able to see or know about only his own patients. We do
not indicate that a patient called DOUG exists but that Dr. Lee lacks access; Dr. Lee simply
sees that there is NO patient called DOUG.
This is important for ensuring data is only visible to those with appropriate access rights.
ACTIVATE column access control
Notes:
We will now discuss how you would implement the column access control. Remember that
RCAC provides both column access and row access controls. Column access control is
implemented in the form of a mask, or lack thereof, on the data.
This slide shows how we would define the column access control. Note that we have three
steps similar to those for defining row access control:
- First, we create the mask, using CASE expressions to determine the result
- Second, we ENABLE the mask
- Third (and again important, as it was with row permissions), we must ACTIVATE column
access control on the table involved
Note that the ability for someone to update a masked column is based not on the presence
of a mask, but on the presence of any row permission. Someone can update column data
which is masked, even if they cannot see the masked data, as long as they have update
permission on the table and the ability to see the row.
Notes:
Using our Health-care scenario as the base, we will implement column access control rules
in two forms:
We MASK the account balance column
- Only the ACCOUNTING team can see the account balance in the table
- All others see a balance of zero
We MASK the SIN column (Social Insurance Number column)
- Only the PATIENT themselves can see the full Social Insurance number
- All others see only the last three digits of the number
Notes:
We use the CREATE MASK statements to provide the column based security.
The first mask allows the ACCOUNTING role to see the account balance. All others see a
balance of zero.
The second MASK allows only the patient to see the full SIN number; other users see the
last three characters of data in the column, with a series of X characters as a prefix.
Both MASK definitions verify the role of the user executing the statement. The case
expressions determine the resulting column data that will be returned by DB2 to the user
executing the query.
Each mask is enabled separately, and we must ACTIVATE column access control on the
table before the MASK objects are used by the database.
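A sketch of what the two MASK definitions could look like (the object names, the zero default, and the SIN substring positions are assumptions based on this scenario, not the course's exact SQL):

```sql
-- Only the ACCOUNTING role sees the real balance; all others see zero
CREATE MASK balance_mask ON patient
  FOR COLUMN acct_balance RETURN
    CASE WHEN VERIFY_ROLE_FOR_USER(SESSION_USER, 'ACCOUNTING') = 1
         THEN acct_balance
         ELSE 0.00
    END
  ENABLE

-- Only the patient sees the full SIN; others see X characters
-- followed by the last three characters
CREATE MASK sin_mask ON patient
  FOR COLUMN sin RETURN
    CASE WHEN VERIFY_ROLE_FOR_USER(SESSION_USER, 'PATIENT') = 1
         THEN sin
         ELSE 'XXXXXX' || SUBSTR(sin, LENGTH(sin) - 2, 3)
    END
  ENABLE

-- Neither mask is applied until column access control is activated
ALTER TABLE patient ACTIVATE COLUMN ACCESS CONTROL
```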
Notes:
In this scenario we have a user who is part of the Accounting team executing an inquiry
against the table. Since John is in accounting, the result set will show him all the correct
balances. However, John will not be able to see the full SIN numbers of the patients.
This result matches the column access rules we defined on the previous page.
Do not forget, however, that we also defined the ROW ACCESS controls for this table
earlier. At that time we indicated that anyone in accounting would be able to see ALL ROWS.
This also matches the result that John sees.
This example provides a result based on BOTH row access and column access controls.
We will use this example as we move forward.
USERID  NAME  ADDRESS    PHARMACY      ACCT_BALANCE  PCP_ID
MAX     Max   First St.  hypertension  0.00          LEE
SAM     Sam   Big St.    codeine       0.00          LEE
BOB     Bob              hypertension  0.00          LEE
Notes:
Let us now see how this affects Dr. Lee. This will be a more restrictive example since, as
you recall, Dr. Lee can only see data for his own patients.
Again, BOTH ROW and COLUMN access control rules are applied, and the result is that
Dr. Lee only sees his three patients and does not see either the BALANCE or the full SIN
value.
Notes:
One aspect of security for a database system is the need to protect user and password
information and possibly all application data being transferred between the application
system and a database server over a network.
The configuration setting for authentication in the DBM configuration file specifies and
determines how and where authentication of a user takes place.
If authentication is SERVER, the user ID and password are sent from the client to the
server so that authentication can take place on the server. The value SERVER_ENCRYPT
provides the same behavior as SERVER, except that any user IDs and passwords sent
over the network are encrypted.
A value of DATA_ENCRYPT means the server accepts encrypted SERVER authentication
schemes and the encryption of user data. The authentication works exactly the same way
as SERVER_ENCRYPT.
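For example, the instance authentication type could be changed and the instance restarted with commands like:

```
db2 update dbm cfg using AUTHENTICATION SERVER_ENCRYPT
db2stop
db2start
```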
The following user data are encrypted when using this authentication type:
SQL statements
[Figure: SSL setup between a client system and a DB2 database server, with encrypted
communication over TCP/IP. The client-side SSL setup is based on the client type and uses
a signer certificate database, into which a signer certificate is imported. The server-side
SSL setup uses GSKit with a digital certificates database; the iKeyman tool can add or
create certificates, and a certificate is extracted for distribution to clients.]
Notes:
The DB2 database system supports the use of Secure Sockets Layer (SSL) and its
successor, Transport Layer Security (TLS), to enable a client to authenticate a server, and
to provide private communication between the client and server by use of encryption.
Authentication is performed by the exchange of digital certificates.
Without encryption, packets of information travel through networks in full view of anyone
who has access. You can use SSL to protect data in transit on all networks that use TCP/IP
(you can think of an SSL connection as a secured TCP/IP connection).
The DB2 database system supports SSL, which means that a DB2 client application that
also supports SSL can connect to a DB2 database using an SSL socket. CLI, CLP,
and .Net Data Provider client applications and applications that use the IBM Data Server
Driver for JDBC and SQLJ (Type 4 connections) support SSL.
The IBM Global Security Kit (GSKit) libraries are installed with the DB2 server code to
provide SSL support. Some setup on the client and server systems is required to enable
the digital certificates to be stored and exchanged when the connections are established.
Set the ssl_svr_stash configuration parameter to the fully qualified path of the stash file.
db2 update dbm cfg using SSL_SVR_STASH /home/db2/sqllib/security/ssl/mydb.sth
Set the ssl_svr_label configuration parameter to the label of the digital certificate of the server.
Set the ssl_svcename configuration parameter to the port that the DB2 database system should
listen on for SSL connections. Must not be equal to SVCENAME port.
Optionally, set options to select a ciphers suite:
Notes:
To configure SSL support, first, you create a key database to manage your digital
certificates. These certificates and encryption keys are used for establishing the SSL
connections. Second, the DB2 instance owner must configure the DB2 instance for SSL
support.
1. Create a key database and set up your digital certificates.
- Use the GSKCapiCmd tool to create your key database. It must be a Certificate
Management System (CMS) type key database.
The GSKCapiCmd is a non-Java-based command-line tool, and Java does not
need to be installed on your system to use this tool.
You invoke GSKCapiCmd using the gskcapicmd command, as described in the
GSKCapiCmd User's Guide.
For example, the following command creates a key database called
mydbserver.kdb and a stash file called mydbserver.sth:
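The command itself is not reproduced in this extract; based on the GSKCapiCmd documentation, it could look like the following (the password value is a placeholder):

```
gskcapicmd -keydb -create -db "mydbserver.kdb" -pw "myServerPassw0rd" -stash
```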
The -stash option creates a stash file at the same path as the key database, with
a file extension of .sth.
At instance start-up, GSKit uses the stash file to obtain the password to the key
database.
When you create a key database, it is automatically populated with signer
certificates from a few certificate authorities (CAs), such as Verisign.
2. Add a certificate for your server to your key database. The server sends this certificate
to clients during the SSL handshake to provide authentication for the server.
3. Extract the certificate you just created to a file, so that you can distribute it to computers
running clients that will be establishing SSL connections to your DB2 server.
4. To set up your DB2 server for SSL support, log in as the DB2 instance owner and set
the following configuration parameters and the DB2COMM registry variable.
- Set the ssl_svr_keydb configuration parameter to the fully qualified path of the key
database file.
For example:
db2 update dbm cfg using SSL_SVR_KEYDB
/home/test/sqllib/security/keystore/key.kdb
If ssl_svr_keydb is null (unset), SSL support is not enabled.
- Set the ssl_svr_stash configuration parameter to the fully qualified path of the stash
file.
For example:
db2 update dbm cfg using SSL_SVR_STASH
/home/test/sqllib/security/keystore/mydbserver.sth
If ssl_svr_stash is null (unset), SSL support is not enabled.
- Set the ssl_svr_label configuration parameter to the label of the digital certificate of
the server, which you added. If ssl_svr_label is not set, the default certificate in the
key database is used. If there is no default certificate in the key database, SSL is not
enabled.
For example:
db2 update dbm cfg using SSL_SVR_LABEL myselfsigned
where myselfsigned is a sample label.
- Set the ssl_svcename configuration parameter to the port that the DB2 database
system should listen on for SSL connections. If TCP/IP and SSL are both enabled
(the DB2COMM registry variable is set to 'TCPIP, SSL'), you must set ssl_svcename
to a different port than the port to which svcename is set. The svcename
configuration parameter sets the port that the DB2 database system listens on for
TCP/IP connections. If you set ssl_svcename to the same port as svcename, neither
TCP/IP nor SSL will be enabled. If ssl_svcename is null (unset), SSL support is not
enabled.
Note
When the DB2COMM registry variable is set to 'TCPIP,SSL', if TCP/IP support is not
properly enabled, for example due to the svcename configuration parameter being set to
null, the error SQL5043N is returned and SSL support is not enabled.
- (Optional) If you want to specify which cipher suites the server can use, set the
ssl_cipherspecs configuration parameter. If you leave ssl_cipherspecs as null
(unset), this allows GSKit to pick the strongest available cipher suite that is
supported by both the client and the server. See Supported cipher suites for
information about which cipher suites are available.
- Add the value SSL to the DB2COMM registry variable.
For example:
db2set -i db2inst1 DB2COMM=SSL
where db2inst1 is the DB2 instance name. The database manager can support
multiple protocols at the same time.
For example, to enable both TCP/IP and SSL communication protocols:
db2set -i db2inst1 DB2COMM=SSL,TCPIP
- Restart the DB2 instance.
Database security for three-tier application systems
For many three-tiered application systems:
Individual Users are authenticated by the application server
A common user name and password, unknown to the end user, is used to connect to the
DB2 server
All database/SQL processing is performed using a single user name
A set of database access privileges are granted to the common application logon
name to allow all aspects of application processing to be performed
These characteristics can lead to over-granting privileges to a single user name that
could be misused to bypass security policies
Notes:
The three-tiered application model extends the standard two-tiered client and server model
by placing a middle tier between the client application and the database server. It has
gained great popularity in recent years particularly with the emergence of Web-based
technologies and the Java 2 Enterprise Edition (J2EE) platform. An example of a software
product that supports the three-tier application model is IBM WebSphere Application
Server (WAS).
In a three-tiered application model, the middle tier is responsible for authenticating the
users running the client applications and for managing the interactions with the database
server. Traditionally, all the interactions with the database server occur through a database
connection established by the middle tier using a combination of a user ID and a credential
that identify that middle tier to the database server. In other words, the database server
uses the database privileges associated with the middle tier's user ID for all authorization
checking and auditing that must occur for any database access, including access
performed by the middle tier on behalf of a user.
Notes:
A trusted context is a database object that defines a trust relationship for a connection
between the database and an external entity such as an application server.
The trust relationship is based upon the following set of attributes:
System authorization ID: Represents the user that establishes a database connection
IP address (or domain name): Represents the host from which a database connection is established
Data stream encryption: Represents the encryption setting (if any) for the data communication between the database server and the database client
When a user establishes a database connection, the DB2 database system checks
whether the connection matches the definition of a trusted context object in the database.
When a match occurs, the database connection is said to be trusted.
A trusted connection allows the initiator of this trusted connection to acquire additional capabilities that might not be available outside the scope of the trusted connection.
Notes:
The example defines a trusted context named APPSERVER.
DB2 considers a connection to match this trusted context only if:
The user name on the connection is BERNARD
The connection is made from the system with the TCP/IP address 9.26.113.204
Any type of encryption can be used; the default encryption type is NONE.
The role APPSERV_ROLE is assigned by default to connections matching this trusted context.
When the current user of the connection is switched to user ID JOE, authentication is not required. However, authentication is required when the current user of the connection is switched to user ID BOB.
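The trusted context described above could be defined with a statement like the following. This is a sketch reconstructed from the attributes listed in the notes; the exact statement on the original slide is not reproduced here.

```sql
CREATE TRUSTED CONTEXT APPSERVER
  BASED UPON CONNECTION USING SYSTEM AUTHID BERNARD
  ATTRIBUTES (ADDRESS '9.26.113.204')       -- host the connection must come from
  DEFAULT ROLE APPSERV_ROLE                 -- role assigned to matching connections
  ENABLE
  WITH USE FOR JOE WITHOUT AUTHENTICATION,  -- switch to JOE needs no credentials
               BOB WITH AUTHENTICATION;     -- switch to BOB requires credentials
```

Because no ENCRYPTION attribute is specified, the default of NONE applies, so any encryption setting on the connection is accepted.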
Unit summary
Having completed this unit, you should be able to:
Use DB2 access control mechanisms to implement security within the database
Explain the tasks performed by the SYSADM user, the SECADM user, and a DBADM user
Compare the use of database roles to user groups for security
Describe the privileges required for binding and executing an application package
Describe the difference between explicit privileges and implicit privileges
Use CREATE PERMISSION and CREATE MASK statements to define row and column access controls
List the methods for implementing encryption for database connections
List the advantages of creating a trusted context for a three-tier application system
Index
A
archival logging 7-9
C
candidates for table space backup strategy 7-27
check constraints - definition 5-31
create view 5-20
D
database directories 4-14
db2ubind.lst 4-15
deadlock causes 9-28
deadlock definition 9-28
deadlock detection interval 9-28
deadlock detector 9-28
E
escalation of locks 9-24
EXPLAIN 8-35
L
lock escalation 9-24
lock mode compatibility 9-12
LOCK TABLE statement 9-22
locking at table level only 9-22
locklist configuration parameter 9-24
log retention logging 7-9
logging and ACTIVATE DATABASE 7-9
M
maxlocks configuration parameter 9-24
R
rollforward 7-30
row lock compatibility 9-12
S
strict table locking 9-22
T
table lock compatibility 9-12
table space backup/restore considerations 7-27
As a global IT solutions distributor, Avnet Technology Solutions transforms technology into business solutions for customers around the
world. It collaborates with customers and suppliers to create and deliver services, software and hardware solutions that address the
changing needs of end-user customers. The group serves customers and suppliers in North America, Latin America and Caribbean, Asia
Pacific, and Europe, Middle East and Africa. It generated US $11.0 billion in annual revenue for fiscal year 2014. Avnet Technology
Solutions is an operating group of Avnet, Inc. For more information, visit http://www.ats.avnet.com.
© 2015 Avnet, Inc. All rights reserved. The Avnet Technology Solutions logo and SolutionsPath are registered trademarks and PayNow,
CloudReady and Accelerating Your Success are trademarks of Avnet, Inc. All other products, brands and names are trademarks or
registered trademarks of their respective owners.