DB2 Application Development

SC09-2949-01
Before using this information and the product it supports, be sure to read the general information under
Appendix G. Notices on page 827.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by
copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
Order publications through your IBM representative or the IBM branch office serving your locality or by calling
1-800-879-2755 in the United States or 1-800-IBM-4YOU in Canada.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1993, 2001. All rights reserved.
US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Part 1. DB2 Application Development Concepts . . . . . . 1
Part 4. Object-Relational Programming . . . . . . . . . 273
support for stored procedures with enhanced portability and scalability across
platforms. Stored procedures are discussed in Chapter 7. Stored Procedures
on page 193.
You can use object-based extensions to DB2 to make your DB2 application
programs more powerful, flexible, and active than traditional DB2
applications. The extensions include large objects (LOBs), distinct types,
structured types, user-defined functions (UDFs), and triggers. These features
of DB2 are described in:
v Chapter 10. Using the Object-Relational Capabilities on page 275
v Chapter 11. User-defined Distinct Types on page 281
v Chapter 12. Working with Complex Objects: User-Defined Structured
Types on page 291
v Chapter 13. Using Large Objects (LOBs) on page 349
v Chapter 14. User-Defined Functions (UDFs) and Methods on page 373
v Chapter 15. Writing User-Defined Functions (UDFs) and Methods on
page 393
v Chapter 16. Using Triggers in an Active DBMS on page 483
References to DB2 in this book should be understood to mean the DB2
Universal Database product on UNIX, Linux, OS/2, and Windows 32-bit
operating systems. References to DB2 on other platforms use a specific
product name and platform, such as DB2 Universal Database for AS/400.
Conventions
This book uses the following conventions:
Directories and Paths
This book uses the UNIX convention for delimiting directories, for
example: sqllib/samples/java. You can convert these paths to
Windows 32-bit operating system and OS/2 paths by changing the /
to a \ and prepending the appropriate installation drive and directory.
Italics
UPPERCASE
Indicates one of the following:
v Abbreviations
v Database manager data types
v SQL statements
Example
Indicates one of the following:
v Coding examples and code fragments
v Examples of output, similar to what is displayed by the system
v Examples of specific data values
v Examples of system messages
v File and directory names
v Information that you are instructed to type
v Java method names
v Function names
v API names
Bold
Related Publications
The following manuals describe how to develop applications for international
use and for specific countries:
Form Number
Book Title
SE09-8001-03
SE09-8002-03
Certain SQL statements must appear at the beginning and end of the program
to handle the transition from the host language to the embedded SQL
statements.
The beginning of every program must contain:
v Declarations of all variables and data structures that the database manager
uses to interact with the host program
v SQL statements that provide for error handling by setting up the SQL
Communications Area (SQLCA)
Note that DB2 applications written in Java throw an SQLException, which
you handle in a catch block, rather than using the SQLCA.
The body of every program contains the SQL statements that access and
manage data. These statements constitute transactions. Transactions must
include the following statements:
v The CONNECT statement, which establishes a connection to a database
server
v One or more:
Data manipulation statements (for example, the SELECT statement)
Data definition statements (for example, the CREATE statement)
Data control statements (for example, the GRANT statement)
v Either the COMMIT or ROLLBACK statement to end the transaction
The end of the application program typically contains SQL statements that:
v Release the program's connection to the database server
v Clean up any resources
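Taken together, the beginning, body, and end described above can be sketched in embedded SQL for C. This is an illustrative outline only: the database name, host variables, and statements are assumptions, and the source file must be processed by the DB2 precompiler before it can be compiled.

```c
#include <stdio.h>

EXEC SQL INCLUDE SQLCA;              /* beginning: set up the SQLCA       */

EXEC SQL BEGIN DECLARE SECTION;      /* beginning: host variable declarations */
    char  hdate[11];
    short lvl;
EXEC SQL END DECLARE SECTION;

int main(void)
{
    EXEC SQL CONNECT TO sample;      /* body: connect to a database server */

    /* body: a transaction of data manipulation statements */
    EXEC SQL SELECT HIREDATE, EDLEVEL
             INTO :hdate, :lvl
             FROM EMPLOYEE
             WHERE EMPNO = '000010';
    EXEC SQL COMMIT;                 /* body: end the transaction          */

    EXEC SQL CONNECT RESET;          /* end: release the connection        */
    return 0;
}
```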
The attributes of each host variable depend on how the variable is used in the
SQL statement. For example, variables that receive data from or store data in
DB2 tables must have data type and length attributes compatible with the
column being accessed. To determine the data type for each variable, you
must be familiar with DB2 data types, which are explained in Data Types
on page 77.
Declaring Variables that Represent SQL Objects: For DB2 Version 7, the
names of tables, aliases, views, and correlations have a maximum length of
128 bytes. Column names have a maximum length of 30 bytes. In DB2 Version
7, schema names have a maximum length of 30 bytes. Future releases of DB2
may increase the lengths of column names and other identifiers of SQL objects
up to 128 bytes. If you declare variables that represent SQL objects with
lengths of less than 128 bytes, future increases in SQL object identifier
lengths may affect the stability of your applications. For example, if you
declare the variable char schema_name[9] in a C++ application to hold a
schema name,
your application functions properly for the allowed schema names in DB2
Version 6, which have a maximum length of 8 bytes.
char schema_name[9];   /* holds a null-terminated schema name of up to 8 bytes;
                          works for DB2 Version 6, but may truncate schema names
                          in future releases */
However, if you migrate the database to DB2 Version 7, which accepts schema
names with a maximum length of 30 bytes, your application cannot
differentiate between the schema names LONGSCHEMA1 and LONGSCHEMA2. The
database manager truncates both schema names to the 8-byte limit, LONGSCHE,
and any statement in your application that depends on differentiating the
schema names fails. To increase the longevity of your
application, declare the schema name variable with a 128-byte length as
follows:
char schema_name[129]; /* holds a null-terminated schema name of up to 128
                          bytes; good for DB2 Version 7 and beyond */
#include <sql.h>

char schema_name[SQL_MAX_IDENT+1];
char table_name[SQL_MAX_IDENT+1];
char employee_column[SQL_MAX_IDENT+1];
char manager_column[SQL_MAX_IDENT+1];
It contains two output host variables, hdate and lvl, and one input host
variable, idno. The database manager uses the data stored in the host variable
idno to determine the EMPNO of the row that is retrieved from the
EMPLOYEE table. If the database manager finds a row that meets the search
criteria, hdate and lvl receive the data stored in the columns HIREDATE and
EDLEVEL, respectively. This statement illustrates an interaction between the
host program and the database manager using columns of the EMPLOYEE
table.
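A statement matching this description might look like the following embedded SQL for C. This is a sketch: the column names and host variable names are taken from the surrounding text, not from a verified listing.

```c
EXEC SQL SELECT HIREDATE, EDLEVEL
         INTO :hdate, :lvl
         FROM EMPLOYEE
         WHERE EMPNO = :idno;
```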
Each column of a table is assigned a data type in the CREATE TABLE
definition. You must relate this data type to the host language data type
defined in the Supported SQL Data Types section of each language-specific
chapter in this document. For example, the INTEGER data type is a 32-bit
signed integer. This is equivalent to the following data description entries in
each of the host languages, respectively:
C/C++:   sqlint32 variable_name;
Java:    int variable_name;
COBOL:   01 variable-name PIC S9(9) COMP-5.
FORTRAN: INTEGER*4 variable_name
For the list of supported SQL data types and the corresponding host language
data types, see the following:
v for C/C++, Supported SQL Data Types in C and C++ on page 627
v for Java, Supported SQL Data Types in Java on page 639
v for COBOL, Supported SQL Data Types in COBOL on page 695
v for FORTRAN, Supported SQL Data Types in FORTRAN on page 712
v for REXX, Supported SQL Data Types in REXX on page 726
In order to determine exactly how to define the host variable for use with a
column, you need to find out the SQL data type for that column. Do this by
querying the system catalog, which is a set of views containing information
about all tables created in the database. The SQL Reference describes this
catalog.
After you have determined the data types, you can refer to the conversion
charts in the host language chapters and code the appropriate declarations.
The Declaration Generator utility (db2dclgn) is also available for generating
the appropriate declarations for a given table in a database. For more
information on db2dclgn, see Declaration Generator - db2dclgn on page 73
and refer to the Command Reference.
Table 4 on page 74 shows examples of declarations in the supported host
languages. Note that REXX applications do not need to declare host variables
except for LOB locators and file reference variables. The contents of the
variable determine other host variable data types and sizes at run time.
Table 4 also shows the BEGIN and END DECLARE SECTION statements.
Observe how the delimiters for SQL statements differ for each language. For
the exact rules of placement, continuation, and delimiting of these statements,
see the language-specific chapters of this book.
Handling Errors and Warnings
The SQL Communications Area (SQLCA) is discussed in detail later in this
chapter. This section presents an overview. To declare the SQLCA, code the
INCLUDE SQLCA statement in your program.
For C or C++ applications use:
EXEC SQL INCLUDE SQLCA;
For Java applications: You do not explicitly use the SQLCA in Java. Instead,
use the SQLException instance methods to get the SQLSTATE and SQLCODE
values. See SQLSTATE and SQLCODE Values in Java on page 641 for more
details.
For COBOL applications use:
EXEC SQL INCLUDE SQLCA END-EXEC.
When you preprocess your program, the database manager inserts host
language variable declarations in place of the INCLUDE SQLCA statement.
The system communicates with your program using the variables for warning
flags, error codes, and diagnostic information.
After executing each SQL statement, the system returns a return code in both
SQLCODE and SQLSTATE. SQLCODE is an integer value that summarizes
the execution of the statement, and SQLSTATE is a character field that
provides common error codes across IBM's relational database products.
SQLSTATE also conforms to the ISO/ANS SQL92 and FIPS 127-2 standard.
Note: FIPS 127-2 refers to Federal Information Processing Standards Publication
127-2 for Database Language SQL. ISO/ANS SQL92 refers to American
National Standard Database Language SQL X3.135-1992 and International
Standard ISO/IEC 9075:1992, Database Language SQL.
Note that if SQLCODE is less than 0, it means an error has occurred and the
statement has not been processed. If the SQLCODE is greater than 0, it means
a warning has been issued, but the statement is still processed. See the
Message Reference for a listing of SQLCODE and SQLSTATE error conditions.
If you want the system to control error checking after each SQL statement, use
the WHENEVER statement.
Note: Embedded SQL for Java (SQLJ) applications cannot use the
WHENEVER statement. Use the SQLException methods described in
SQLSTATE and SQLCODE Values in Java on page 641 to handle
errors returned by SQL statements.
The following WHENEVER statement indicates to the system what to do
when it encounters a negative SQLCODE:
WHENEVER SQLERROR GO TO errchk
That is, whenever an SQL error occurs, program control is transferred to code
that follows the label, such as errchk. This code should include logic to
analyze the error indicators in the SQLCA. Depending upon the errchk
routine's logic, action may be taken to execute the next sequential program
instruction, to perform some special function, or, as in most situations, to roll
back the current transaction and terminate the program. See Coding
Transactions on page 17 for more information on a transaction and
Diagnostic Handling and the SQLCA Structure on page 116 for more
information about how to control error checking in your application program.
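Put together, the pattern might look like the following sketch in embedded SQL for C. The label name errchk and the handling logic are illustrative, not prescribed:

```c
EXEC SQL INCLUDE SQLCA;

EXEC SQL WHENEVER SQLERROR GO TO errchk;

/* ... SQL statements that access and manage data ... */

errchk:
    /* Analyze the error indicators in the SQLCA. */
    printf("Error: SQLCODE = %d\n", (int) sqlca.sqlcode);
    /* Prevent a loop if the ROLLBACK itself fails with an error. */
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK;
    exit(1);
```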
If your application must be compliant with the ISO/ANS SQL92 or FIPS 127-2
standard, do not use the above statements or the INCLUDE SQLCA statement.
For more information on the ISO/ANS SQL92 and FIPS 127-2 standards, see
Definition of FIPS 127-2 and ISO/ANS SQL92 on page 15. For the
alternative to coding the above statements, see the following:
v For C or C++ applications, see SQLSTATE and SQLCODE Variables in C
and C++ on page 635
v For COBOL applications, SQLSTATE and SQLCODE Variables in COBOL
on page 699
v For FORTRAN applications, SQLSTATE and SQLCODE Variables in
FORTRAN on page 714
Using Additional Nonexecutable Statements
Generally, other nonexecutable SQL statements are also part of this section of
the program. Both the SQL Reference and subsequent chapters of this manual
discuss nonexecutable statements. Examples of nonexecutable statements are:
v INCLUDE text-file-name
v INCLUDE SQLDA
v DECLARE CURSOR
Coding Transactions
A transaction is a sequence of SQL statements (possibly with intervening host
language code) that the database manager treats as a whole. An alternative
term that is often used for transaction is unit of work.
To ensure the consistency of data at the transaction level, the system makes
sure that either all operations within a transaction are completed, or none are
completed. Suppose, for example, that the program is supposed to deduct
money from one account and add it to another. If you place both of these
updates in a single transaction, and a system failure occurs while they are in
progress, then when you restart the system, the database manager
automatically restores the data to the state it was in before the transaction
began. If a program error occurs, the database manager restores all changes
made by the statement in error. The database manager will not undo work
performed in the transaction prior to execution of the statement in error,
unless you specifically roll it back.
You can code one or more transactions within a single application program,
and it is possible to access more than one database from within a single
transaction. A transaction that accesses more than one database is called a
multisite update. For information on these topics, see Remote Unit of Work
on page 535 and Multisite Update on page 535.
Beginning a Transaction
A transaction begins implicitly with the first executable SQL statement and
ends with either a COMMIT or a ROLLBACK statement, or when the
program ends.
In contrast, the following six statements do not start a transaction because
they are not executable statements:
BEGIN DECLARE SECTION
END DECLARE SECTION
DECLARE CURSOR
INCLUDE SQLCA
INCLUDE SQLDA
WHENEVER
For more information about program termination, see Ending the Program
and Diagnostic Handling and the SQLCA Structure on page 116.
Using the ROLLBACK Statement: This statement ends the current
transaction, and restores the data to the state it was in prior to beginning the
transaction.
The ROLLBACK statement has no effect on the contents of host variables.
If you use a ROLLBACK statement in a routine that was entered because of
an error or warning and you use the SQL WHENEVER statement, then you
should specify WHENEVER SQLERROR CONTINUE and WHENEVER
SQLWARNING CONTINUE before the ROLLBACK. This avoids a program
loop if the ROLLBACK fails with an error or warning.
In the event of a severe error, you will receive a message indicating that you
cannot issue a ROLLBACK statement. Do not issue a ROLLBACK statement if
a severe error occurs such as the loss of communications between the client
and server applications, or if the database gets corrupted. After a severe error,
the only statement you can issue is a CONNECT statement.
v Whether the application uses the context APIs (see Multiple Thread
Database Access on page 543)
On Most Supported Operating Systems
DB2 implicitly commits a transaction if the termination is normal, or implicitly
rolls back the transaction if it is abnormal. Note that what your program
considers to be an abnormal termination may not be considered abnormal by
the database manager. For example, you might code exit(-16) to terminate your
application abruptly when it encounters an unexpected error. The database
manager considers this to be a normal termination and commits the transaction.
By contrast, the database manager treats occurrences such as an exception or a
segmentation violation as abnormal terminations.
On Windows 32-bit Operating Systems
DB2 always rolls back the transaction regardless of whether your application
terminates normally or abnormally, unless you explicitly commit the
transaction using the COMMIT statement.
When Using the DB2 Context APIs
Your application can use any of the DB2 APIs to set up and pass application
contexts between threads as described in Multiple Thread Database Access
on page 543. If your application uses these DB2 APIs, DB2 implicitly rolls
back the transaction when your application terminates, whether normally or
abnormally, unless you explicitly commit the transaction using the COMMIT
statement.
Application Setup:
    (program logic)
    EXEC SQL CONNECT TO database A USER :userid USING :pw
First Unit of Work:
    EXEC SQL SELECT ...
    EXEC SQL INSERT ...
    (more SQL statements)
    EXEC SQL COMMIT
    (more program logic)
Second Unit of Work:
    ...
Third Unit of Work:
    ...
Application Cleanup:
    ...
In some cases, the correct answer is to enforce the rules in both the
application (perhaps because of application-specific requirements) and the
database (perhaps because of other interactive uses outside the application).
Access to Data
In a relational database, you must use SQL to access the desired data, but you
may choose how to integrate the SQL into your application. You can choose
from the following interfaces and their supported languages:
Embedded SQL
C/C++, COBOL, FORTRAN, Java (SQLJ), REXX
DB2 CLI and ODBC
C/C++, Java (JDBC)
Microsoft Specifications, including ADO, RDO, and OLE DB
Visual Basic, Visual C++
Perl DBI
Perl
Query Products
Lotus Approach, IBM Query Management Facility
Embedded SQL
Embedded SQL has the advantage that it can consist of either static or
dynamic SQL or a mixture of both types. If the content and format of your
SQL statements will be frozen when your application is in use, you should
consider using embedded static SQL in your application. With static SQL, the
person who executes the application temporarily inherits the privileges of the
user who bound the application to the database. Unless you bind the
application with the DYNAMICRULES BIND option, dynamic SQL uses the
privileges of the person who executes the application. In general, you should
use embedded dynamic SQL where the executable statements are determined
at run time. This creates a more secure application program that can handle a
greater variety of input.
Note: Embedded SQL for Java (SQLJ) applications can only embed static SQL
statements. However, you can use JDBC to make dynamic SQL calls in
SQLJ applications.
You must precompile embedded SQL applications to convert the SQL
statements into host language commands before using your programming
language compiler. In addition, you must bind the SQL in the application to
the database for the application to run.
For additional information on using embedded SQL, refer to Chapter 4.
Writing Static SQL Programs on page 61.
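As a sketch, precompiling and binding an embedded SQL C application from the DB2 command line processor might look like the following. The file names are hypothetical, $DB2PATH is assumed to point to the sqllib directory, and the exact compile and link options depend on your platform and compiler:

```shell
db2 connect to sample               # the precompiler needs a database connection
db2 prep myapp.sqc bindfile         # precompile: produces myapp.c and myapp.bnd
db2 bind myapp.bnd                  # bind the package to the database
db2 connect reset
cc -I$DB2PATH/include -c myapp.c    # compile the generated C source
cc -o myapp myapp.o -L$DB2PATH/lib -ldb2
```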
REXX Considerations: REXX applications use APIs which enable them to use
most of the features provided by database manager APIs and SQL. Unlike
applications written in a compiled language, REXX applications are not
precompiled. Instead, a dynamic SQL handler processes all SQL statements.
By combining REXX with these callable APIs, you have access to most of the
database manager capabilities. Although REXX does not directly support some
APIs that use embedded SQL, you can access them through the DB2 Command
Line Processor from within the REXX application.
Because REXX is an interpreted language, you may find it easier to develop
and debug your application prototypes in REXX than in compiled host
languages. Note that while DB2 applications coded in REXX do not provide
the performance of DB2 applications that use compiled languages, they do
provide the ability to create DB2 applications without precompiling,
compiling, linking, or using additional software.
For details of coding and building DB2 applications using REXX, see
Chapter 25. Programming in REXX on page 717.
DB2 Call Level Interface (DB2 CLI) and Open Database Connectivity
(ODBC)
The DB2 Call Level Interface (DB2 CLI) is IBM's callable SQL interface to the
DB2 family of database servers. It is a C and C++ application programming
interface for relational database access, and it uses function calls to pass
dynamic SQL statements as function arguments. A callable SQL interface is an
application program interface (API) for database access, which uses function
calls to invoke dynamic SQL statements. It is an alternative to embedded
dynamic SQL, but unlike embedded SQL, it does not require precompiling or
binding.
DB2 CLI is based on the Microsoft Open Database Connectivity (ODBC)
specification, and the X/Open specifications. IBM chose these specifications
to follow industry standards, and to provide a shorter learning curve for DB2
application programmers who are familiar with either of these database
interfaces.
For more information on the ODBC support in DB2, see the CLI Guide and
Reference.
JDBC
DB2's Java support includes JDBC, a vendor-neutral dynamic SQL interface
that provides data access to your application through standardized Java
methods. JDBC is similar to DB2 CLI in that you do not have to precompile or
bind a JDBC program. As a vendor-neutral standard, JDBC applications offer
increased portability.
Because an application written using JDBC uses only dynamic SQL, the JDBC
interface imposes additional processing overhead.
For additional information on JDBC, refer to JDBC Programming on
page 644.
Microsoft Specifications
You can write database applications that conform to the ActiveX Data Object
(ADO) specification in Microsoft Visual Basic or Visual C++. ADO applications use the
OLE DB Bridge. You can write database applications that conform to the
Remote Data Object (RDO) specifications in Visual Basic. You can also define
OLE DB table functions that return data from OLE DB providers. For more
information on OLE DB table functions, see OLE DB Table Functions on
page 431.
This book does not attempt to provide a tutorial on writing applications that
conform to the ADO and RDO specifications. For full samples of DB2
applications that use the ADO and RDO specifications, refer to the following
directories:
v For samples written in Visual Basic, refer to sqllib\samples\VB
v For samples written in Visual C++, refer to sqllib\samples\VC
v For samples that use the RDO specification, refer to sqllib\samples\RDO
v For samples that use the Microsoft Transaction Server, refer to
sqllib\samples\MTS
Perl DBI
DB2 supports the Perl Database Interface (DBI) specification for data access
through the DBD::DB2 driver. For more information on creating applications
with the Perl DBI that access DB2 databases, see Chapter 22. Programming in
Perl on page 675. The DB2 Universal Database Perl DBI Web site at
http://www.ibm.com/software/data/db2/perl/ contains the latest DBD::DB2
driver and information on the support available for your platform.
Query Products
Query products, including IBM Query Management Facility (QMF) and Lotus
Approach, support query development and reporting. The products vary in how
SQL statements are developed and the degree of logic that can be introduced.
Depending on your needs, this approach may meet your requirements to
access data. This book does not provide further information on query
products.
Application Logic
You may decide to write code to enforce rules or perform related operations
in the application instead of the database. You must do this for cases where
you cannot generally apply the rules to the database. You may also choose to
place the logic in the application when you do not have control over the
definitions of the data in the database or you believe the application logic can
handle the rules or operations more efficiently.
Triggers
In Triggers on page 28, it is noted that triggers can be used to invoke
user-defined functions. This is useful when you want a certain non-SQL
operation performed whenever specific statements execute or data values
change. Examples include operations such as issuing an electronic mail
message under specific circumstances or writing alert-type information to a
file.
For additional information on triggers, refer to Chapter 16. Using Triggers in
an Active DBMS on page 483.
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++ is a
collection of management tools and wizards that plug into the Visual C++
component of the Visual Studio IDE. The tools and wizards automate and
simplify the various tasks involved in developing applications for DB2 using
embedded SQL.
You can use the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++ to develop, package, and deploy:
v Stored procedures written in C/C++ for DB2 Universal Database on
Windows 32-bit operating systems
v Windows 32-bit C/C++ embedded SQL client applications that access DB2
Universal Database servers
v Windows 32-bit C/C++ client applications that invoke stored procedures
using C/C++ function call wrappers
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++
allows you to focus on the design and logic of your DB2 applications rather
than on building and deploying them.
Some of the tasks performed by the IBM DB2 Universal Database Project
Add-In for Microsoft Visual C++ include:
v Creating a new embedded SQL module
v Inserting SQL statements into an embedded SQL module using SQL Assist
v Adding imported stored procedures
v Creating an exported stored procedure
v Packaging the DB2 Project
v Deploying the DB2 project from within Visual C++
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++ is
presented in the form of a toolbar.
DB2 project
The collection of DB2 project objects that are inserted into the IDE
project. DB2 project objects can be inserted into any Visual C++
project. The DB2 project allows you to manage the various DB2
objects such as embedded SQL modules, imported stored procedures,
and exported stored procedures. You can add, delete, and modify
these objects and their properties.
module
A C/C++ source code file that might contain SQL statements.
development database
The database that is used to compile embedded SQL modules. The
development database is also used to look up the list of importable
database stored procedure definitions.
embedded SQL module
A C/C++ source code file that contains embedded static or dynamic
SQL.
imported stored procedure
A stored procedure, already defined in the database, that the project
invokes.
exported stored procedure
A database stored procedure that is built and defined by the project.
Activating the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++
To activate the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++, perform the following steps:
Step 1. Start and stop Visual C++ at least once with your current login ID.
The first time you run Visual C++, a profile is created for your user
ID, and that is what gets updated by the db2vccmd command. If you
have not started it once, and you try to run db2vccmd, you may see
errors like the following:
"Registering DB2 Project add-in ...Failed! (rc = 2)"
Step 2. Register the add-in, if you have not already done so, by entering:
db2vccmd register
Step 3.
Step 4.
Step 5.
Step 6.
Note: If the toolbar is accidentally closed, you can either deactivate then
reactivate the add-in or use the Microsoft Visual C++ standard
customization options to redisplay the toolbar.
Activating the IBM DB2 Universal Database Tools Add-In for Microsoft
Visual C++
The DB2 Tools Add-In is a toolbar that enables the launch of some of the DB2
administration and development tools from within the Visual C++ integrated
development environment.
To activate the IBM DB2 Universal Database Tools Add-In for Microsoft Visual
C++, perform the following steps:
Step 1. Start and stop Visual C++ at least once with your current login ID.
The first time you run Visual C++, a profile is created for your user
ID, and that is what gets updated by the db2vccmd command. If you
have not started it once, and you try to run db2vccmd, you may see
errors like the following:
"Registering DB2 Project add-in ...Failed! (rc = 2)"
Step 2. Register the add-in, if you have not already done so, by entering:
db2vccmd register
use Table 38 on page 737 as a quick reference aid. For a complete discussion of
all the statements, including their syntax, refer to the SQL Reference.
Authorization Considerations
An authorization allows a user or group to perform a general task such as
connecting to a database, creating tables, or administering a system. A privilege
gives a user or group the right to access one specific database object in a
specified way. DB2 uses a set of privileges to provide protection for the
information that you store in it. For more information about the different
privileges, refer to the Administration Guide: Planning.
Most SQL statements require some type of privilege on the database objects
that the statement uses. Most API calls do not require any privilege on
database objects; however, many APIs require that you possess the necessary
authority in order to invoke them. The
DB2 APIs enable you to perform the DB2 administrative functions from
within your application program. For example, to recreate a package stored in
the database without the need for a bind file, you can use the sqlarbnd (or
REBIND) API. For details on each DB2 API, refer to the Administrative API
Reference.
For information on the required privilege to issue each SQL statement, refer to
the SQL Reference. For information on the required privilege and authority to
issue each API call, refer to the Administrative API Reference.
When you design your application, consider the privileges your users will
need to run the application. The privileges required by your users depend on:
v whether your application uses dynamic SQL, including JDBC and DB2 CLI,
or static SQL
v which APIs the application uses
Dynamic SQL
To use dynamic SQL in a package bound with DYNAMICRULES RUN
(default), the person that runs a dynamic SQL application must have the
privileges necessary to issue each SQL request performed, as well as the
EXECUTE privilege on the package. The privileges may be granted to the
user's authorization ID, to any group of which the user is a member, or to
PUBLIC.
If you bind the application with the DYNAMICRULES BIND option, DB2
associates your authorization ID with the application packages. This allows
any user that runs the application to inherit the privileges associated with your
authorization ID.
The person binding the application (for embedded dynamic SQL applications)
only needs the BINDADD authority on the database, if the program contains
no static SQL. Again, this privilege can be granted to the user's authorization
ID, to a group of which the user is a member, or to PUBLIC.
When you bind a dynamic SQL package with the DYNAMICRULES BIND
option, the user that runs the application only needs the EXECUTE privilege
on the package. To bind a dynamic SQL application with the
DYNAMICRULES BIND option, you must have the privileges necessary to
perform all the dynamic and static SQL statements in the application. If you
have SYSADM or DBADM authority and bind packages with
DYNAMICRULES BIND, consider using the OWNER BIND option to
designate a different authorization ID. OWNER BIND prevents the package
from automatically inheriting SYSADM or DBADM privileges on dynamic
SQL statements. For more information on DYNAMICRULES BIND and
OWNER BIND, refer to the BIND command in the Command Reference.
Static SQL
To use static SQL, the user running the application only needs the EXECUTE
privilege on the package. No privileges are required for each of the statements
that make up the package. The EXECUTE privilege may be granted to the
user's authorization ID, to any group of which the user is a member, or to
PUBLIC.
Unless you specify the VALIDATE RUN option when binding the application,
the authorization ID you use to bind the application must have the privileges
necessary to perform all the statements in the application. If VALIDATE RUN
was specified at BIND time, all authorization failures for any static SQL
within this package will not cause the BIND to fail and those statements will
be revalidated at run time. The person binding the application must always
have BINDADD authority. The privileges needed to execute the statements
must be granted to the user's authorization ID or to PUBLIC. Group
privileges are not used when binding static SQL statements. As with dynamic
SQL, the BINDADD privilege can be granted to the user's authorization ID, to a
group of which the user is a member, or to PUBLIC.
These properties of static SQL give you very precise control over access to
information in DB2. See the example at the end of this section for a possible
application of this.
Using APIs
Most of the APIs provided by DB2 do not require privileges; however, many
require some kind of authority to invoke. For the APIs that do require a
privilege, the privilege must be granted to the user running the application.
The privilege may be granted to the user's authorization ID, to a group of
which the user is a member, or to PUBLIC.
Example
Consider two users, PAYROLL and BUDGET, who need to perform queries
against the STAFF table. PAYROLL is responsible for paying the employees of
the company, so it needs to issue a variety of SELECT statements when
issuing paychecks. PAYROLL needs to be able to access each employee's
salary. BUDGET is responsible for determining how much money is needed to
pay the salaries. BUDGET should not, however, be able to see any particular
employee's salary.
Since PAYROLL issues many different SELECT statements, the application you
design for PAYROLL could probably make good use of dynamic SQL. This
would require that PAYROLL have SELECT privilege on the STAFF table. This
is not a problem, since PAYROLL needs full access to the table anyway.
BUDGET, on the other hand, should not have access to each employee's
salary. This means that you should not grant SELECT privilege on the STAFF
table to BUDGET. Since BUDGET does need access to the total of all the
salaries in the STAFF table, you could build a static SQL application to
execute a SELECT SUM(SALARY) FROM STAFF, bind the application and
grant the EXECUTE privilege on your applications package to BUDGET. This
lets BUDGET get the needed information without exposing the information
that BUDGET should not see.
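Concretely, the scenario above could be set up with GRANT statements like
the following. The package name SALSUM is illustrative (your static
application's package takes whatever name it was bound under), and the user
names assume PAYROLL and BUDGET are authorization IDs in the database:

```sql
-- PAYROLL runs dynamic SQL, so it needs the table privilege itself
GRANT SELECT ON STAFF TO USER PAYROLL;

-- BUDGET only runs the bound static package; it never touches the table directly
GRANT EXECUTE ON PACKAGE SALSUM TO USER BUDGET;
```

Because BUDGET holds only the EXECUTE privilege, it can run the
SELECT SUM(SALARY) query frozen in the package, but cannot issue its own
SELECT against STAFF.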
v Provide the run-time interface for precompiled SQL statements. These APIs
are not usually called directly by the programmer. Instead, they are inserted
into the modified source file by the precompiler after processing.
The database manager includes APIs for language vendors who want to write
their own precompiler, and other APIs useful for developing applications.
For complete details on the APIs available with the database manager and
how to call them, see the examples in the Administrative API Reference.
Table or View   Insert Rows   Delete Rows   Column Name   Data Type      Update Access
TEST.TEMPL      No            No            EMPNO         CHAR(6)
                                            LASTNAME      VARCHAR(15)
                                            WORKDEPT      CHAR(3)        Yes
                                            PHONENO       CHAR(4)        Yes
                                            JOBCODE       DECIMAL(3)     Yes
TEST.TDEPT      No            No            DEPTNO        CHAR(3)
                                            MGRNO         CHAR(6)
TEST.TPROJ      Yes           Yes           PROJNO        CHAR(6)
                                            DEPTNO        CHAR(3)        Yes
                                            RESPEMP       CHAR(6)        Yes
                                            PRSTAFF       DECIMAL(5,2)   Yes
                                            PRSTDATE      DECIMAL(6)     Yes
                                            PRENDATE      DECIMAL(6)     Yes
When the description of the application data access is complete, construct the
test tables and views that are needed to test the application:
v Create a test table when the application modifies data in a table or a view.
Create the following test tables using the CREATE TABLE SQL statement:
TEMPL
TPROJ
v Create a test view when the application does not modify data in the
production database.
In this example, create a test view of the TDEPT table using the CREATE
VIEW SQL statement.
If the database schema is being developed along with the application, the
definitions of the test tables might be refined repeatedly during the
development process. Usually, the primary application cannot both create the
tables and access them because the database manager cannot bind statements
that refer to tables and views that do not exist. To make the process of
creating and changing tables less time-consuming, consider developing a
separate application to create the tables. Of course you can always create test
tables interactively using the Command Line Processor (CLP).
v The IMPORT or LOAD utility inserts large amounts of new or existing data
from a defined source.
v The RESTORE utility can be used to duplicate the contents of an existing
database into an identical test database by using a BACKUP copy of the
original database.
For information about the INSERT statement, refer to the SQL Reference. For
information about the IMPORT, LOAD, and RESTORE utilities, refer to the
Administration Guide.
The following SQL statements demonstrate a technique you can use to
populate your tables with randomly generated test data. Suppose the table
EMP contains four columns, ENO (employee number), LASTNAME (last
name), HIREDATE (date of hire), and SALARY (employee's salary) as in the
following CREATE TABLE statement:
CREATE TABLE EMP (ENO INTEGER, LASTNAME VARCHAR(30),
HIREDATE DATE, SALARY INTEGER);
Suppose you want to populate this table with employee numbers from 1 to a
number, say 100, with random data for the rest of the columns. You can do
this using the following SQL statement:
INSERT INTO EMP
-- generate 100 records
WITH DT(ENO) AS (VALUES(1) UNION ALL
     SELECT ENO+1 FROM DT WHERE ENO < 100 )
-- Now, use the generated records in DT to create other columns
-- of the employee record.
SELECT ENO,
       TRANSLATE(CHAR(INTEGER(RAND()*1000000)),
                 CASE MOD(ENO,4) WHEN 0 THEN 'aeiou' || 'bcdfg'
                                 WHEN 1 THEN 'aeiou' || 'hjklm'
                                 WHEN 2 THEN 'aeiou' || 'npqrs'
                                 ELSE 'aeiou' || 'twxyz' END,
                 '1234567890') AS LASTNAME,
       CURRENT DATE - (RAND()*10957) DAYS AS HIREDATE,
       INTEGER(10000+RAND()*200000) AS SALARY
FROM DT;
SELECT * FROM EMP;
v Make full use of the error-handling APIs. For example, you can use
error-handling APIs to print all messages during the testing phase. For
more information about error-handling APIs, see the Administrative API
Reference.
C/C++
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr';
if ( SQLCODE < 0 )
printf( "Update Error: SQLCODE = %ld \n", SQLCODE );
Java (SQLJ)
try {
#sql { UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr' };
}
catch (SQLException e) {
System.out.println( "Update Error: SQLCODE = " + e.getErrorCode() );
}
COBOL
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr' END-EXEC.
IF SQLCODE LESS THAN 0
DISPLAY 'UPDATE ERROR: SQLCODE = ', SQLCODE.
FORTRAN
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr'
if ( sqlcode .lt. 0 ) THEN
write(*,*) 'Update error: sqlcode = ', sqlcode
SQL statements placed in an application are not specific to the host language.
The database manager provides a way to convert the SQL syntax for
processing by the host language.
For the C, C++, COBOL or FORTRAN languages, this conversion is handled
by the DB2 precompiler. The DB2 precompiler is invoked using the PREP
command. The precompiler converts embedded SQL statements directly into
DB2 run-time services API calls.
For the Java language, the SQLJ translator converts SQLJ clauses into JDBC
statements. The SQLJ translator is invoked with the SQLJ command.
When the precompiler processes a source file, it specifically looks for SQL
statements and ignores the non-SQL host language statements. It can find SQL statements
because they are surrounded by special delimiters. For the syntax information
necessary to embed SQL statements in the language you are using, see the
following:
v for C/C++, Embedding SQL Statements in C and C++ on page 599
v for Java (SQLJ), Embedding SQL Statements in Java on page 654
v for COBOL, Embedding SQL Statements in COBOL on page 683
v for FORTRAN, Embedding SQL Statements in FORTRAN on page 705
v for REXX, Embedding SQL Statements in REXX on page 719
Table 3 shows how to use delimiters and comments to create valid embedded
SQL statements in the supported compiled host languages.
Table 3. Embedding SQL Statements in a Host Language

Language      Example SQL Statement
C/C++         EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr';
Java (SQLJ)   #sql { UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr' };
COBOL         EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr' END-EXEC
FORTRAN       EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr'
you read through the following sections about what happens at each stage of
program preparation.
[Figure: program preparation flow. Source files containing SQL statements go
through the precompiler (db2 PREP), which produces modified source files
and, depending on the options chosen, a bind file (BINDFILE) or a package
(PACKAGE). The modified source files, together with the source files without
SQL statements, are compiled into object files and linked with libraries into
an executable program. A bind file is turned into a package by the binder
(db2 BIND).]
Precompiling
After you create the source files, you must precompile each host language file
containing SQL statements with the PREP command for host language source
files. The precompiler converts SQL statements contained in the source file to
comments, and generates the DB2 run-time API calls for those statements.
Before precompiling an application you must connect to a server, either
implicitly or explicitly. Although you precompile application programs at the
client workstation and the precompiler generates modified source and
messages on the client, the precompiler uses the server connection to perform
some of the validation.
The precompiler also creates the information the database manager needs to
process the SQL statements against a database. This information is stored in a
package, in a bind file, or in both, depending on the precompiler options
selected.
A typical example of using the precompiler follows. To precompile a C
embedded SQL source file called filename.sqc, you can issue the following
command to create a C source file with the default name filename.c and a
bind file with the default name filename.bnd:
DB2 PREP filename.sqc BINDFILE
For detailed information on precompiler syntax and options, see the Command
Reference.
Bind File

Message File
v The database manager library containing the database manager APIs for
your operating environment. Refer to the Application Building Guide or other
programming documentation for your operating platform for the specific
name of the database manager library you need for your database manager
APIs.
Binding
Binding is the process that creates the package the database manager needs in
order to access the database when the application is executed. Binding can be
done implicitly by specifying the PACKAGE option during precompilation, or
explicitly by using the BIND command against the bind file created during
precompilation.
A typical example of using the BIND command follows. To bind a bind file
named filename.bnd to the database, you can issue the following command:
DB2 BIND filename.bnd
For detailed information on BIND command syntax and options, refer to the
Command Reference.
One package is created for each separately precompiled source code module.
If an application has five source files, of which three require precompilation,
three packages or bind files are created. By default, each package is given a
name that is the same as the name of the source module from which the .bnd
file originated, but truncated to 8 characters. If the name of this newly created
package is the same as a package that currently exists in the target database,
the new package replaces the previously existing package. To explicitly specify
a different package name, you must use the PACKAGE USING option on the
PREP command. See the Command Reference for details.
Renaming Packages
When creating multiple versions of an application, you should avoid
conflicting names by renaming your package. For example, if you have an
application called foo (compiled from foo.sqc), you precompile it and send it
to all the users of your application. The users bind the application to the
database, and then run the application. To make subsequent changes, create a
new version of foo and send this application and its bind file to the users that
require the new version. The new users bind foo.bnd and the new application
runs without any problem. However, when users attempt to run the old
version of the application, they receive a timestamp conflict on the FOO
package (which indicates that the package in the database does not match the
application being run) so they rebind the client. (See Timestamps on page 58
for more information on package timestamps.) Now the users of the new
application receive a timestamp conflict. This problem is caused because both
applications use packages with the same name.
The solution is to use package renaming. When you build the first version of
FOO, you precompile it with the command:
DB2 PREP FOO.SQC BINDFILE PACKAGE USING FOO1
After you distribute this application, users can bind and run it without any
problem. When you build the new version, you precompile it with the
command:
DB2 PREP FOO.SQC BINDFILE PACKAGE USING FOO2
After you distribute the new application, it will also bind and run without
any problem. Since the package name for the new version is FOO2 and the
package name for the first version is FOO1, there is no naming conflict and
both versions of the application can be used.
Binding Dynamic Statements
For dynamically prepared statements, the values of a number of special
registers determine the statement compilation environment:
v The CURRENT QUERY OPTIMIZATION special register determines which
optimization class is used.
v The CURRENT FUNCTION PATH special register determines the function
path used for UDF and UDT resolution.
v The CURRENT EXPLAIN SNAPSHOT register determines whether explain
snapshot information is captured.
v The CURRENT EXPLAIN MODE register determines whether explain table
information is captured, for any eligible dynamic SQL statement.

The default values for these special registers are the same defaults used for
the related bind options. For information on special registers and their
interaction with BIND options, refer to the appendix of the SQL Reference.
Resolving Unqualified Table Names
You can handle unqualified table names in your application by using one of
the following methods:
v For each user, bind the package with different COLLECTION parameters
from different authorization identifiers by using the following commands:
CONNECT TO db_name USER user_name
BIND file_name COLLECTION schema_name
In the above example, db_name is the name of the database, user_name is the
name of the user, and file_name is the name of the application that will be
bound. Note that user_name and schema_name are usually the same value.
Then use the SET CURRENT PACKAGESET statement to specify which
package to use, and therefore, which qualifiers will be used. The default
qualifier is the authorization identifier that is used when binding the
package. For an example of how to use the SET CURRENT PACKAGESET
statement, refer to the SQL Reference.
v Create views for each user with the same name as the table so the
unqualified table names resolve correctly. (Note that the QUALIFIER option
is DB2 Connect only, meaning that it can only be used when using a host
server.)
v Create an alias for each user to point to the desired table.
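For example, the first approach could be sketched as follows for two
hypothetical users, PAYROLL and TESTER; the database name, user names,
and the bind file myapp.bnd are all illustrative:

```sql
CONNECT TO sample USER payroll;
BIND myapp.bnd COLLECTION payroll;

CONNECT TO sample USER tester;
BIND myapp.bnd COLLECTION tester;

-- At run time, each user selects the package bound under the matching
-- schema, so unqualified table names resolve to that qualifier.
SET CURRENT PACKAGESET = 'PAYROLL';
```

Each copy of the package carries its own default qualifier, so the same
application resolves unqualified names differently per user.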
Other Binding Considerations
If your application code page uses a different code page from your database
code page, you may need to consider which code page to use when binding.
See Conversion Between Different Code Pages on page 515.
If your application issues calls to any of the database manager utility APIs,
such as IMPORT or EXPORT, you must bind the supplied utility bind files to
the database. For details, refer to the Quick Beginnings guide for your
platform.
You can use bind options to control certain operations that occur during
binding, as in the following examples:
v The QUERYOPT bind option takes advantage of a specific optimization
class when binding.
v The EXPLSNAP bind option stores Explain Snapshot information for
eligible SQL statements in the Explain tables.
v The FUNCPATH bind option properly resolves user-defined distinct types
and user-defined functions in static SQL.
For information on bind options, refer to the section on the BIND command in
the Command Reference.
If the bind process starts but never returns, it may be that other applications
connected to the database hold locks that you require. In this case, check
whether any applications are connected to the database. If they are,
disconnect all applications on the server, and the bind process will continue.
If your application will access a server using DB2 Connect, you can use the
BIND options available for that server. For details on the BIND command and
its options, refer to the Command Reference.
Bind files are not backward compatible with previous versions of DB2
Universal Database. In mixed-level environments, DB2 can only use the
functions available to the lowest level of the database environment. For
example, if a V5.2 client connects to a V5.0 server, the client will only be able
to use V5.0 functions. As bind files express the functionality of the database,
they are subject to the mixed-level restriction.
If you need to rebind higher-level bind files on lower-level systems, you can:
Chapter 3. Embedded SQL Overview
The db2bfd (bind file description) tool displays the contents of a bind file:

db2bfd -h | -b | -s | -v filespec

The -h option displays help information, -b displays the bind file header,
-s displays the SQL statements stored in the bind file, and -v displays the
host variable declarations.
Timestamps
When generating a package or a bind file, the precompiler generates a
timestamp. The timestamp is stored in the bind file or package and in the
modified source file.
When an application is precompiled with binding enabled, the package and
modified source file are generated with timestamps that match. When the
application is run, the timestamps are checked for equality. An application
and an associated package must have matching timestamps for the application
to run, or an SQL0818N error is returned to the application.
Remember that when you bind an application to a database, the first eight
characters of the application name are used as the package name unless you
override the default by using the PACKAGE USING option on the PREP command.
This means that if you precompile and bind two programs using the same
name, the second will override the package of the first. When you run the
first program, you will get a timestamp error because the timestamp for the
modified source file no longer matches that of the package in the database.
When an application is precompiled with binding deferred, one or more bind
files and modified source files are generated with matching timestamps.
Before the application can run, each bind file produced from the application
modules must be bound to the database, as discussed in Binding on page 53.
The application and package timestamps match because the bind file contains
the same timestamp as the one that was stored in the modified source file
during precompilation.
Rebinding
Rebinding is the process of recreating a package for an application program
that was previously bound. You must rebind packages if they have been
marked invalid or inoperative. In some situations, however, you may want to
rebind packages that are valid. For example, you may want to take advantage
of a newly created index, or make use of updated statistics after executing the
RUNSTATS command.
C      static.sqc
Java   Static.sqlj
COBOL  static.sqb
The REXX language does not support static SQL, so a sample is not provided.
This sample program contains a query that selects a single row. Such a query
can be performed using the SELECT INTO statement.
The SELECT INTO statement selects one row of data from tables in a
database, and the values in this row are assigned to host variables specified in
the statement. Host variables are discussed in detail in Using Host Variables
on page 71. For example, the following statement will deliver the salary of
the employee with the last name of 'HAAS' into the host variable empsal:
SELECT SALARY
INTO :empsal
FROM EMPLOYEE
WHERE LASTNAME='HAAS'
A SELECT INTO statement must return no more than one row. Finding more
than one row results in an error, SQLCODE -811 (SQLSTATE 21000). If a
query can return several rows, a cursor must be used to
process the rows. See Selecting Multiple Rows Using a Cursor on page 81
for more information.
For more details on the SELECT INTO statement, refer to the SQL Reference.
Chapter 4. Writing Static SQL Programs
See Using GET ERROR MESSAGE in Example Programs on page 119 for
the source code for this error checking utility.
C Example: STATIC.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
}
/* end of program : STATIC.SQC */
// URL is jdbc:db2:dbname
}
else
{
throw new Exception("\nUsage: java Static [username password]\n");
}
// Set the default context
DefaultContext ctx = new DefaultContext(con);
DefaultContext.setDefaultContext(ctx);
String firstname = null;
#sql { SELECT FIRSTNME INTO :firstname
FROM employee
WHERE LASTNAME = 'JOHNSON' } ;
System.out.println (e);
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: STATIC".
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT statement must be entered in a VARCHAR
* format with the length of the input string.
inspect passwd-name tallying passwd-length
for characters before initial " ".
Retrieving Data
One of the most common tasks of an SQL application program is to retrieve
data. This is done using the select-statement, which is a form of query that
searches for rows of tables in the database that meet specified search
conditions. If such rows exist, the data is retrieved and placed into specified
variables in the host program, where the application can process it.
After you have written a select-statement, you code the SQL statements that
define how information will be passed to your application.
You can think of the result of a select-statement as being a table having rows
and columns, much like a table in the database. If only one row is returned,
you can deliver the results directly into host variables specified by the
SELECT INTO statement.
If more than one row is returned, you must use a cursor to fetch them one at a
time. A cursor is a named control structure used by an application program to
point to a specific row within an ordered set of rows. For information about
how to code and use cursors, see the following sections:
v Declaring and Using the Cursor on page 81,
v Selecting Multiple Rows Using a Cursor on page 81,
v Example: Cursor Program on page 84.
Host variables are declared in compiled host languages, and are delimited by
BEGIN DECLARE SECTION and END DECLARE SECTION statements.
These statements enable the precompiler to find the declarations.
Note: Java JDBC and SQLJ programs do not use declare sections. Host
variables in Java follow the normal Java variable declaration syntax.
Host variables are declared using a subset of the host language. For a
description of the supported syntax for your host language, see:
v Chapter 20. Programming in C and C++ on page 593
v Chapter 21. Programming in Java on page 637
v Chapter 23. Programming in COBOL on page 679
v Chapter 24. Programming in FORTRAN on page 701
v Chapter 25. Programming in REXX on page 717.
The following rules apply to host variable declaration sections:
v All host variables must be declared in the source file before they are
referenced, except for host variables referring to SQLDA structures.
v Multiple declare sections may be used in one source file.
v The precompiler is unaware of host language variable scoping rules.
With respect to SQL statements, all host variables have a global scope
regardless of where they are actually declared in a single source file.
Therefore, host variable names must be unique within a source file.
This does not mean that the DB2 precompiler changes the scope of host
variables to global so that they can be accessed outside the scope in which
they are defined. Consider the following example:
foo1(){
.
.
.
EXEC SQL BEGIN DECLARE SECTION;
int x;
EXEC SQL END DECLARE SECTION;
x=10;
.
.
.
}
foo2(){
.
.
.
y=x;
.
.
.
}
Depending on the language, the above example will either fail to compile
because variable x is not declared in function foo2() or the value of x
would not be set to 10 in foo2(). To avoid this problem, you must either
declare x as a global variable, or pass x as a parameter to function foo2()
as follows:
foo1(){
.
.
.
EXEC SQL BEGIN DECLARE SECTION;
int x;
EXEC SQL END DECLARE SECTION;
x=10;
foo2(x);
.
.
.
}
foo2(int x){
.
.
.
y=x;
.
.
.
}
For example, to generate the declarations for the STAFF table in the SAMPLE
database in C in the output file staff.h, issue the following command:
db2dclgn -d sample -t staff -l C
Java (SQLJ)
#sql { FETCH :c1 INTO :cm };
if ( cm == null )
System.out.println( "Commission is NULL\n" );
COBOL
EXEC SQL FETCH C1 INTO :cm INDICATOR :cmind END-EXEC
IF cmind LESS THAN 0
DISPLAY 'Commission is NULL'
FORTRAN
EXEC SQL FETCH C1 INTO :cm INDICATOR :cmind
IF ( cmind .LT. 0 ) THEN
WRITE(*,*) 'Commission is NULL'
ENDIF
In these examples, cmind is examined for a negative value. If it is not negative, the
application can use the returned value of cm. If it is negative, the fetched
value is NULL and cm should not be used. The database manager does not
change the value of the host variable in this case.
Note: If the database configuration parameter DFT_SQLMATHWARN is set to
YES, the value of cmind may be -2. This indicates a NULL that was
caused by evaluating an expression with an arithmetic error or by an
overflow while attempting to convert the numeric result value to the
host variable.
If a fetched value can be NULL, the application must provide a NULL
indicator; otherwise, an error may occur. If a NULL indicator is not used and
a NULL value is fetched, an SQLCODE -305 (SQLSTATE 22002) is returned.
Data Types
Each column of every DB2 table is given an SQL data type when the column is
created. For information about how these types are assigned to columns, refer
to the CREATE TABLE statement in the SQL Reference. The database manager
supports the following column data types:
SMALLINT
16-bit signed integer.
INTEGER
32-bit signed integer. INT can be used as a synonym for this type.
BIGINT
64-bit signed integer.
DOUBLE
Double-precision floating point. DOUBLE PRECISION and
FLOAT(n) (where n is greater than 24) are synonyms for this type.
REAL Single-precision floating point. FLOAT(n) (where n is less than 25) is a
synonym for this type.
DECIMAL
Packed decimal. DEC, NUMERIC, and NUM are synonyms for this
type.
CHAR
Fixed-length character string of length 1 byte to 254 bytes.
CHARACTER can be used as a synonym for this type.
VARCHAR
Variable-length character string of length 1 byte to 32672 bytes.
CHARACTER VARYING and CHAR VARYING are synonyms for
this type.
LONG VARCHAR
Long variable-length character string of length 1 byte to 32 700 bytes.
CLOB Large object variable-length character string of length 1 byte to 2
gigabytes.
BLOB Large object variable-length binary string of length 1 byte to 2
gigabytes.
DATE Character string of length 10 representing a date.
TIME Character string of length 8 representing a time.
TIMESTAMP
Character string of length 26 representing a timestamp.
The following data types are supported only in double-byte character set
(DBCS) and Extended UNIX Code (EUC) character set environments:
GRAPHIC
Fixed-length graphic string of length 1 to 127 double-byte characters.
VARGRAPHIC
Variable-length graphic string of length 1 to 16336 double-byte
characters.
LONG VARGRAPHIC
Long variable-length graphic string of length 1 to 16 350 double-byte
characters.
DBCLOB
Large object variable-length graphic string of length 1 to 1 073 741 823
double-byte characters.
Notes:
1. Every supported data type can have the NOT NULL attribute. This is
treated as another type.
2. The above set of data types can be extended by defining user-defined
distinct types (UDT). UDTs are separate data types which use the
representation of one of the built-in SQL types.
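The three integer types above are fixed-width two's-complement values. The following sketch is illustrative only (it uses Python's standard struct module, not a DB2 interface) and shows the range of values each type can hold:

```python
import struct

# Format codes: "h" = 16-bit, "i" = 32-bit, "q" = 64-bit signed integer
for sql_type, fmt, bits in [("SMALLINT", "h", 16),
                            ("INTEGER", "i", 32),
                            ("BIGINT", "q", 64)]:
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    struct.pack(fmt, lo)   # raises struct.error if the value did not fit
    struct.pack(fmt, hi)
    print(f"{sql_type}: {lo} .. {hi}")
```

A value outside these bounds must be stored in a wider type or in DECIMAL.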
Supported host languages have data types that correspond to the majority of
the database manager data types. Only these host language data types can be
used in host variable declarations. When the precompiler finds a host variable
declaration, it determines the appropriate SQL data type value. The database
manager uses this value to convert the data exchanged between itself and the
application.
As the application programmer, it is important for you to understand how the
database manager handles comparisons and assignments between different
data types. Simply put, data types must be compatible with each other during
assignment and comparison operations, whether the database manager is
working with two SQL column data types, two host-language data types, or
one of each.
The general rule for data type compatibility is that all supported host-language
numeric data types are comparable and assignable with all database manager
numeric data types, and all host-language character types are compatible with
all database manager character types; numeric types are incompatible with
character types. However, there are also some exceptions to this general rule
depending on host language idiosyncrasies and limitations imposed when
working with large objects.
Within SQL statements, DB2 provides conversions between compatible data
types. For example, in the following SELECT statement, SALARY and BONUS
are DECIMAL columns; however, each employee's total compensation is
returned as DOUBLE data:
SELECT EMPNO, DOUBLE(SALARY+BONUS) FROM EMPLOYEE
Note that the execution of the above statement includes conversion between
DECIMAL and DOUBLE data types. To make the query results more readable
on your screen, you could use the following SELECT statement:
Chapter 4. Writing Static SQL Programs
To convert data within your application, contact your compiler vendor for
additional routines, classes, built-in types, or APIs that support this
conversion.
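As a rough illustration of why such conversions matter (Python's decimal module standing in for packed-decimal data; the values are invented), converting an exact DECIMAL result to double-precision floating point can introduce rounding:

```python
from decimal import Decimal

salary = Decimal("52750.00")
bonus = Decimal("500.10")
total = salary + bonus       # exact decimal arithmetic, as with DECIMAL columns
as_double = float(total)     # DOUBLE conversion: binary floating point, may round
print(total, as_double)
```

For monetary data, an application normally keeps values in a decimal representation and converts to floating point only for display or approximate calculation.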
Character data types may also be subject to character conversion. If your
application code page is not the same as your database code page, see
Conversion Between Different Code Pages on page 515.
For the list of supported SQL data types and the corresponding host language
data types, see the following:
v for C/C++, Supported SQL Data Types in C and C++ on page 627
v for Java, Supported SQL Data Types in Java on page 639
v for COBOL, Supported SQL Data Types in COBOL on page 695
v for FORTRAN, Supported SQL Data Types in FORTRAN on page 712
v for REXX, Supported SQL Data Types in REXX on page 726.
For more information about SQL data types, the rules of assignments and
comparisons, and data conversion and conversion errors, refer to the SQL
Reference.
Language
Example Source Code
C/C++
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME, DEPT FROM STAFF
WHERE JOB=:host_var;
Java (SQLJ)
#sql iterator cursor1(host_var data type);
#sql cursor1 = { SELECT NAME, DEPT FROM STAFF
WHERE JOB=:host_var };
COBOL
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME, DEPT FROM STAFF
WHERE JOB=:host-var END-EXEC.
FORTRAN
EXEC SQL DECLARE C1 CURSOR FOR
+        SELECT NAME, DEPT FROM STAFF
+        WHERE JOB=:host_var
2. Open the cursor and fetch data from the result table one row at a time:
EXEC SQL OPEN EMPLUPDT
.
.
.
EXEC SQL FETCH EMPLUPDT
INTO :upd_emp, :upd_lname, :upd_tele, :upd_jobcd, :upd_wage,
4. After a COMMIT is issued, you must issue a FETCH before you can
update another row.
You should include code in your application to detect and handle an
SQLCODE -501 (SQLSTATE 24501), which can be returned on a FETCH or
CLOSE statement if your application either:
v Uses cursors declared WITH HOLD
v Executes more than one unit of work and leaves a WITH HOLD cursor
open across the unit of work boundary (COMMIT WORK).
If an application invalidates its package by dropping a table on which it is
dependent, the package gets rebound dynamically. If this is the case, an
SQLCODE -501 (SQLSTATE 24501) is returned for a FETCH or CLOSE
statement because the database manager closes the cursor. The way to handle
cursor.sqc
Java
Cursor.sqlj
COBOL
cursor.sqb
Since REXX does not support static SQL, a sample is not provided. See
Example: Dynamic SQL Program on page 133 for a REXX example that
processes a cursor dynamically.
How the Cursor Program Works
1. Declare the cursor. The DECLARE CURSOR statement associates the
cursor c1 to a query. The query identifies the rows that the application
retrieves using the FETCH statement. The job field of staff is defined to
be updatable, even though it is not specified in the result table.
2. Open the cursor. The cursor c1 is opened, causing the database manager
to perform the query and build a result table. The cursor is positioned
before the first row.
3. Retrieve a row. The FETCH statement positions the cursor at the next row
and moves the contents of the row into the host variables. This row
becomes the current row.
4. Close the cursor. The CLOSE statement is issued, releasing the resources
associated with the cursor. The cursor can be opened again, however.
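The declare/open/fetch/close flow above can be sketched outside embedded SQL. The following is a rough Python analogue using the standard sqlite3 module with an invented in-memory staff table (not one of the DB2 sample programs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT, dept INTEGER, job TEXT)")
con.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                [("Sanders", 20, "Mgr"),
                 ("Pernal", 20, "Sales"),
                 ("Marenghi", 38, "Mgr")])

# Steps 1 and 2: associate the cursor with a query and open it;
# the cursor is positioned before the first row of the result table.
cur = con.execute("SELECT name, dept FROM staff WHERE job = 'Mgr' ORDER BY name")

fetched = []
row = cur.fetchone()          # Step 3: FETCH advances to the next row
while row is not None:
    fetched.append(row)
    row = cur.fetchone()

cur.close()                   # Step 4: CLOSE releases the cursor's resources
print(fetched)
```

As in the embedded SQL version, fetching past the last row signals end of data (here, fetchone() returns None rather than raising SQLCODE +100).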
Java
COBOL
FORTRAN
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: CURSOR.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
EMB_SQL_CHECK("CLOSE CURSOR");
EXEC SQL ROLLBACK;
EMB_SQL_CHECK("ROLLBACK");
printf( "\nOn second thought -- changes rolled back.\n" );
EXEC SQL CONNECT RESET;
EMB_SQL_CHECK("CONNECT RESET");
return 0;
}
/* end of program : CURSOR.SQC */
// URL is jdbc:db2:dbname
}
else
{
throw new Exception("\nUsage: java Cursor [username password]\n");
}
// Set the default context
DefaultContext ctx = new DefaultContext(con);
DefaultContext.setDefaultContext(ctx);
// Enable transactions
con.setAutoCommit(false);
// Using cursors
try
{
CursorByName cursorByName;
CursorByPos cursorByPos;
}
cursorByName.close(); 4
}
cursorByPos.close(); 4
}
catch( Exception e )
{
throw e;
}
finally
{
// Rollback the transaction
System.out.println("\nRollback the transaction...");
#sql { ROLLBACK };
System.out.println("Rollback done.");
}
}
catch( Exception e )
{
System.out.println (e);
}
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: CURSOR".
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT
* with the length of the
inspect passwd-name
before initial "
1
2
4
3
End-Prog.
stop run.
Types of Cursors
Cursors fall into three categories:
Read only
The rows in the cursor can only be read, not updated. Read-only
cursors are used when an application will only read data, not modify
it. A cursor is considered read only if it is based on a read-only
select-statement. See the rules in Updating Retrieved Data for
select-statements which define non-updatable result tables.
There can be performance advantages for read-only cursors. For more
information on read-only cursors, refer to the Administration Guide:
Implementation.
Updatable
The rows in the cursor can be updated. Updatable cursors are used
when an application modifies data as the rows in the cursor are
fetched. The specified query can only refer to one table or view. The
query must also include the FOR UPDATE clause, naming each
column that will be updated (unless the LANGLEVEL MIA
precompile option is used).
Ambiguous
The cursor cannot be determined to be updatable or read only from
its definition or context. This can happen when a dynamic SQL
statement is encountered that could be used to change a cursor that
would otherwise be considered read-only.
An ambiguous cursor is treated as read only if the BLOCKING ALL
option is specified when precompiling or binding. Otherwise, it is
considered updatable.
Note: Cursors processed dynamically are always ambiguous.
For a complete list of criteria used to determine whether a cursor is read-only,
updatable, or ambiguous, refer to the SQL Reference.
openftch.sqc
Java
COBOL
openftch.sqb
The REXX language does not support static SQL, so a sample is not provided.
How the OPENFTCH Program Works
1. Declare the cursor. The DECLARE CURSOR statement associates the
cursor c1 to a query. The query identifies the rows that the application
retrieves using the FETCH statement. The job field of staff is defined to
be updatable, even though it is not specified in the result table.
2. Open the cursor. The cursor c1 is opened, causing the database manager
to perform the query and build a result table. The cursor is positioned
before the first row.
3. Retrieve a row. The FETCH statement positions the cursor at the next row
and moves the contents of the row into the host variables. This row
becomes the current row.
4. Update or delete the current row. The current row is either updated or
deleted, depending upon the value of dept returned with the FETCH
statement.
If an UPDATE is performed, the position of the cursor remains on this row
because the UPDATE statement does not change the position of the
current row.
If a DELETE statement is performed, a different situation arises, because
the current row is deleted. This is equivalent to being positioned before the
next row, and a FETCH statement must be issued before additional
WHERE CURRENT OF operations are performed.
5. Close the cursor. The CLOSE statement is issued, releasing the resources
associated with the cursor. The cursor can be opened again, however.
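DB2's positioned UPDATE and DELETE (WHERE CURRENT OF) are not available in every SQL dialect. As a rough illustration of the fetch-then-update-or-delete logic in steps 3 through 5, the following Python sqlite3 sketch emulates a positioned operation by keying on each fetched row's rowid; the data and the dept-greater-than-40 rule mirror the sample program:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT, dept INTEGER, job TEXT)")
con.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                [("Sanders", 20, "Mgr"), ("Plotz", 42, "Mgr")])

cur = con.execute("SELECT rowid, name, dept FROM staff WHERE job = 'Mgr'")
for rid, name, dept in cur.fetchall():
    if dept > 40:
        # Positioned UPDATE analogue: demote this row to Clerk
        con.execute("UPDATE staff SET job = 'Clerk' WHERE rowid = ?", (rid,))
    else:
        # Positioned DELETE analogue: remove this row
        con.execute("DELETE FROM staff WHERE rowid = ?", (rid,))

remaining = con.execute("SELECT name, job FROM staff ORDER BY name").fetchall()
print(remaining)
```

Note that after the DELETE branch, the emulated cursor position is simply the next element of the fetched list, which corresponds to the rule that a FETCH must follow a positioned DELETE.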
The CHECKERR macro/function is an error checking utility which is external to
the program. The location of this error checking utility depends upon the
programming language used:
C
Java
COBOL
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: OPENFTCH.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
}
else
{
printf ("%-10.10s in dept. %2d will be DELETED!\n",
pname, dept);
EXEC SQL DELETE FROM staff WHERE CURRENT OF c1;
EMB_SQL_CHECK("DELETE");
} /* endif */
} while ( 1 );
EXEC SQL CLOSE c1; 5
EMB_SQL_CHECK("CLOSE CURSOR");
EXEC SQL ROLLBACK;
EMB_SQL_CHECK("ROLLBACK");
printf( "\nOn second thought -- changes rolled back.\n" );
EXEC SQL CONNECT RESET;
EMB_SQL_CHECK("CONNECT RESET");
return 0;
}
/* end of program : OPENFTCH.SQC */
import sqlj.runtime.ForUpdate;
#sql public iterator OpF_Curs implements ForUpdate (String, short);
Openftch.sqlj
import java.sql.*;
import sqlj.runtime.*;
import sqlj.runtime.ref.*;
class Openftch
{
static
{
try
{
Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{
System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
public static void main(String argv[])
{
try
{
System.out.println (" Java Openftch Sample");
String url = "jdbc:db2:sample";
Connection con = null;
// URL is jdbc:db2:dbname
}
else
{
throw new Exception(
"\nUsage: java Openftch [username password]\n");
} // if - else if - else
// Set the default context
DefaultContext ctx = new DefaultContext(con);
DefaultContext.setDefaultContext(ctx);
// Enable transactions
con.setAutoCommit(false);
// Executing SQLJ positioned update/delete statements.
try
{
OpF_Curs forUpdateCursor;
String name = null;
short dept=0;
#sql forUpdateCursor =
{
SELECT name, dept
FROM staff
WHERE job='Mgr'
}; // #sql
12
while (true)
{
#sql
{
FETCH :forUpdateCursor
INTO :name, :dept
}; // #sql
3
if (forUpdateCursor.endFetch()) break;
if (dept > 40)
{
System.out.println (
name + " in dept. "
+ dept + " will be demoted to Clerk");
#sql
{
UPDATE staff SET job = 'Clerk'
WHERE CURRENT OF :forUpdateCursor
}; // #sql
4
}
else
{
System.out.println (
name + " in dept. " + dept
+ " will be DELETED!");
#sql
{
DELETE FROM staff
WHERE CURRENT OF :forUpdateCursor
}; // #sql
} // if - else
}
forUpdateCursor.close(); 5
}
catch( Exception e )
{
throw e;
}
finally
{
// Rollback the transaction
System.out.println("\nRollback the transaction...");
#sql { ROLLBACK };
System.out.println("Rollback done.");
} // try - catch - finally
}
catch( Exception e )
{
System.out.println (e);
} // try - catch
} // main
} // class Openftch
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: OPENFTCH".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT
* with the length of the
inspect passwd-name
before initial "
1
2
5
3
4
End-Fetch-Loop. exit.
End-Prog.
stop run.
Now, suppose that you want to return to the rows that start with DEPTNO =
'M95' and fetch sequentially from that point. Code the following:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'CALIFORNIA'
AND DEPTNO >= 'M95'
ORDER BY DEPTNO
Because of the subtle relationships between the form of an SQL statement and
the values in this statement, never assume that two different SQL statements
will return rows in the same order unless the order is uniquely determined by
an ORDER BY clause.
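The restart technique shown above can be sketched in any SQL dialect. The following Python sqlite3 illustration (with invented department rows, not the DB2 SAMPLE data) resumes a scan at DEPTNO 'M95' and continues in key order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE department (deptno TEXT, location TEXT)")
con.executemany("INSERT INTO department VALUES (?, ?)",
                [("A00", "CALIFORNIA"), ("M95", "CALIFORNIA"), ("Z98", "CALIFORNIA")])

# Restart the scan at deptno >= 'M95'; ORDER BY makes the sequence deterministic
rows = con.execute(
    "SELECT deptno FROM department "
    "WHERE location = 'CALIFORNIA' AND deptno >= 'M95' "
    "ORDER BY deptno").fetchall()
print(rows)
```

Because the restart predicate and the ORDER BY use the same column, the second query is guaranteed to pick up exactly where the first left off.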
Retrieving in Reverse Order
Ascending ordering of rows is the default. If there is only one row for each
value of DEPTNO, then the following statement specifies a unique ascending
ordering of rows:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'CALIFORNIA'
ORDER BY DEPTNO
To retrieve the same rows in reverse order, specify that the order is
descending, as in the following statement:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'CALIFORNIA'
ORDER BY DEPTNO DESC
A cursor on the second statement retrieves rows in exactly the opposite order
from a cursor on the first statement. Order of retrieval is guaranteed only if
the first statement specifies a unique ordering sequence.
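Assuming one row per DEPTNO value, the descending cursor retrieves exactly the reverse sequence of the ascending one; a quick sketch (Python sqlite3, invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE department (deptno TEXT, location TEXT)")
con.executemany("INSERT INTO department VALUES (?, ?)",
                [("E21", "CALIFORNIA"), ("A00", "CALIFORNIA"), ("M95", "CALIFORNIA")])

rows_asc = con.execute("SELECT deptno FROM department "
                       "WHERE location = 'CALIFORNIA' ORDER BY deptno").fetchall()
rows_desc = con.execute("SELECT deptno FROM department "
                        "WHERE location = 'CALIFORNIA' ORDER BY deptno DESC").fetchall()
print(rows_asc, rows_desc)
```

If DEPTNO values were duplicated, the relative order of the duplicates would not be guaranteed to reverse, which is why a unique ordering sequence is required.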
For retrieving rows in reverse order, it can be useful to have two indexes on
the DEPTNO column, one in ascending order and the other in descending order.
For this example, the following statement positions the cursor at the row with
the highest DEPTNO value:
Note, however, that if several rows have the same value, the cursor is
positioned on the first of them.
updat.sqc
Java
Updat.sqlj
COBOL
updat.sqb
REXX
updat.cmd
Host variables are used to pass data to and from the database manager.
They are prefixed with a colon (:) when referenced in an SQL statement.
Java and REXX applications do not need to declare host variables, except
(for REXX) in the case of LOB file reference variables and locators. Host
variable data types and sizes are determined at run time when the
variables are referenced.
3. Connect to database. The program connects to the sample database, and
requests shared access to it. (It is assumed that a START DATABASE
MANAGER API call or db2start command has been issued.) Other
programs that connect to the same database using shared access are also
granted access.
4. Execute the UPDATE SQL statement. The SQL statement is executed
statically with the use of a host variable. The job column of the staff
table is set to the value of the host variable, where the job column has a
value of Mgr.
5. Execute the DELETE SQL statement. The SQL statement is executed
statically with the use of a host variable. All rows that have a job column
value equal to that of the specified host variable (jobUpdate/jobupdate/job_update) are deleted.
6. Execute the INSERT SQL statement. A row is inserted into the STAFF table.
This insertion implements the use of a host variable which was set prior to
the execution of this SQL statement.
7. End the transaction. End the unit of work with a ROLLBACK statement.
The result of the SQL statements executed previously can be either made
permanent using the COMMIT statement, or undone using the
ROLLBACK statement. All SQL statements within the unit of work are
affected.
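The update/delete/insert-then-rollback unit of work can be sketched as follows (Python sqlite3 with invented staff rows; illustration only, not one of the DB2 sample programs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (id INTEGER, name TEXT, dept INTEGER, job TEXT)")
con.executemany("INSERT INTO staff VALUES (?, ?, ?, ?)",
                [(10, "Sanders", 20, "Mgr"), (30, "Marenghi", 38, "Sales")])
con.commit()                                  # committed starting state

con.execute("UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr'")        # step 4
con.execute("DELETE FROM staff WHERE job = 'Sales'")                   # step 5
con.execute("INSERT INTO staff VALUES (999, 'Testing', 99, 'Clerk')")  # step 6
con.rollback()                                # step 7: undo the unit of work

jobs = [r[0] for r in con.execute("SELECT job FROM staff ORDER BY id")]
print(jobs)
```

After the rollback, the table is exactly as it was at the last commit; replacing rollback() with commit() would instead make all three changes permanent.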
Java
COBOL
REXX
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: UPDAT.SQC
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlenv.h>
#include "utilemb.h"
UPDAT \n");
if (argc == 1)
{
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3)
{
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd; 3
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else
{
printf ("\nUSAGE: updat [userid passwd]\n\n");
return 1;
} /* endif */
strcpy (jobUpdate, "Clerk");
EXEC SQL UPDATE staff SET job = :jobUpdate WHERE job = 'Mgr'; 4
EMB_SQL_CHECK("UPDATE STAFF");
printf ("All 'Mgr' have been demoted to 'Clerk'!\n" );
strcpy (jobUpdate, "Sales");
EXEC SQL DELETE FROM staff WHERE job = :jobUpdate; 5
EMB_SQL_CHECK("DELETE FROM STAFF");
printf ("All 'Sales' people have been deleted!\n");
EXEC SQL INSERT INTO staff
VALUES (999, 'Testing', 99, :jobUpdate, 0, 0, 0); 6
EMB_SQL_CHECK("INSERT INTO STAFF");
printf ("New data has been inserted\n");
EXEC SQL ROLLBACK; 7
EMB_SQL_CHECK("ROLLBACK");
printf( "On second thought -- changes rolled back.\n" );
EXEC SQL CONNECT RESET;
EMB_SQL_CHECK("CONNECT RESET");
return 0;
}
/* end of program : UPDAT.SQC */
// URL is jdbc:db2:dbname
}
else
{
throw new Exception("\nUsage: java Updat [username password]\n");
}
// Set the default context
DefaultContext ctx = new DefaultContext(con);
DefaultContext.setDefaultContext(ctx);
// Enable transactions
con.setAutoCommit(false);
// UPDATE/DELETE/INSERT
try
{
String jobUpdate = null;
jobUpdate="Clerk";
#sql {UPDATE staff SET job = :jobUpdate WHERE job = 'Mgr'}; 4
}
catch( Exception e )
{
throw e;
}
finally
{
// Rollback the transaction
System.out.println("\nRollback the transaction...");
#sql { ROLLBACK };
System.out.println("Rollback done.");
}
7
}
catch (Exception e)
{
System.out.println (e);
}
1
2
pic x(80).
pic s9(9) comp-5.
pic s9(9) comp-5.
UPDAT".
3
4
move "Sales" to job-update.
EXEC SQL DELETE FROM staff WHERE job=:job-update END-EXEC.
move "DELETE FROM STAFF" to errloc.
call "checkerr" using SQLCA errloc.
5
6
7
exit -1
/* connect to database */
SAY
SAY 'Connect to' dbname
IF password= "" THEN
CALL SQLEXEC 'CONNECT TO' dbname
ELSE
CALL SQLEXEC 'CONNECT TO' dbname 'USER' userid 'USING' password
CALL CHECKERR 'Connect to '
SAY "Connected"
say 'Sample REXX program: UPDAT.CMD'
jobupdate = "'Clerk'"
st = "UPDATE staff SET job =" jobupdate "WHERE job = 'Mgr'"
call SQLEXEC 'EXECUTE IMMEDIATE :st' 4
call CHECKERR 'UPDATE'
say "All 'Mgr' have been demoted to 'Clerk'!"
jobupdate = "'Sales'"
st = "DELETE FROM staff WHERE job =" jobupdate
call SQLEXEC 'EXECUTE IMMEDIATE :st' 5
call CHECKERR 'DELETE'
say "All 'Sales' people have been deleted!"
st = "INSERT INTO staff VALUES (999, 'Testing', 99," jobupdate ", 0, 0, 0)"
call SQLEXEC 'EXECUTE IMMEDIATE :st' 6
call CHECKERR 'INSERT'
say 'New data has been inserted'
call SQLEXEC 'ROLLBACK' 7
call CHECKERR 'ROLLBACK'
say 'On second thought...changes rolled back.'
call SQLEXEC 'CONNECT RESET'
call CHECKERR 'CONNECT RESET'
CHECKERR:
arg errloc
if ( SQLCA.SQLCODE = 0 ) then
return 0
else do
say '--- error report ---'
say 'ERROR occurred :' errloc
say 'SQLCODE :' SQLCA.SQLCODE
/******************************\
* GET ERROR MESSAGE API called *
\******************************/
call SQLDBS 'GET MESSAGE INTO :errmsg LINEWIDTH 80'
say errmsg
say '--- end error report ---'
if (SQLCA.SQLCODE < 0 ) then
exit
else do
say 'WARNING - CONTINUING PROGRAM WITH ERRORS'
return 0
end
end
return 0
Return Codes
Most database manager APIs pass back a zero return code when successful. In
general, a non-zero return code indicates that the secondary error handling
mechanism, the SQLCA structure, may be corrupt. In this case, the called API
is not executed. A possible cause for a corrupt SQLCA structure is passing an
invalid address for the structure.
Refer to the Administrative API Reference for more information about the
SQLCA structure, and the Message Reference for a listing of SQLCODE and
SQLSTATE error conditions.
Note: If you want to develop applications that access various IBM RDBMS
servers you should:
v Where possible, have your applications check the SQLSTATE rather
than the SQLCODE.
v If your applications will use DB2 Connect, consider using the
mapping facility provided by DB2 Connect to map SQLCODE
conversions between unlike databases.
CONTINUE
Indicates to continue with the next instruction in the application.
GO TO label
Indicates to go to the statement immediately following the label
specified after GO TO. (GO TO can be two words, or one word,
GOTO.)
If the WHENEVER statement is not used, the default action is to continue
processing if an error, warning, or exception condition occurs during
execution.
The WHENEVER statement must appear before the SQL statements you want
to affect. Otherwise, the precompiler does not know that additional
error-handling code should be generated for the executable SQL statements.
You can have any combination of the three basic forms active at any time. The
order in which you declare the three forms is not significant. To avoid an
infinite looping situation, ensure that you undo the WHENEVER handling
before any SQL statements are executed inside the handler. You can do this
using the WHENEVER SQLERROR CONTINUE statement.
For a complete description of the WHENEVER statement, refer to the SQL
Reference.
UNIX Usually, pressing Ctrl-C generates the SIGINT interrupt signal. Note
that keyboards can easily be redefined so SIGINT may be generated
by a different key sequence on your machine.
For other operating systems that are not in the above list, refer to the
Application Building Guide.
Do not put SQL statements (other than COMMIT or ROLLBACK) in
exception, signal, and interrupt handlers. With these kinds of error conditions,
you normally want to do a ROLLBACK to avoid the risk of inconsistent data.
Note that you should exercise caution when coding a COMMIT and
ROLLBACK in exception/signal/interrupt handlers. If you call either of these
C Example: UTILAPI.C
#include <stdio.h>
#include <stdlib.h>
#include <sql.h>
#include <sqlenv.h>
#include <sqlda.h>
#include <sqlca.h>
#include <string.h>
#include <ctype.h>
#include "utilemb.h"
char sqlInfo[1024];
char sqlInfoToken[1024];
char sqlstateMsg[1024];
char errorMsg[1024];
= %s\n", appMsg);
= %d\n", line);
= %s\n", file);
= %ld\n", pSqlca->sqlcode);
}
else
{
sprintf( sqlInfoToken, "--- end warning report ---\n");
strcat( sqlInfo, sqlInfoToken);
printf("%s", sqlInfo);
return 0;
} /* endif */
} /* endif */
}
return 0;
/******************************************************************************
**
1.2 - TransRollback - rolls back the transaction
******************************************************************************/
void TransRollback( )
{
int rc = 0;
/* rollback the transaction */
printf( "\nRolling back the transaction ...\n") ;
EXEC SQL ROLLBACK;
rc = SqlInfoPrint( "ROLLBACK", &sqlca, __LINE__, __FILE__);
if( rc == 0)
{
printf( "The transaction was rolled back.\n") ;
}
}
returning state-rc.
if error-rc is greater than 0
display error-buffer.
if state-rc is greater than 0
display state-buffer.
if state-rc is less than 0
display "return code from GET SQLSTATE =" state-rc.
if SQLCODE is less than 0
display "--- end error report ---"
go to End-Prog.
display "--- end error report ---"
display "CONTINUING PROGRAM WITH WARNINGS!".
End-Checkerr. exit program.
End-Prog. stop run.
Dynamic SQL support statements are required to transform the host variable
containing SQL text into an executable form and operate on it by referencing
the statement name. These statements are:
EXECUTE IMMEDIATE
Prepares and executes a statement that does not use any host
variables. All EXECUTE IMMEDIATE statements in an application are
cached in the same place at run time, so only the last statement is
known. Use this statement as an alternative to the PREPARE and
EXECUTE statements.
PREPARE
Turns the character string form of the SQL statement into an
executable form of the statement, assigns a statement name, and
optionally places information about the statement in an SQLDA
structure.
EXECUTE
Executes a previously prepared SQL statement. The statement can be
executed repeatedly within a connection.
DESCRIBE
Places information about a prepared statement into an SQLDA.
An application can execute most SQL statements dynamically. See Table 38 on
page 737 for the complete list of supported SQL statements.
Note: The content of dynamic SQL statements follows the same syntax as
static SQL statements, but with the following exceptions:
v Comments are not allowed.
v The statement cannot begin with EXEC SQL.
v The statement cannot end with the statement terminator. An
exception to this is the CREATE TRIGGER statement which can
contain a semicolon (;).
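These statements have loose analogues in most dynamic SQL interfaces. The sketch below (Python sqlite3 with an invented table; not a DB2 API) contrasts a one-shot statement built and run in a single call, the EXECUTE IMMEDIATE pattern, with a parameterized statement prepared once and executed repeatedly, the PREPARE/EXECUTE pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# EXECUTE IMMEDIATE analogue: the full statement text is assembled
# at run time and processed in one step, with no parameters.
stmt = "CREATE TABLE staff (name TEXT, job TEXT)"
con.execute(stmt)

# PREPARE/EXECUTE analogue: one statement with a parameter marker,
# executed repeatedly with different values.
insert = "INSERT INTO staff VALUES (?, 'Clerk')"
for name in ("Sanders", "Pernal", "Marenghi"):
    con.execute(insert, (name,))

count = con.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
print(count)
```

As in DB2, the second pattern pays the cost of processing the statement text once and reuses it, which is why PREPARE and EXECUTE are preferred over EXECUTE IMMEDIATE for statements that run more than once.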
actual choice. When in doubt, prototyping your statements as static SQL, then
as dynamic SQL, and then comparing the differences is the best approach.
Table 6. Comparing Static and Dynamic SQL

Consideration                                Likely Best Choice

Data Uniformity
v Uniform data distribution                  v Static
v Slight non-uniformity                      v either
v Highly non-uniform distribution            v Dynamic

Repetitious Execution
v Runs many times (10 or more times)         v either
v Runs a few times (less than 10 times)      v either
v Runs once                                  v Static

Nature of Query
v Random                                     v Dynamic
v Permanent                                  v either

Frequency of RUNSTATS
v Very infrequently                          v Static
v Regularly                                  v either
v Frequently                                 v Dynamic
In general, an application using dynamic SQL has a higher start-up (or initial)
cost per SQL statement due to the need to compile the SQL statements prior
to using them. Once compiled, the execution time for dynamic SQL compared
to static SQL should be equivalent and, in some cases, faster due to better
access plans being chosen by the optimizer. Each time a dynamic statement is
executed, the initial compilation cost becomes less of a factor. If multiple users
are running the same dynamic application with the same statements, only the
first application to issue the statement realizes the cost of statement
compilation.
In a mixed DML and DDL environment, the compilation cost for a dynamic
SQL statement may vary as the statement may be implicitly recompiled by the
system while the application is running. In a mixed environment, the choice
between static and dynamic SQL must also factor in the frequency with which
packages are invalidated. If the DDL does invalidate packages, dynamic SQL
may be more efficient as only those queries executed are recompiled when
they are next used. Others are not recompiled. For static SQL, the entire
package is rebound once it has been invalidated.
Now suppose your particular application contains a mixture of the above
characteristics and some of these characteristics suggest that you use static
while others suggest dynamic. In this case, there is no clear cut decision and
you should probably use whichever method you have the most experience
with, and with which you feel most comfortable. Note that the considerations
in the above table are listed roughly in order of importance.
Note: Static and dynamic SQL each come in two types that make a difference
to the DB2 optimizer. These are:
1. Static SQL containing no host variables
This is an unlikely situation which you may see only for:
v Initialization code
v Novice training examples
This is actually the best combination from a performance perspective in
that there is no run-time performance overhead and yet the DB2
optimizer's capabilities can be fully realized.
2. Static SQL containing host variables
This is the traditional legacy style of DB2 applications. It avoids the run
time overhead of a PREPARE and catalog locks acquired during statement
compilation. Unfortunately, the full power of the optimizer cannot be
harnessed since it does not know the entire SQL statement. A particular
problem exists with highly non-uniform data distributions.
3. Dynamic SQL containing no parameter markers
This is the typical style for random query interfaces (such as the CLP) and
is the optimizer's preferred flavor of SQL. For complex queries, the
overhead of the PREPARE statement is usually worthwhile due to
improved execution time. For more information on parameter markers, see
Using Parameter Markers on page 161.
4. Dynamic SQL containing parameter markers
This is the most common type of SQL for CLI applications. The key benefit
is that the presence of parameter markers allows the cost of the PREPARE
to be amortized over the repeated executions of the statement, typically a
select or insert. This amortization is true for all repetitive dynamic SQL
applications. Unfortunately, just like static SQL with host variables, parts
In the dynamic SQL case, the query is associated with a statement name
assigned in a PREPARE statement. Any referenced host variables are
represented by parameter markers. Table 7 shows a DECLARE statement
associated with a dynamic SELECT statement.
Table 7. Declare Statement Associated with a Dynamic SELECT
Language
C/C++
Java (JDBC)
COBOL
FORTRAN
The main difference between a static and a dynamic cursor is that a static
cursor is prepared at precompile time, and a dynamic cursor is prepared at
run time. Additionally, host variables referenced in the query are represented
by parameter markers, which are replaced by run-time host variables when
the cursor is opened.
For more information about how to use cursors, see the following sections:
v Selecting Multiple Rows Using a Cursor on page 81
v Example: Cursor Program on page 84
v Using Cursors in REXX on page 728
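The OPEN ... USING pattern, a prepared query whose parameter markers are bound with host variable values when the cursor is opened, can be sketched as follows (Python sqlite3; the syscat.tables query from the sample program is replaced by an invented table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tables (tabname TEXT)")
con.executemany("INSERT INTO tables VALUES (?)",
                [("STAFF",), ("ORG",), ("DEPT",)])

# The statement text contains a parameter marker (?); it stands in
# for a value that is not known until the cursor is opened.
query = "SELECT tabname FROM tables WHERE tabname <> ? ORDER BY 1"

# Opening the cursor supplies the run-time value for the marker,
# as OPEN c1 USING :parm_var does in the sample programs.
cur = con.execute(query, ("STAFF",))
names = [r[0] for r in cur]
print(names)
```

The same prepared query can be reopened with a different value without reprocessing the statement text, which is the key difference from a static cursor, whose query is fixed at precompile time.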
dynamic.sqc
Java
Dynamic.java
COBOL
dynamic.sqb
REXX
dynamic.cmd
Java
COBOL
REXX
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: DYNAMIC.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
}
/* end of program : DYNAMIC.SQC */
}
else
{
throw new Exception("\nUsage: java Dynamic [username password]\n");
}
// Enable transactions
con.setAutoCommit(false);
// Perform dynamic SQL SELECT using JDBC
try
{
PreparedStatement pstmt1 = con.prepareStatement(
"SELECT tabname FROM syscat.tables " +
"WHERE tabname <> ? " +
"ORDER BY 1"); 2
// set cursor name for the positioned update statement
pstmt1.setCursorName("c1");
pstmt1.setString(1, "STAFF");
ResultSet rs = pstmt1.executeQuery();
System.out.print("\n");
while( rs.next() )
3
4
5
}
catch( Exception e )
{
System.out.println(e);
}
7
1
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: DYNAMIC".
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT
* with the length of the
inspect passwd-name
before initial "
move
"
"
EXEC
move
call
2
3
move "STAFF" to parm-var.
EXEC SQL OPEN c1 USING :parm-var END-EXEC.
move "OPEN" to errloc.
call "checkerr" using SQLCA errloc.
4
6
5
exit -1
say 'Sample REXX program: DYNAMIC'
/* connect to database */
SAY
SAY 'Connect to' dbname
IF password= "" THEN
CALL SQLEXEC 'CONNECT TO' dbname
ELSE
CALL SQLEXEC 'CONNECT TO' dbname 'USER' userid 'USING' password
CALL CHECKERR 'Connect to '
SAY "Connected"
st = "SELECT tabname FROM syscat.tables WHERE tabname <> ? ORDER BY 1"
call SQLEXEC 'PREPARE s1 FROM :st' 2
call CHECKERR 'PREPARE'
call SQLEXEC 'DECLARE c1 CURSOR FOR s1' 3
call CHECKERR 'DECLARE'
parm_var = "STAFF"
call SQLEXEC 'OPEN c1 USING :parm_var' 4
do while ( SQLCA.SQLCODE = 0 )
call SQLEXEC 'FETCH c1 INTO :table_name' 5
if (SQLCA.SQLCODE = 0) then
say 'Table = ' table_name
end
call SQLEXEC 'CLOSE c1' 6
call CHECKERR 'CLOSE'
call SQLEXEC 'CONNECT RESET'
call CHECKERR 'CONNECT RESET'
CHECKERR:
arg errloc
if ( SQLCA.SQLCODE = 0 ) then
return 0
else do
say '--- error report ---'
say 'ERROR occurred :' errloc
say 'SQLCODE :' SQLCA.SQLCODE
/******************************\
* GET ERROR MESSAGE API called *
\******************************/
call SQLDBS 'GET MESSAGE INTO :errmsg LINEWIDTH 80'
say errmsg
say '--- end error report ---'
if (SQLCA.SQLCODE < 0 ) then
exit
else do
say 'WARNING - CONTINUING PROGRAM WITH ERRORS'
return 0
end
end
return 0
The SQLDA header contains the fields sqldaid (CHAR), sqldabc (INTEGER), sqln (SMALLINT), and sqld (SMALLINT); each SQLVAR entry includes, among other fields, the sqldata and sqlind pointers.
corresponds to a plain LOB host variable, that is, the whole LOB will be
stored in memory at one time. This will work for small LOBs (up to a few
MB), but you cannot use this data type for large LOBs (say 1 GB). It will
be necessary for your application to change its column definition in the
SQLVAR to be either SQL_TYP_xLOB_LOCATOR or
SQL_TYP_xLOB_FILE. (Note that changing the SQLTYPE field of the
SQLVAR also necessitates changing the SQLLEN field.) After changing the
column definition in the SQLVAR, your application can then allocate the
correct amount of storage for the new type. For more information on
LOBs, see Chapter 10. Using the Object-Relational Capabilities on
page 275.
2. Allocate storage for the value of that column.
3. Store the address of the allocated storage in the SQLDATA field of the
SQLDA structure.
These steps are accomplished by analyzing the description of each column
and replacing the content of each SQLDATA field with the address of a
storage area large enough to hold any values from that column. The length
attribute is determined from the SQLLEN field of each SQLVAR entry for data
items that are not of a LOB type. For items with a type of BLOB, CLOB, or
DBCLOB, the length attribute is determined from the SQLLONGLEN field of
the secondary SQLVAR entry.
In addition, if the specified column allows nulls, then the application must
replace the content of the SQLIND field with the address of an indicator
variable for the column.
The effect of this macro is to calculate the required storage for an SQLDA
with n SQLVAR elements.
To create an SQLDA structure with COBOL, you can either embed an
INCLUDE SQLDA statement or use the COPY statement. Use the COPY
statement when you want to control the maximum number of SQLVARs and
hence the amount of storage that the SQLDA uses. For example, to change the
default number of SQLVARs from 1489 to 1, use the following COPY
statement:
COPY "sqlda.cbl"
replacing --1489-- by --1--.
COBOL
WORKING-STORAGE SECTION.
77 SALARY
PIC S99999V99 COMP-3.
77 SAL-IND
PIC S9(4)
COMP-5.
EXEC SQL INCLUDE SQLDA END-EXEC
* Or code a useful way to save unused SQLVAR entries.
* COPY "sqlda.cbl" REPLACING --1489-- BY --1--.
01 decimal-sqllen pic s9(4) comp-5.
01 decimal-parts redefines decimal-sqllen.
05 precision pic x.
05 scale pic x.
* Initialize one element of output SQLDA
MOVE 1 TO SQLN
MOVE 1 TO SQLD
MOVE SQL-TYP-NDECIMAL TO SQLTYPE(1)
* Length = 7 digits precision and 2 digits scale
SET SQLDATA(1) TO ADDRESS OF SALARY
SET SQLIND(1) TO ADDRESS OF SAL-IND
FORTRAN
include 'sqldact.f'
integer*2 sqlvar1
parameter ( sqlvar1 = sqlda_header_sz + 0*sqlvar_struct_sz )
C
! First Variable
integer*2    out_sqltype1
integer*2    out_sqllen1
integer*4    out_sqldata1
integer*4    out_sqlind1
integer*2    out_sqlnamel1
character*30 out_sqlnamec1
! Header
equivalence( out_sqlda(sqlda_sqldaid_ofs), out_sqldaid )
equivalence( out_sqlda(sqlda_sqldabc_ofs), out_sqldabc )
equivalence( out_sqlda(sqlda_sqln_ofs), out_sqln )
equivalence( out_sqlda(sqlda_sqld_ofs), out_sqld )
! First Variable
equivalence( out_sqlda(sqlvar1+sqlvar_type_ofs), out_sqltype1 )
equivalence( out_sqlda(sqlvar1+sqlvar_len_ofs), out_sqllen1 )
equivalence( out_sqlda(sqlvar1+sqlvar_data_ofs), out_sqldata1 )
equivalence( out_sqlda(sqlvar1+sqlvar_ind_ofs), out_sqlind1 )
equivalence( out_sqlda(sqlvar1+sqlvar_name_length_ofs), out_sqlnamel1 )
equivalence( out_sqlda(sqlvar1+sqlvar_name_data_ofs), out_sqlnamec1 )
C
Table 8. DB2 V2 SQLDA SQL Types. Numeric Values and Corresponding Symbolic Names
SQL Column Type      SQLTYPE numeric value   SQLTYPE symbolic name
DATE                 384/385                 SQL_TYP_DATE / SQL_TYP_NDATE
TIME                 388/389                 SQL_TYP_TIME / SQL_TYP_NTIME
TIMESTAMP            392/393                 SQL_TYP_STAMP / SQL_TYP_NSTAMP
n/a (see note 2)     400/401                 SQL_TYP_CGSTR / SQL_TYP_NCGSTR
BLOB                 404/405                 SQL_TYP_BLOB / SQL_TYP_NBLOB
CLOB                 408/409                 SQL_TYP_CLOB / SQL_TYP_NCLOB
DBCLOB               412/413                 SQL_TYP_DBCLOB / SQL_TYP_NDBCLOB
VARCHAR              448/449                 SQL_TYP_VARCHAR / SQL_TYP_NVARCHAR
CHAR                 452/453                 SQL_TYP_CHAR / SQL_TYP_NCHAR
LONG VARCHAR         456/457                 SQL_TYP_LONG / SQL_TYP_NLONG
n/a (see note 3)     460/461                 SQL_TYP_CSTR / SQL_TYP_NCSTR
VARGRAPHIC           464/465                 SQL_TYP_VARGRAPH / SQL_TYP_NVARGRAPH
GRAPHIC              468/469                 SQL_TYP_GRAPHIC / SQL_TYP_NGRAPHIC
LONG VARGRAPHIC      472/473                 SQL_TYP_LONGRAPH / SQL_TYP_NLONGRAPH
FLOAT                480/481                 SQL_TYP_FLOAT / SQL_TYP_NFLOAT
REAL (see note 4)    480/481                 SQL_TYP_FLOAT / SQL_TYP_NFLOAT
DECIMAL (see note 5) 484/485                 SQL_TYP_DECIMAL / SQL_TYP_NDECIMAL
INTEGER              496/497                 SQL_TYP_INTEGER / SQL_TYP_NINTEGER
SMALLINT             500/501                 SQL_TYP_SMALL / SQL_TYP_NSMALL
n/a                  804/805                 SQL_TYP_BLOB_FILE / SQL_TYP_NBLOB_FILE
n/a                  808/809                 SQL_TYP_CLOB_FILE / SQL_TYP_NCLOB_FILE
n/a                  812/813                 SQL_TYP_DBCLOB_FILE / SQL_TYP_NDBCLOB_FILE
n/a                  960/961                 SQL_TYP_BLOB_LOCATOR / SQL_TYP_NBLOB_LOCATOR
n/a                  964/965                 SQL_TYP_CLOB_LOCATOR / SQL_TYP_NCLOB_LOCATOR
n/a                  968/969                 SQL_TYP_DBCLOB_LOCATOR / SQL_TYP_NDBCLOB_LOCATOR
Note: These defined types can be found in the sql.h include file located in the include sub-directory of
the sqllib directory. (For example, sqllib/include/sql.h for the C programming language.)
1. For the COBOL programming language, the SQLTYPE name does not use underscore (_) but uses a
hyphen (-) instead.
2. This is a null-terminated graphic string.
3. This is a null-terminated character string.
4. The difference between REAL and DOUBLE in the SQLDA is the length value (4 or 8).
5. Precision is in the first byte. Scale is in the second byte.
You must save the source SQL statements, not the prepared versions. This
means that you must retrieve and then prepare each statement before
executing the version stored in the table. In essence, your application prepares
an SQL statement from a character string and executes this statement
dynamically.
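In embedded SQL form, that sequence might look like the following sketch (the statement-store table and its column names are hypothetical, not from this manual):

```sql
-- Hypothetical statement store: stmt_table(stmt_id, stmt_text).
-- Retrieve the saved source statement into a host variable...
SELECT stmt_text INTO :st FROM stmt_table WHERE stmt_id = :id;
-- ...then prepare and execute it dynamically.
PREPARE s1 FROM :st;
EXECUTE s1;
```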
init_da
Allocates memory for a prepared SQL statement. An internally
described function called SQLDASIZE is used to calculate the proper
amount of memory.
alloc_host_vars
Allocates memory for data from an SQLDA pointer.
free_da
Frees up the memory that has been allocated to use an SQLDA data
structure.
print_var
Prints out the SQLDA SQLVAR variables. This procedure first
determines data type then calls the appropriate subroutines that are
required to print out the data.
display_da
Displays the output of a pointer that has been passed through. All
pertinent information on the structure of the output data is available
from this pointer, as examined in the procedure print_var.
C Example: ADHOC.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlcodes.h>
#include <sqlda.h> 1
#include "utilemb.h"
#ifdef DB268K
/* Need to include ASLM for 68K applications */
#include <LibraryManager.h>
#endif
EXEC SQL INCLUDE SQLCA ; 2
#define SQLSTATE sqlca.sqlstate
int process_statement( char * ) ;
int main( int argc, char *argv[] ) {
int rc ;
char sqlInput[256] ;
char st[1024] ;
EXEC SQL BEGIN DECLARE SECTION ; 3
char userid[9] ;
char passwd[19] ;
EXEC SQL END DECLARE SECTION ;
#ifdef DB268K
/*
Before making any API calls for 68K environment,
need to initialize the Library Manager
*/
InitLibraryManager(0,kCurrentZone,kNormalMemory) ;
atexit(CleanupLibraryManager) ;
#endif
printf( "Sample C program : ADHOC interactive SQL\n" ) ;
/* Initialize the connection to a database. */
if ( argc == 1 ) {
EXEC SQL CONNECT TO sample ;
EMB_SQL_CHECK( "CONNECT TO SAMPLE" ) ;
}
else if ( argc == 3 ) {
strcpy( userid, argv[1] ) ;
strcpy( passwd, argv[2] ) ;
EXEC SQL CONNECT TO sample USER :userid USING :passwd ; 4
EMB_SQL_CHECK( "CONNECT TO SAMPLE" ) ; 5
}
else {
printf( "\nUSAGE: adhoc [userid passwd]\n\n" ) ;
Chapter 5. Writing Dynamic SQL Programs
return( 1 ) ;
} /* endif */
printf( "Connected to database SAMPLE\n" ) ;
/* Enter the continuous command line loop. */
*sqlInput = '\0' ;
while ( ( *sqlInput != 'q' ) && ( *sqlInput != 'Q' ) ) { 6
printf( "Enter an SQL statement or 'quit' to Quit :\n" ) ;
gets( sqlInput ) ;
if ( ( *sqlInput == 'q' ) || ( *sqlInput == 'Q' ) ) break ;
if ( *sqlInput == '\0' ) { /* Don't process the statement */
printf( "No characters entered.\n" ) ;
continue ;
}
strcpy( st, sqlInput ) ;
while ( sqlInput[strlen( sqlInput ) - 1] == '\\' ) {
st[strlen( st ) - 1] = '\0' ;
gets( sqlInput ) ;
strcat( st, sqlInput ) ;
}
/* Process the statement. */
rc = process_statement( st ) ;
}
printf( "Enter 'c' to COMMIT or Any Other key to ROLLBACK the transaction :\n" ) ;
gets( sqlInput ) ;
if ( ( *sqlInput == 'c' ) || ( *sqlInput == 'C' ) ) {
printf( "COMMITING the transactions.\n" ) ;
EXEC SQL COMMIT ; 7
EMB_SQL_CHECK( "COMMIT" ) ;
}
else { /* assume that the transaction is to be rolled back */
printf( "ROLLING BACK the transactions.\n" ) ;
EXEC SQL ROLLBACK ; 8
EMB_SQL_CHECK( "ROLLBACK" ) ;
}
EXEC SQL CONNECT RESET ; 9
EMB_SQL_CHECK( "CONNECT RESET" ) ;
return( 0 ) ;
}
/******************************************************************************
* FUNCTION : process_statement
* This function processes the inputted statement and then prepares the
* procedural SQL implementation to take place.
******************************************************************************/
int process_statement ( char * sqlInput ) {
int counter = 0 ;
struct sqlda * sqldaPointer ;
short sqlda_d ;
EXEC SQL BEGIN DECLARE SECTION ; 3
char st[1024] ;
EXEC SQL END DECLARE SECTION ;
strcpy( st, sqlInput ) ; 10
/* allocate an initial SQLDA temp pointer to obtain information
about the inputted "st" */
init_da( &sqldaPointer, 1 ) ; 11
EXEC SQL PREPARE statement1 from :st ;
/* EMB_SQL_CHECK( "PREPARE" ) ; */
EXEC SQL DESCRIBE statement1 INTO :*sqldaPointer ;
/* Expecting a return code of 0 or SQL_RC_W236,
SQL_RC_W237, SQL_RC_W238, SQL_RC_W239 for cases
where this statement is a SELECT statement. */
if ( SQLCODE != 0
&&
SQLCODE != SQL_RC_W236 &&
SQLCODE != SQL_RC_W237 &&
SQLCODE != SQL_RC_W238 &&
SQLCODE != SQL_RC_W239
) {
/* An unexpected warning/error has occurred. Check the SQLCA. */
EMB_SQL_CHECK( "DESCRIBE" ) ;
} /* end if */
sqlda_d = sqldaPointer->sqld ;
free( sqldaPointer ) ;
if ( sqlda_d > 0 ) { 12
/* this is a SELECT statement, a number of columns
are present in the SQLDA */
if ( SQLCODE == SQL_RC_W236 || SQLCODE == 0)
/* this output only needs a SINGLE SQLDA */
init_da( &sqldaPointer, sqlda_d ) ;
if ( SQLCODE == SQL_RC_W237 ||
SQLCODE == SQL_RC_W238 ||
SQLCODE == SQL_RC_W239 )
/* this output contains columns that need a DOUBLED SQLDA */
init_da( &sqldaPointer, sqlda_d * 2 ) ;
/* need to reassign the SQLDA with the correct number
of columns to the SQL statement */
Note that using a parameter marker with dynamic SQL is like using host
variables with static SQL. In either case, the optimizer does not use
distribution statistics, and may not choose the best access plan.
The rules that apply to parameter markers are listed under the PREPARE
statement in the SQL Reference.
C      varinp.sqc
Java   Varinp.java
COBOL  varinp.sqb
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: VARINP.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
do
{
}
/* end of program : VARINP.SQC */
}
else
{
throw new Exception("\nUsage: java Varinp [username password]\n");
}
// Enable transactions
con.setAutoCommit(false);
// Perform dynamic SQL using JDBC
try
{
PreparedStatement pstmt1 = con.prepareStatement(
"SELECT name, dept FROM staff WHERE job = ? FOR UPDATE OF job"); 1
// set cursor name for the positioned update statement
pstmt1.setCursorName("c1");
pstmt1.setString(1, "Mgr");
ResultSet rs = pstmt1.executeQuery();
PreparedStatement pstmt2 = con.prepareStatement(
"UPDATE staff SET job = ? WHERE CURRENT OF c1");
pstmt2.setString(1, "Clerk");
System.out.print("\n");
while( rs.next() )
{
String name = rs.getString("name");
short dept = rs.getShort("dept");
System.out.println(name + " in dept. " + dept
+ " will be demoted to Clerk");
pstmt2.executeUpdate();
};
rs.close();
pstmt1.close();
pstmt2.close();
}
catch( Exception e )
{
throw e;
}
finally
{
// Rollback the transaction
System.out.println("\nRollback the transaction...");
con.rollback();
System.out.println("Rollback done.");
}
}
catch( Exception e )
{
System.out.println(e);
}
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: VARINP".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
      * Passwords in a CONNECT statement must be entered in a VARCHAR
      * format with the length of the input string.
          inspect passwd-name tallying passwd-length
             for characters before initial " ".
End-Fetch-Loop. exit.
End-Prog.
stop run.
The DB2 Call Level Interface (CLI)
Differences Between DB2 CLI and Embedded SQL
An application that uses an embedded SQL interface requires a precompiler to
convert the SQL statements into code, which is then compiled, bound to the
database, and executed. In contrast, a DB2 CLI application does not have to
be precompiled or bound, but instead uses a standard set of functions to
execute SQL statements and related services at run time.
This difference is important because, traditionally, precompilers have been
specific to each database product, which effectively ties your applications to
that product. DB2 CLI enables you to write portable applications that are
independent of any particular database product. This independence means
DB2 CLI applications do not have to be recompiled or rebound to access
different DB2 databases, including DRDA databases. They just connect to the
appropriate database at run time.
v DB2 CLI supports scrollable cursors. With scrollable cursors, you can scroll
through a static cursor as follows:
Forward by one or more rows
Backward by one or more rows
From the first row by one or more rows
From the last row by one or more rows.
Despite these differences, there is an important common concept between
embedded SQL and DB2 CLI: DB2 CLI can execute any SQL statement that can
be prepared dynamically in embedded SQL.
Note: DB2 CLI can also accept some SQL statements that cannot be prepared
dynamically, such as compound SQL statements.
Table 38 on page 737 lists each SQL statement, and indicates whether or not it
can be executed using DB2 CLI. The table also indicates whether the command
line processor can be used to execute the statement interactively (useful for
prototyping SQL statements).
Each DBMS may have additional statements that you can dynamically
prepare. In this case, DB2 CLI passes the statements to the DBMS. There is
one exception: the COMMIT and ROLLBACK statements can be dynamically
prepared by some DBMSs but are not passed. In this case, use the
SQLEndTran() function to specify either the COMMIT or ROLLBACK
statement.
v DB2 CLI eliminates the need for application controlled, often complex data
areas, such as the SQLDA and SQLCA, typically associated with embedded
SQL applications. Instead, DB2 CLI allocates and controls the necessary
data structures, and provides a handle for the application to reference them.
v DB2 CLI enables the development of multi-threaded thread-safe
applications where each thread can have its own connection and a separate
commit scope from the rest. DB2 CLI achieves this by eliminating the data
areas described above, and associating all such data structures that are
accessible to the application with a specific handle. Unlike embedded SQL,
a multi-threaded CLI application does not need to call any of the context
management DB2 APIs; this is handled by the DB2 CLI driver
automatically.
v DB2 CLI provides enhanced parameter input and fetching capability,
allowing arrays of data to be specified on input, retrieving multiple rows of
a result set directly into an array, and executing statements that generate
multiple result sets.
v DB2 CLI provides a consistent interface to query catalog (Tables, Columns,
Foreign Keys, Primary Keys, etc.) information contained in the various
DBMS catalog tables. The result sets returned are consistent across DBMSs.
This shields the application from catalog changes across releases of database
servers, as well as catalog differences amongst different database servers;
thereby saving applications from writing version specific and server specific
catalog queries.
v Extended data conversion is also provided by DB2 CLI, requiring less
application code when converting information between various SQL and C
data types.
v DB2 CLI incorporates both the ODBC and X/Open CLI functions, both of
which are accepted industry specifications. DB2 CLI is also aligned with the
emerging ISO CLI standard. Knowledge that application developers invest
in these specifications can be applied directly to DB2 CLI development, and
vice versa. This interface is intuitive to grasp for those programmers who
are familiar with function libraries but know little about product specific
methods of embedding SQL statements into a host language.
v DB2 CLI provides the ability to retrieve multiple rows and result sets
generated from a stored procedure residing on a DB2 Universal Database
(or DB2 for MVS/ESA version 5 or later) server. However, note that this
capability exists for Version 5 DB2 Universal Database clients using
embedded SQL if the stored procedure resides on a server accessible from a
DataJoiner Version 2 server.
v DB2 CLI supports server-side scrollable cursors that can be used in
conjunction with array output. This is useful in GUI applications that
display database information in scroll boxes that make use of the Page Up,
Page Down, Home and End keys. You can declare a read-only cursor as
scrollable then move forward or backward through the result set by one or
more rows. You can also fetch rows by specifying an offset from:
The current row
The beginning or end of the result set
A specific row you have previously set with a bookmark.
v DB2 CLI applications can dynamically describe parameters in an SQL
statement the same way that CLI and Embedded SQL applications describe
result sets. This enables CLI applications to dynamically process SQL
statements that contain parameter markers without knowing the data type
of those parameter markers in advance. When the SQL statement is
prepared, describe information is returned detailing the data types of the
parameters.
It is also possible to write a mixed application that uses both DB2 CLI and
embedded SQL, taking advantage of their respective benefits. In this case,
DB2 CLI is used to provide the base application, with key modules written
using static SQL for performance or security reasons. This complicates the
application design, and should only be used if stored procedures do not meet
the application's requirements. For more information, refer to the section on
Mixing Embedded SQL and DB2 CLI in the CLI Guide and Reference.
Ultimately, the decision on when to use each interface will be based on
individual preferences and previous experience rather than on any one factor.
Generated Columns . . . . . . . . . . . . 176
Identity Columns  . . . . . . . . . . . . 176
Generating Sequential Values  . . . . . . 177
  Controlling Sequence Behavior . . . . . 179
  Improving Performance with Sequence
    Objects . . . . . . . . . . . . . . . 180
  Comparing Sequence Objects and Identity
    Columns . . . . . . . . . . . . . . . 181
Declared Temporary Tables . . . . . . . . 181
Controlling Transactions with Savepoints    183
Generated Columns
A generated column is a column that derives the values for each row from an
expression, rather than from an insert or update operation. While combining
an update trigger and an insert trigger can achieve a similar effect, using a
generated column guarantees that the derived value is consistent with the
expression.
To create a generated column in a table, use the GENERATED ALWAYS AS
clause for the column and include the expression from which the value for the
column will be derived. You can include the GENERATED ALWAYS AS clause
in ALTER TABLE or CREATE TABLE statements. The following example
creates a table with two regular columns, c1 and c2, and two generated
columns, c3 and c4, that are derived from the regular columns of the
table.
CREATE TABLE T1(c1 INT, c2 DOUBLE,
c3 DOUBLE GENERATED ALWAYS AS (c1 + c2),
c4 GENERATED ALWAYS AS
(CASE
WHEN c1 > c2 THEN 1
ELSE NULL
END)
);
Identity Columns
Identity columns provide DB2 application developers with an easy way of
automatically generating a numeric column value for every row in a table.
You can have this value generated as a unique value, then define the identity
column as the primary key for the table. To create an identity column, include
the IDENTITY clause in the CREATE TABLE or ALTER TABLE statement.
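A minimal sketch of such a definition (the table and column names here are illustrative, not from this manual):

```sql
CREATE TABLE inventory
  (item_id INT NOT NULL GENERATED ALWAYS AS IDENTITY
           (START WITH 1, INCREMENT BY 1),
   descr   VARCHAR(40),
   PRIMARY KEY (item_id))
```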
increments the number, and then commits the transaction to unlock the
counter. Unfortunately, this design only allows a single transaction to
increment the counter at a time.
In contrast, if you use an identity column to automatically generate primary
keys, the application can achieve much higher levels of concurrency. With
identity columns, DB2 maintains the counter so that transactions do not have
to lock the counter. Applications that use identity columns can perform better
because an uncommitted transaction that has incremented the counter does
not prevent other subsequent transactions from also incrementing the counter.
The counter for the identity column is incremented or decremented
independently of the transaction. If a given transaction increments an identity
counter two times, that transaction may see a gap in the two numbers that are
generated because there may be other transactions concurrently incrementing
the same identity counter.
An identity column may appear to have generated gaps in the counter, as the
result of a transaction that was rolled back, or because the database cached a
range of values that have been deactivated (normally or abnormally) before all
the cached values were assigned.
To retrieve the generated value after inserting a new row into a table with an
identity column, use the identity_val_local() function.
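For example (table name hypothetical), the function returns the value DB2 generated for the row just inserted in this session:

```sql
INSERT INTO inventory (descr) VALUES ('bicycle frame');
VALUES IDENTITY_VAL_LOCAL();
```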
For more information on identity columns, refer to the Administration Guide.
For more information on the IDENTITY clause of the CREATE TABLE and
ALTER TABLE statements, refer to the SQL Reference.
To generate the first value in the application session for the sequence object,
issue a VALUES statement using the NEXTVAL expression:
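For a sequence object named id_values (the name used later in this section), the statement would be:

```sql
VALUES NEXTVAL FOR id_values
```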
To display the current value of the sequence object, issue a VALUES statement
using the PREVVAL expression:
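Again for the id_values sequence object, the statement would be:

```sql
VALUES PREVVAL FOR id_values
```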
You can repeatedly retrieve the current value of the sequence object, and the
value that the sequence object returns does not change until you issue a
NEXTVAL expression. In the following example, the PREVVAL expression
returns a value of 1, until the NEXTVAL expression in the application process
increments the value of the sequence object:
1
-----------
          2
1 record(s) selected.
To update the value of a column with the next value of the sequence object,
include the NEXTVAL expression in the UPDATE statement, as follows:
UPDATE staff
  SET id = NEXTVAL FOR id_values
  WHERE id = 350
To insert a new row into a table using the next value of the sequence object,
include the NEXTVAL expression in the INSERT statement, as follows:
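A sketch of such an INSERT, reusing the staff table and the id_values sequence object from the UPDATE example (the column list and literal values are assumptions for illustration):

```sql
INSERT INTO staff (id, name, dept, job)
  VALUES (NEXTVAL FOR id_values, 'Kandil', 51, 'Mgr')
```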
Data type
The AS clause of the CREATE SEQUENCE statement specifies the
numeric data type of the sequence object. The data type, as specified
in the SQL Limits appendix of the SQL Reference, determines the
possible minimum and maximum values of the sequence object. You
cannot change the data type of a sequence object; instead, you must
drop the sequence object by issuing the DROP SEQUENCE statement
and then issue a CREATE SEQUENCE statement with the new data type.
Start value
The START WITH clause of the CREATE SEQUENCE statement sets
the initial value of the sequence object. The RESTART WITH clause of
the ALTER SEQUENCE statement resets the value of the sequence
object to a specified value.
Minimum value
The MINVALUE clause sets the minimum value of the sequence
object.
Maximum value
The MAXVALUE clause sets the maximum value of the sequence
object.
Increment value
The INCREMENT BY clause sets the value that each NEXTVAL
expression adds to the current value of the sequence object. To
decrement the value of the sequence object, specify a negative value.
Sequence cycling
The CYCLE clause causes the value of a sequence object that reaches
its maximum or minimum value to generate its respective minimum
value or maximum value on the following NEXTVAL expression.
For example, to create a sequence object called id_values that starts with a
minimum value of 0, has a maximum value of 1000, increments by 2 with
each NEXTVAL expression, and returns to its minimum value when the
maximum value is reached, issue the following statement:
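Assembling those clauses gives the following statement, a sketch consistent with the description above:

```sql
CREATE SEQUENCE id_values
  START WITH 0
  INCREMENT BY 2
  MINVALUE 0
  MAXVALUE 1000
  CYCLE
```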
Sequence objects avoid the locking issues that are associated with the
single-column table approach and can cache sequence values in memory to
improve DB2 response time. To maximize the performance of applications that
use sequence objects, ensure that your sequence object caches an appropriate
number of sequence values. The CACHE clause of the CREATE SEQUENCE
and ALTER SEQUENCE statements specifies the maximum number of
sequence values that DB2 generates and stores in memory.
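For example, to have DB2 preallocate 20 values at a time for the id_values sequence object (the cache size here is arbitrary):

```sql
ALTER SEQUENCE id_values CACHE 20
```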
To select the contents of the column1 column from the declared temporary
table created in the previous example, use the following statement:
SELECT column1 FROM SESSION.TT1;
Note that DB2 also enables you to create persistent tables with the SESSION
schema. If you create a persistent table with the qualified name SESSION.TT3,
you can then create a declared temporary table with the qualified name
SESSION.TT3. In this situation, DB2 always resolves references to persistent
and declared temporary tables with identical qualified names to the declared
temporary table. To avoid confusing persistent tables with declared temporary
tables, you should not create persistent tables using the SESSION schema.
If you create an application that includes a static SQL reference to a table,
view, or alias qualified with the SESSION schema, the DB2 precompiler does
not compile that statement at bind time and marks the statement as needing
compilation. At run time, DB2 compiles the statement. This behavior is called
incremental binding. DB2 automatically performs incremental binding for static
SQL references to tables, views, and aliases qualified with the SESSION
schema. You do not need to specify the VALIDATE RUN option on the BIND
or PRECOMPILE command to enable incremental binding for these
statements.
If you issue a ROLLBACK statement for a transaction that includes a
DECLARE GLOBAL TEMPORARY TABLE statement, DB2 drops the declared
temporary table. If you issue a DROP TABLE statement for a declared
temporary table, issuing a ROLLBACK statement for that transaction only
restores an empty declared temporary table. A ROLLBACK of a DROP TABLE
statement does not restore the rows that existed in the declared temporary
table.
The default behavior of a declared temporary table is to delete all rows from
the table when you commit a transaction. However, if one or more WITH
HOLD cursors are still open on the declared temporary table, DB2 does not
delete the rows from the table when you commit a transaction. To avoid
deleting all rows when you commit a transaction, create the temporary table
using the ON COMMIT PRESERVE ROWS clause in the DECLARE GLOBAL
TEMPORARY TABLE statement.
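A sketch of such a declaration, borrowing the column definition from the SESSION.TT1 example earlier in this section:

```sql
DECLARE GLOBAL TEMPORARY TABLE tt1
  (column1 INT)
  ON COMMIT PRESERVE ROWS
  NOT LOGGED
```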
If you modify the contents of a declared temporary table using an INSERT,
UPDATE, or DELETE statement within a transaction, and roll back that
transaction, DB2 deletes all of the rows of the declared temporary table. If you
attempt to modify the contents of a declared temporary table using an
INSERT, UPDATE, or DELETE statement, and the statement fails, DB2 deletes
all of the rows of the declared temporary table.
In a partitioned environment, when a node failure is encountered, all declared
temporary tables that have a partition on the failed node become unusable.
Any subsequent access to those unusable declared temporary tables returns an
error (SQL1477N). When your application encounters an unusable declared
temporary table, the application can either drop the table or recreate the table
by specifying the WITH REPLACE clause in the DECLARE GLOBAL
TEMPORARY TABLE statement.
Declared temporary tables are subject to a number of restrictions. For
example, you cannot define indexes, aliases, or views for declared temporary
tables. You cannot use IMPORT and LOAD to populate declared temporary
tables. For the complete syntax of the DECLARE GLOBAL TEMPORARY
TABLE statement, and a complete list of the restrictions on declared
temporary tables, refer to the SQL Reference.
Savepoints give you better performance and a cleaner application design than
using multiple COMMIT and ROLLBACK statements. When you issue a
COMMIT statement, DB2 must do some extra work to commit the current
transaction and start a new transaction. Savepoints allow you to break a
transaction into smaller units or steps without the added overhead of multiple
COMMIT statements. The following example demonstrates the performance
penalty incurred by using multiple transactions instead of savepoints:
-- Pseudo-SQL:
IF SQLSTATE = "No Power Cord"
-- roll back current transaction, start new transaction
ROLLBACK
ELSE
-- commit current transaction, start new transaction
COMMIT
When you issue a compound SQL block, DB2 simultaneously acquires the
locks needed for the entire compound SQL block of statements. When you set
an application savepoint, DB2 acquires locks as each statement in the scope of
the savepoint is issued. The locking behavior of savepoints can lead to
Chapter 6. Common DB2 Application Techniques
For example, suppose the application sets a savepoint and issues two INSERT
statements within the scope of the savepoint. The application uses an IF
statement that, when true, calls the function add_batteries(). The
add_batteries() function issues an SQL statement that in this context is
included within the scope of the savepoint. Finally, the application either rolls
back the work performed within the savepoint (including the SQL statement
issued by the add_batteries() function), or commits the work performed in the
entire transaction:
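A sketch of the sequence just described, with hypothetical table and item names:

```sql
SAVEPOINT sp1 ON ROLLBACK RETAIN CURSORS;
INSERT INTO order_item VALUES ('flashlight');
INSERT INTO order_item VALUES ('power cord');
-- if needed, add_batteries() issues a further INSERT,
-- which also falls within the scope of sp1
ROLLBACK TO SAVEPOINT sp1;  -- undoes all of the above INSERTs
-- or: COMMIT;              -- commits the entire transaction
```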
RELEASE SAVEPOINT
To release a savepoint, issue a RELEASE SAVEPOINT SQL statement.
If you do not explicitly release a savepoint with a RELEASE
SAVEPOINT SQL statement, it is released at the end of the
transaction. For example:
RELEASE SAVEPOINT savepoint1
ROLLBACK TO SAVEPOINT
To roll back to a savepoint, issue a ROLLBACK TO SAVEPOINT SQL
statement. For example:
ROLLBACK TO SAVEPOINT savepoint1
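Taken together, a complete savepoint lifecycle looks like the following sketch, using the savepoint name from the examples above:

```sql
SAVEPOINT savepoint1 ON ROLLBACK RETAIN CURSORS;
-- ... SQL statements within the scope of the savepoint ...
ROLLBACK TO SAVEPOINT savepoint1;  -- undo work since the savepoint
RELEASE SAVEPOINT savepoint1;      -- then release it explicitly
```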
Savepoint Restrictions
DB2 Universal Database places the following restrictions on your use of
savepoints in applications:
Atomic compound SQL
You cannot use savepoints within atomic compound SQL, and you
cannot use atomic compound SQL within a savepoint.
Nested Savepoints
DB2 does not support the use of a savepoint within another
savepoint.
Triggers
DB2 does not support the use of savepoints in triggers.
SET INTEGRITY statement
Within a savepoint, DB2 treats SET INTEGRITY statements as DDL
statements. For more information on using DDL in savepoints, see
Savepoints and Data Definition Language (DDL) on page 188.
When your application issues the first FETCH on table t1, the DB2 server
sends a block of column values (1, 2 and 3) to the client application. These
column values are stored locally by the client. When your application issues
the ROLLBACK TO SAVEPOINT SQL statement, column values '2' and '3' are
189
deleted from the table. After the ROLLBACK TO SAVEPOINT statement, the
next FETCH from the table returns column value '2' even though that value
no longer exists in the table. The application receives this value because it
takes advantage of the cursor blocking option to improve performance and
accesses the data that it has stored locally.
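The scenario can be reconstructed as the following sketch; the table name t1 comes from the text, and cursor blocking is assumed to be enabled at bind time:

```sql
CREATE TABLE t1 (c1 INTEGER);
INSERT INTO t1 VALUES (1);
SAVEPOINT sp1 ON ROLLBACK RETAIN CURSORS;
INSERT INTO t1 VALUES (2);
INSERT INTO t1 VALUES (3);
-- OPEN a blocking cursor and FETCH: values 1, 2, 3 are sent
-- to the client in one block and cached locally
ROLLBACK TO SAVEPOINT sp1;  -- deletes rows 2 and 3 on the server
-- the next FETCH still returns 2 from the client's local block
```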
For more information on precompiling and binding applications, refer to the
Command Reference.
You can use the DB2 Stored Procedure Builder (SPB) to help develop Java or
SQL stored procedures. You can integrate SPB with popular application
development tools, including Microsoft Visual Studio and IBM VisualAge for
Java, or you can use it as a standalone utility. To help you create your stored
procedures, SPB provides design assistants that guide you through basic
design patterns, help you create SQL queries, and estimate the performance
cost of invoking a stored procedure.
For more information on the DB2 Stored Procedure Builder, see Chapter 9.
IBM DB2 Stored Procedure Builder on page 269.
All database access must go across the network, which in some cases results
in poor performance.
Using stored procedures allows a client application to pass control to a stored
procedure on the database server. This allows the stored procedure to perform
intermediate processing on the database server, without transmitting
194
unnecessary data across the network. Only those records that are actually
required at the client need to be transmitted. This can result in reduced
network traffic and better overall performance. Figure 4 shows this feature.
Figure 4. Application Using a Stored Procedure
You must register each stored procedure for the previously listed parameter
styles with a CREATE PROCEDURE statement. The CREATE PROCEDURE
statement specifies the procedure name, arguments, location, and parameter
style of each stored procedure. These parameter styles offer increased
portability and scalability of your stored procedure code across the DB2
family.
For information on the DB2DARI and DB2GENERAL parameter styles, the
only styles supported by versions of DB2 prior to DB2 Universal Database
Version 6, see Appendix C. DB2DARI and DB2GENERAL Stored Procedures
and UDFs on page 765.
Client Application
The client application performs several steps before calling the stored
procedure. It must be connected to a database, and it must declare, allocate,
and initialize host variables or an SQLDA structure. The SQL CALL statement
can accept a series of host variables, or an SQLDA structure. Refer to the SQL
Reference for descriptions of the SQL CALL statement and the SQLDA
structure. For information on using the SQLDA structure in a client
application, see Appendix C. DB2DARI and DB2GENERAL Stored
Procedures and UDFs on page 765.
Allocating Host Variables
Use the following steps to allocate the necessary input host variables on the
client side of a stored procedure:
1. Declare enough host variables for all input variables that will be passed to
the stored procedure.
2. Determine which input host variables can also be used to return values
back from the stored procedure to the client.
3. Declare host variables for any additional values returned from the stored
procedure to the client.
When writing the client portion of your stored procedure, you should reuse
as many of the host variables as possible by using them for both input and
output. This increases the efficiency of handling multiple host variables. For
example, when returning an SQLCODE to the client from the stored
procedure, try to use an input host variable that is declared as an
INTEGER to return the SQLCODE.
Note: Do not allocate storage for these structures on the database server. The
database manager automatically allocates duplicate storage based upon
the storage allocated by the client application. Do not alter any storage
pointers for the input/output parameters on the stored procedure side.
Attempting to replace a pointer with a locally created storage pointer
will cause an error with SQLCODE -1133 (SQLSTATE 39502).
Calling Stored Procedures
You can invoke a stored procedure that resides on the database server by
using the SQL CALL statement. Refer to the SQL Reference for a complete
description of the CALL statement. Using the CALL statement is the
recommended method of invoking stored procedures.
Running the Client Application
The client application must ensure that a database connection has been made
before invoking the stored procedure, or an error is returned. After the
database connection and data structure initialization, the client application
calls the stored procedure and passes any required data. The application
disconnects from the database. Note that you can code SQL statements in any
of the above steps.
The CREATE PROCEDURE statement identifies each stored procedure by its:
v Procedure name
v Mode, name, and SQL data type of each parameter
v EXTERNAL name and location
v PARAMETER STYLE
However, DB2 will fail to register the second stored procedure in the
following example because it has the same number of parameters as the first
stored procedure with the same name:
CREATE PROCEDURE OVERLOADFAIL (IN VAR1 INTEGER) ...
CREATE PROCEDURE OVERLOADFAIL (IN VAR2 VARCHAR(15)) ...
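By contrast, DB2 successfully registers stored procedures that share a name but accept different numbers of parameters; a sketch with a hypothetical procedure name:

```sql
CREATE PROCEDURE OVERLOADOK (IN VAR1 INTEGER) ...
CREATE PROCEDURE OVERLOADOK (IN VAR1 INTEGER, IN VAR2 VARCHAR(15)) ...
```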
IN
Passes a value to the stored procedure from the client application, but
returns no value to the client application when control returns to the
client application
OUT
Stores a value that is passed to the client application when the stored
procedure terminates
INOUT
Passes a value to the stored procedure from the client application, and
returns a value to the client application when the stored procedure
terminates
Location: The EXTERNAL clause of the CREATE PROCEDURE statement
tells the database manager the location of the library that contains the stored
procedure. If you do not specify an absolute path for the library, or a jar name
for Java stored procedures, the database manager searches the function
directory. The function directory is a directory defined for your operating system
as follows:
UNIX operating systems
sqllib/function
OS/2 or Windows 32-bit operating systems
instance_name\function, where instance_name represents the value of
the DB2INSTPROF instance-specific registry setting. If DB2INSTPROF
is not set, instance_name represents the value of the %DB2PATH%
environment variable. The default value of the %DB2PATH%
environment variable is the path in which you installed DB2.
If DB2 does not find the stored procedure in instance_name\function,
DB2 searches the directories defined by the PATH and LIBPATH
environment variables.
For example, the function directory for a Windows 32-bit operating
system server with DB2 installed in the C:\sqllib directory, where you
have not set the DB2INSTPROF registry setting, is:
C:\sqllib\function
Note: You should give your library a name that is different from the stored
procedure name. If DB2 locates the library in the search path, DB2
executes any stored procedure with the same name as the library which
contains the stored procedure as a FENCED DB2DARI procedure.
For LANGUAGE C stored procedures, specify:
v The library name, taking the form of either:
A library found in the function directory
An absolute path including the library name
v The entry point for the stored procedure in the library. If you do not specify
an entry point, the database manager will use the default entry point. The
IBM XLC compiler on AIX allows you to specify any exported function
name in the library as the default entry point. This is the function that is
called if only the library name is specified in a stored procedure call or
CREATE FUNCTION statement. To specify a default entry point, use the -e
option in the link step. For example: -e funcname makes funcname the
default entry point. On other UNIX platforms, no such mechanism exists,
so the default entry point is assumed by DB2 to be the same name as the
library itself.
On a UNIX-based system, for example, mymod!proc8 directs the database
manager to the sqllib/function/mymod library and to use entry point proc8
within that library. On Windows 32-bit and OS/2 operating systems
mymod!proc8 directs the database manager to load the mymod.dll file from the
function directory and call the proc8() procedure in the dynamic link library
(DLL).
For LANGUAGE JAVA stored procedures, use the following syntax:
[<jar-file-name>:]<class-name>.<method-name> (java-method-signature)
The following list defines the EXTERNAL keywords for Java stored
procedures:
jar-file-name
If a jar file installed in the database contains the stored procedure
method, you must include this value. The keyword represents the
name of the jar file, and is delimited by a colon (:). If you do not
specify a jar file name, the database manager looks for the class in the
function directory. For more information on installing jar files, see
Java Stored Procedures and UDFs on page 668.
class-name
The name of the class that contains the stored procedure method. If
the class is part of a package, you must include the complete package
name as a prefix.
method-name
The name of the stored procedure method.
java-method-signature
A list of the Java parameter data types for the method. These data
types must correspond to the default Java type mapping for the
signature specified after the procedure or function name. For example,
the default Java mapping of the SQL type INTEGER is int, not
java.lang.Integer. For a list of the default Java type mappings, see
Table 32 on page 640.
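For example, using the GET_LASTNAME scenario described later in this chapter, a registration might look like the following sketch; the parameter lengths are assumptions:

```sql
CREATE PROCEDURE GET_LASTNAME (IN empno VARCHAR(6), OUT lastname CHAR(15))
  LANGUAGE JAVA
  PARAMETER STYLE JAVA
  FENCED
  EXTERNAL NAME 'myJar:StoredProcedure.getname'
```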
LANGUAGE COBOL
The database manager calls the stored procedure using COBOL calling
and linkage conventions. Use this option for COBOL stored
procedures.
Passing Parameters as Subroutines: C stored procedures of PROGRAM
TYPE SUB accept arguments as subroutines. Pass numeric data type
parameters as pointers. Pass character data types as arrays of the appropriate
length. For example, the following C stored procedure signature accepts
parameters of type INTEGER, SMALLINT, and CHAR(3):
int storproc (sqlint32 *arg1, short *arg2, char arg3[4])
NO DBINFO
FENCED
READS SQL DATA
PROGRAM TYPE MAIN
EXTERNAL NAME 'spserver!mainexample'
The following code for the stored procedure copies the value of argv[1] into
the CHAR(8) host variable injob, then copies the value of the DOUBLE host
variable outsalary into argv[2] and returns the SQLCODE as argv[3]:
EXEC SQL BEGIN DECLARE SECTION;
char injob[9];
double outsalary;
EXEC SQL END DECLARE SECTION;
SQL_API_RC SQL_API_FN main_example (int argc, char **argv)
{
EXEC SQL INCLUDE SQLCA;
/* argv[0] contains the procedure name, so parameters start at argv[1] */
strcpy (injob, (char *)argv[1]);
EXEC SQL SELECT AVG(salary)
INTO :outsalary
FROM employee
WHERE job = :injob;
memcpy ((double *)argv[2], (double *)&outsalary, sizeof(double));
memcpy ((sqlint32 *)argv[3], (sqlint32 *)&SQLCODE, sizeof(sqlint32));
return (0);
} /* end main_example function */
GENERAL
The stored procedure receives parameters as host variables from the
CALL statement in the client application. The stored procedure does
not directly pass null indicators to the client application. You can only
use GENERAL when you also specify the LANGUAGE C or
LANGUAGE COBOL option.
DB2 Universal Database for OS/390 compatibility note: GENERAL is
the equivalent of SIMPLE.
PARAMETER STYLE GENERAL stored procedures accept parameters
in the manner indicated by the value of the PROGRAM TYPE clause.
The following example demonstrates a PARAMETER STYLE
GENERAL stored procedure that accepts two parameters using
PROGRAM TYPE SUB:
SQL_API_RC SQL_API_FN one_result_set_to_client
(double *insalary, sqlint32 *out_sqlerror)
{
EXEC SQL INCLUDE SQLCA;
EXEC SQL WHENEVER SQLERROR GOTO return_error;
EXEC SQL BEGIN DECLARE SECTION;
double l_insalary;
EXEC SQL END DECLARE SECTION;
l_insalary = *insalary;
*out_sqlerror = 0;
EXEC SQL DECLARE c3 CURSOR FOR
SELECT name, job, CAST(salary AS INTEGER)
FROM staff
WHERE salary > :l_insalary
ORDER BY salary;
EXEC SQL OPEN c3;
/* Leave cursor open to return result set */
return (0);
/* Copy SQLCODE to OUT parameter if SQL error occurs */
return_error:
{
*out_sqlerror = SQLCODE;
EXEC SQL WHENEVER SQLERROR CONTINUE;
return (0);
}
} /* end one_result_set_to_client function */
GENERAL WITH NULLS
The stored procedure receives parameters as host variables, along
with a null indicator for each parameter. You can only use GENERAL
WITH NULLS when you also specify the LANGUAGE C or
LANGUAGE COBOL option.
DB2 Universal Database for OS/390 compatibility note: GENERAL
WITH NULLS is the equivalent of SIMPLE WITH NULLS.
PARAMETER STYLE GENERAL WITH NULLS stored procedures
accept parameters in the manner indicated by the value of the
PROGRAM TYPE clause, and allocate an array of null indicators with
one element per declared parameter. The following SQL registers a
PARAMETER STYLE GENERAL WITH NULLS stored procedure that
passes one INOUT parameter and two OUT parameters using
PROGRAM TYPE SUB:
CREATE PROCEDURE INOUT_PARAM (INOUT medianSalary DOUBLE,
OUT errorCode INTEGER, OUT errorLabel CHAR(32))
DYNAMIC RESULT SETS 0
LANGUAGE C
PARAMETER STYLE GENERAL WITH NULLS
NO DBINFO
FENCED
MODIFIES SQL DATA
PROGRAM TYPE SUB
EXTERNAL NAME 'spserver!inout_param'
The following C code demonstrates how to declare and use the null
indicators required by a GENERAL WITH NULLS stored procedure:
SQL_API_RC SQL_API_FN inout_param (double *inoutMedian,
sqlint32 *out_sqlerror, char buffer[33], sqlint16 nullinds[3])
{
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
/* Host variables used below */
double medianSalary;
sqlint32 numRecords;
EXEC SQL END DECLARE SECTION;
EXEC SQL WHENEVER SQLERROR GOTO return_error;
if (nullinds[0] < 0)
{
/* NULL value was received as input, so return NULL output */
nullinds[0] = -1;
nullinds[1] = -1;
nullinds[2] = -1;
}
else
{
int counter = 0;
*out_sqlerror = 0;
medianSalary = *inoutMedian;
strcpy(buffer, "DECLARE inout CURSOR");
EXEC SQL DECLARE inout CURSOR FOR
SELECT CAST(salary AS DOUBLE) FROM staff
WHERE salary > :medianSalary
ORDER BY salary;
nullinds[1] = 0;
nullinds[2] = 0;
strcpy(buffer, "SELECT COUNT INTO numRecords");
EXEC SQL SELECT COUNT(*) INTO :numRecords
FROM staff
WHERE salary > :medianSalary;
if (numRecords != 0)
/* At least one record was found */
{
strcpy(buffer, "OPEN inout");
EXEC SQL OPEN inout;
strcpy(buffer, "FETCH inout");
while (counter < (numRecords / 2 + 1)) {
EXEC SQL FETCH inout INTO :medianSalary;
*inoutMedian = medianSalary;
counter = counter + 1;
}
}
else /* No records were found */
{
/* Return 100 to indicate NOT FOUND error */
*out_sqlerror = 100;
}
}
return (0);
/* Copy SQLCODE to OUT parameter if SQL error occurs */
return_error:
{
*out_sqlerror = SQLCODE;
EXEC SQL WHENEVER SQLERROR CONTINUE;
return (0);
}
} /* end inout_param function */
DB2SQL
Your C function definition for a DB2SQL stored procedure must
include, in addition to the declared parameters, parameters for the
null indicators, the SQLSTATE, the qualified and specific names of the
stored procedure, and the diagnostic message.
v You can set the value of the DB2SQL SQLSTATE (CHAR(5)) and
diagnostic message (null-terminated CHAR(70)) parameters to
return a customized value in the SQLCA to the client.
For example, the following embedded C stored procedure
demonstrates the coding style for PARAMETER STYLE DB2SQL
stored procedures:
SQL_API_RC SQL_API_FN db2sql_example (
char injob[9],          /* Input - CHAR(8)  */
double *salary,         /* Output - DOUBLE  */
sqlint16 nullinds[2],
char sqlst[6],
char qualname[28],
char specname[19],
char diagmsg[71]
)
{
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
/* Host variables used below */
double outsalary;
sqlint16 outsalaryind;
EXEC SQL END DECLARE SECTION;
if (nullinds[0] < 0)
{
/* NULL value was received as input, so return NULL output */
nullinds[1] = -1;
/* Set custom SQLSTATE to return to client. */
strcpy(sqlst, "38100");
/* Set custom message to return to client. */
strcpy(diagmsg, "Received null input on call to DB2SQL_EXAMPLE.");
}
else
{
EXEC SQL SELECT (CAST(AVG(salary) AS DOUBLE))
INTO :outsalary INDICATOR :outsalaryind
FROM employee
WHERE job = :injob;
*salary = outsalary;
nullinds[1] = outsalaryind;
}
return (0);
} /* end db2sql_example function */
DB2GENERAL
The stored procedure uses a parameter passing convention that is
only supported by DB2 Java stored procedures. You can only use
DB2GENERAL when you also specify the LANGUAGE JAVA option.
For increased portability, you should write Java stored procedures
using the PARAMETER STYLE JAVA conventions. See Appendix C.
DB2DARI and DB2GENERAL Stored Procedures and UDFs on
page 765 for more information on writing DB2GENERAL parameter
style stored procedures.
DB2DARI
The stored procedure uses a parameter passing convention that
conforms with C language calling and linkage conventions. This
option is only supported by DB2 Universal Database, and can only be
used when you also specify the LANGUAGE C option.
To increase portability across the DB2 family, you should write your
LANGUAGE C stored procedures using the GENERAL or GENERAL
WITH NULLS parameter styles. If you want to write DB2DARI
parameter style stored procedures, see Appendix C. DB2DARI and
DB2GENERAL Stored Procedures and UDFs on page 765.
Passing a DBINFO Structure: For LANGUAGE C stored procedures with a
PARAMETER STYLE of GENERAL, GENERAL WITH NULLS, or DB2SQL,
you have the option of writing your stored procedure to accept an additional
parameter. You can specify DBINFO in the CREATE PROCEDURE statement
to instruct the client application to pass a DBINFO structure containing
information about the DB2 client to the stored procedure, along with the call
parameters. The DBINFO structure contains the following values:
Database name
The name of the database to which the client is connected.
Application authorization ID
The application run-time authorization ID.
Code page
The code page of the database.
Schema name
Not applicable to stored procedures.
Table name
Not applicable to stored procedures.
Column name
Not applicable to stored procedures.
Database version and release
The version, release, and modification level of the database server
invoking the stored procedure.
Platform
The platform of the database server.
Table function result column numbers
Not applicable to stored procedures.
For more information on the DBINFO structure, see DBINFO Structure on
page 404.
Variable Declaration and CREATE PROCEDURE Examples
The following examples demonstrate the stored procedure source code and
CREATE PROCEDURE statements you would use in hypothetical scenarios
with the SAMPLE database.
Using IN and OUT Parameters: Assume that you want to create a Java
stored procedure GET_LASTNAME that, given empno (SQL type VARCHAR),
returns lastname (SQL type CHAR) from the EMPLOYEE table in the SAMPLE
database. You will create the procedure as the getname method of the Java
class StoredProcedure, contained in the JAR installed as myJar. Finally, you
will call the stored procedure with a client application coded in C.
1. Declare two host variables in your stored procedure source code:
String empid;
String name;
...
#sql { SELECT lastname INTO :name FROM employee WHERE empno=:empid }
Using INOUT Parameters: For the following example, assume that you want
to create a C stored procedure GET_MANAGER that, given deptnumb (SQL
type SMALLINT), returns manager (SQL type SMALLINT) from the ORG table
in the SAMPLE database.
1. Since deptnumb and manager are both of SQL data type SMALLINT, you
can declare a single variable onevar in your stored procedure that receives
a value from and returns a value to the client application:
EXEC SQL BEGIN DECLARE SECTION;
short onevar = 0;
EXEC SQL END DECLARE SECTION;
3. Call the stored procedure from your client application written in Java:
short onevar = 0;
...
#sql { CALL GET_MANAGER (:INOUT onevar) };
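The registration for GET_MANAGER might look like the following sketch; the EXTERNAL name is hypothetical:

```sql
CREATE PROCEDURE GET_MANAGER (INOUT onevar SMALLINT)
  DYNAMIC RESULT SETS 0
  LANGUAGE C
  PARAMETER STYLE GENERAL
  FENCED
  EXTERNAL NAME 'spserver!get_manager'
```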
CONTAINS SQL
Indicates that SQL statements that neither read nor modify SQL data
can be executed by the stored procedure. If the stored procedure
attempts to execute an SQL statement that reads or modifies SQL
data, the statement returns SQLSTATE 38004. Statements that are not
supported in any stored procedure return SQLSTATE 38003.
READS SQL DATA
Indicates that some SQL statements that do not modify SQL data can
be executed by the stored procedure. If the stored procedure attempts
to execute an SQL statement that modifies data, the statement returns
SQLSTATE 38002. Statements that are not supported in any stored
procedure return SQLSTATE 38003.
MODIFIES SQL DATA
Indicates that the stored procedure can execute any SQL statement
except statements that are not supported in stored procedures. If the
stored procedure attempts to execute an SQL statement that is not
supported in a stored procedure, the statement returns SQLSTATE
38003.
For more information on the CREATE PROCEDURE statement, refer to the
SQL Reference.
Nested Stored Procedures
Nested stored procedures are stored procedures that call another stored
procedure. You can use this technique in your DB2 applications under the
following restrictions:
v the stored procedures must be cataloged as LANGUAGE C or LANGUAGE
SQL.
v the calling stored procedure can only call a stored procedure that is
cataloged using the same LANGUAGE clause. For nested calls only,
LANGUAGE C and LANGUAGE SQL are considered the same language.
For example, a LANGUAGE C stored procedure can call an SQL procedure.
v the calling stored procedure cannot call a stored procedure that is cataloged
with a higher SQL data access level. For example, a stored procedure
cataloged with CONTAINS SQL data access can call a stored procedure
cataloged with NO SQL or CONTAINS SQL data access, but cannot call a
stored procedure cataloged with READS SQL DATA or MODIFIES SQL
DATA.
v up to 16 levels of nested stored procedure calls are supported. For example,
a scenario where stored procedure PROC1 calls PROC2, and PROC2 calls
PROC3 represents three levels of nested stored procedures.
v the calling and called stored procedures at all levels of nesting cannot be
cataloged as NOT FENCED
Nested SQL procedures can return one or more result sets to the client
application or to the calling stored procedure. To return a result set from an
SQL procedure to the client application, issue the DECLARE CURSOR
statement using the WITH RETURN TO CLIENT clause. To return a result set
from an SQL procedure to the caller, where the caller is either a client
application or a calling stored procedure, issue the DECLARE CURSOR
statement using the WITH RETURN TO CALLER clause.
Nested embedded SQL stored procedures written in C and nested CLI stored
procedures cannot return result sets to the client application or calling stored
procedure. If a nested embedded SQL stored procedure or a nested CLI stored
procedure leaves cursors open when the stored procedure exits, DB2 closes
the cursors. For more information on returning result sets from stored
procedures, see Returning Result Sets from Stored Procedures on page 233.
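A sketch of the two DECLARE CURSOR forms in an SQL procedure; the cursor names are hypothetical:

```sql
-- result set goes to the client application, however deeply nested:
DECLARE cur_client CURSOR WITH RETURN TO CLIENT FOR
  SELECT name, salary FROM staff;
-- result set goes to the immediate caller, client or procedure:
DECLARE cur_caller CURSOR WITH RETURN TO CALLER FOR
  SELECT name, salary FROM staff;
```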
For example, assume the stored procedure MYPROC contains the following
code fragment:
OPEN c1;
CALL MYPROC();
CLOSE c1;
DB2 returns an error when MYPROC is called because cursor c1 is still open
when MYPROC issues a recursive CALL statement. The specific error
returned by DB2 depends on the actions MYPROC performs on the cursor.
To avoid the error, close all open cursors before issuing the nested CALL
statement:
OPEN c1;
CLOSE c1;
CALL MYPROC();
Restrictions
When you create a stored procedure, you must observe the following
restrictions:
v Do not use the standard I/O streams, for example, calls to
System.out.println() in Java, printf() in C/C++, or DISPLAY in COBOL.
Stored procedures run in the background, so you cannot write to the screen.
However, you can write to a file.
v Do not use the following statements and commands, which are not
supported in stored procedures:
CONNECT TO
CONNECT RESET
CREATE DATABASE
DROP DATABASE
FORWARD RECOVERY
RESTORE
v On UNIX-based systems, NOT FENCED stored procedures run under the
user ID of the DB2 Agent Process. FENCED stored procedures run under
the user ID of the db2dari executable, which is set to the owner of the
.fenced file in sqllib/adm. This user ID controls the system resources
available to the stored procedure. For information on the db2dari
executable, refer to the Quick Beginnings book for your platform.
v You cannot overload stored procedures that accept the same number of
parameters, even if the parameters are of different SQL data types.
v Stored procedures cannot contain commands that would terminate the
current process. A stored procedure should always return control to the
client without terminating the current process.
To register an OLE automation stored procedure, issue a CREATE
PROCEDURE statement with the LANGUAGE OLE clause. The external name
consists of the OLE progID identifying the OLE automation object and the
method name separated by ! (exclamation mark). The OLE automation object
must be implemented as an in-process server (.DLL).
The following CREATE PROCEDURE statement registers an OLE automation
stored procedure called median for the median method of the OLE
automation object db2smpl.salary:
CREATE PROCEDURE median (INOUT sal DOUBLE)
EXTERNAL NAME 'db2smpl.salary!median'
LANGUAGE OLE
FENCED
PARAMETER STYLE DB2SQL
The calling conventions for OLE method implementations are identical to the
conventions for procedures written in C or C++.
DB2 automatically handles the type conversions between SQL types and OLE
automation types. For a list of the DB2 mappings between supported OLE
automation types and SQL types, see Table 16 on page 427. For a list of the
DB2 mappings between SQL types and the data types of the OLE
programming language, such as BASIC or C/C++, see Table 17 on page 428.
Data passed between DB2 and OLE automation stored procedures is passed as
call by reference. DB2 does not support SQL types such as DECIMAL or
LOCATORS, or OLE automation types such as boolean or CURRENCY, that
are not listed in the previously referenced tables. Character and graphic data
mapped to BSTR is converted from the database code page to the UCS-2
encoding scheme (also known as Unicode, IBM code page 13488). Upon return, the data is
converted back to the database code page. These conversions occur regardless
of the database code page. If code page conversion tables to convert from the
database code page to UCS-2 and from UCS-2 to the database code page are
not installed, you receive an SQLCODE -332 (SQLSTATE 57017).
(A property of the median is that half the values lie above it, and half below
it.) The median salary is then passed back to the client application using an
OUT host variable.
This sample program calculates the median salary of all employees in the
SAMPLE database. Since there is no existing SQL column function to calculate
medians, the median salary can be found iteratively by the following
algorithm:
1. Determine the number of records, n, in the table.
2. Order the records based upon salary.
3. Fetch records until the record in row position n/2 + 1 is found.
4. Read the median salary from this record.
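Expressed against the STAFF table used in the sample, the algorithm corresponds to the following sketch:

```sql
SELECT COUNT(*) INTO :numRecords FROM staff;        -- step 1
DECLARE c1 CURSOR FOR
  SELECT salary FROM staff ORDER BY salary;          -- step 2
-- steps 3 and 4: OPEN c1, then FETCH c1 INTO :medianSalary
-- repeatedly until row n/2 + 1 is reached
```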
An application that uses neither the stored procedures technique, nor blocking
cursors, must FETCH each salary across the network as shown in Figure 5.
[Figure 5: the client workstation sends the request SELECT SALARY FROM
STAFF ORDER BY SALARY to the database server, and every salary is fetched
across the network.]
Since only the salary at row n/2 + 1 is needed, the application discards all
the additional data, but only after it is transmitted across the network.
You can design an application using the stored procedures technique that
allows the stored procedure to process and discard the unnecessary data,
returning only the median salary to the client application. Figure 6 shows this
feature.
[Figure 6: the client workstation calls a stored procedure on the database
server; the procedure processes the data and returns only the result.]
The example is provided in two versions: the Java client Outcli.java calls the
SQLJ stored procedure Outsrv.sqlj, and the embedded C client spclient.sqc
calls the stored procedure spserver.sqc. The java.math.BigDecimal class
provides Java support for the DB2 DECIMAL data type.
Chapter 7. Stored Procedures
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
outLanguage(con);
outParameter(con);
inParameters(con);
inoutParam(con, outMedian);
resultSet(con);
twoResultSets(con);
allDataTypes(con);
// rollback any changes to the database
con.rollback();
con.close();
}
catch (Exception e)
{
try { con.close(); } catch (Exception x) { }
e.printStackTrace ();
}
} // end main
public static void outParameter(Connection con)
throws SQLException
{
// prepare the CALL statement for OUT_PARAM
procName = "OUT_PARAM";
sql = "CALL " + procName + "(?, ?, ?)";
callStmt = con.prepareCall(sql);
if (outErrorCode == 0) {
System.out.println(procName + " completed successfully");
System.out.println ("Median salary returned from OUT_PARAM = "
+ outMedian);
}
else { // stored procedure failed
System.out.println(procName + " failed with SQLCODE "
+ outErrorCode);
System.out.println(procName + " failed at " + outErrorLabel);
}
#include <stdio.h>
#include <stdlib.h>
#include <sql.h>
#include <sqlda.h>
#include <sqlca.h>
#include <string.h>
#include "utilemb.h"
outparameter();
EXEC SQL ROLLBACK;
EMB_SQL_CHECK("ROLLBACK");
printf("\nStored procedure rolled back.\n\n");
int outparameter() {
/********************************************************\
* Call OUT_PARAM stored procedure
*
\********************************************************/
EXEC SQL BEGIN DECLARE SECTION;
/* Declare host variables for passing data to OUT_PARAM;     */
/* procname, out_sqlcode and out_buffer are added here for   */
/* completeness, and their sizes are assumptions             */
char procname[255];
double out_median;
sqlint32 out_sqlcode;
char out_buffer[33];
EXEC SQL END DECLARE SECTION;
strcpy(procname, "OUT_PARAM");
printf("\nCALL stored procedure named %s\n", procname);
/* OUT_PARAM is PS GENERAL, so do not pass a null indicator */
EXEC SQL CALL :procname (:out_median, :out_sqlcode, :out_buffer);
EMB_SQL_CHECK("CALL OUT_PARAM");
/* Check that the stored procedure executed successfully */
if (out_sqlcode == 0)
{
return 0;
// clean up resources
rs2.close();
stmt2.close();
con.close();
}
catch (SQLException sqle)
{
errorCode[0] = sqle.getErrorCode();
}
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlda.h>
#include <sqlca.h>
#include <sqludf.h>
#include <sql.h>
#include <memory.h>
strcpy(buffer, "SELECT");
EXEC SQL SELECT COUNT(*) INTO :numRecords FROM staff;
strcpy(buffer, "OPEN");
EXEC SQL OPEN c1;
strcpy(buffer, "FETCH");
while (counter < (numRecords / 2 + 1)) {
EXEC SQL FETCH c1 INTO :medianSalary;
C++ Consideration
When writing a stored procedure in C++, you may want to consider declaring
the procedure name using extern "C", as in the following example:
extern "C" SQL_API_RC SQL_API_FN proc_name( short *parm1, char *parm2)
The extern "C" prevents type decoration (or "mangling") of the function name
by the C++ compiler. Without this declaration, you must include all the
type decorations for the function name when you call the stored procedure.
For example, if your client application passes the string "A SHORT STRING" as a
CHAR(200) parameter to a stored procedure, DB2 has to pad the parameter
with 186 spaces, null-terminate the string, then send the entire 200-character
string and null terminator across the network to the stored procedure.
db2set DB2_STPROC_LOOKUP_FIRST=ON
Notes:
1. While you can expect performance improvements from running NOT
FENCED stored procedures, user code can accidentally or maliciously
damage the database control structures. You should only use NOT
FENCED stored procedures when you need to maximize the performance
benefits. Test all your stored procedures thoroughly prior to running them
as NOT FENCED.
2. If a severe error does occur while you are running a NOT FENCED stored
procedure, the database manager determines whether the error occurred in
the stored procedure code or the database code, and attempts an
appropriate recovery.
3. Run the client application on the same machine as the DB2 server.
Note: You should not use static data in stored procedures, because DB2
cannot guarantee whether the static data in a stored procedure is
reinitialized on subsequent invocations.
NOT FENCED stored procedures must be precompiled with the
WCHARTYPE NOCONVERT option. See The WCHARTYPE Precompiler
Option in C and C++ on page 623 for more information.
DB2 does not support the use of any of the following features in NOT
FENCED stored procedures:
v 16-bit
v Multi-threading
v Nested calls: calling or being called by another stored procedure
v Result sets: returning result sets to a client application or caller
v REXX
The following DB2 APIs and any DB2 CLI API are not supported in a NOT
FENCED stored procedure:
v BIND
v EXPORT
v IMPORT
v PRECOMPILE PROGRAM
v ROLLFORWARD DATABASE
C: spserver.sqc
Java: Spserver.java
This sample stored procedure accepts one IN parameter and returns one OUT
parameter and one result set. The stored procedure uses the IN parameter to
create a result set containing the values of the NAME, JOB, and SALARY
columns for the STAFF table for rows where SALARY is greater than the IN
parameter.
Java stored procedures: for each result set that a PARAMETER STYLE
JAVA stored procedure returns, you must include a corresponding
ResultSet[] argument in the stored procedure method signature.
Spclient.java
Fetch the rows from the result set. The sample CLI client uses a while
loop to fetch and display all rows from the result set. The sample
JDBC client calls a class method called fetchAll that fetches and
displays all rows from a result set.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlcli1.h>
#include <sqlca.h>
#include "utilcli.h"
SQLCHAR      stmt[50];
SQLINTEGER   out_sqlcode;
char         out_buffer[33];
SQLINTEGER   indicator;
struct sqlca sqlca;
SQLRETURN    rc, rc1;
char         procname[254];
SQLHANDLE    henv;   /* environment handle */
SQLHANDLE    hdbc;   /* connection handle */
SQLHANDLE    hstmt1; /* statement handle */
SQLHANDLE    hstmt2; /* statement handle */
SQLRETURN    sqlrc = SQL_SUCCESS;
double       out_median;
int oneresultset1(SQLHANDLE);
int main(int argc, char *argv[])
{
SQLHANDLE hstmt;           /* statement handle */
SQLHANDLE hstmt_oneresult; /* statement handle */
char dbAlias[SQL_MAX_DSN_LENGTH + 1];
char user[MAX_UID_LENGTH + 1];
char pswd[MAX_PWD_LENGTH + 1];
return( SQL_SUCCESS ) ;
int oneresultset1(hstmt)
SQLHANDLE hstmt; /* statement handle */
{
/********************************************************\
* Call one_result_set_to_client stored procedure
*
\********************************************************/
double      insalary = 20000;
SQLINTEGER  salary_int;
SQLSMALLINT num_cols;
char        name[40];
char        job[10];
strcpy(procname, "RESULT_SET_CALLER");
printf("\nCALL stored procedure: %s\n", procname);
rc = SQLFetch( hstmt );
}
STMT_HANDLE_CHECK( hstmt, rc);
/* Check that the stored procedure executed successfully */
if (rc == SQL_SUCCESS) {
printf("Stored procedure returned successfully.\n");
}
else {
printf("Stored procedure returned SQLCODE %d\n", out_sqlcode);
}
rc = SQLCloseCursor(hstmt);
}
return(rc);
}
else { // stored procedure failed
System.out.println(procName + " failed with SQLCODE "
+ outErrorCode);
}
Resolving Problems
If a stored procedure application fails to execute properly, ensure that:
v The stored procedure is built using the correct calling sequence, compile
options, and so on.
v The application executes locally with both client application and stored
procedure on the same workstation.
v The stored procedure is stored in the proper location in accordance with the
instructions in the Application Building Guide.
For example, in an OS/2 environment, the dynamic link library for a
FENCED stored procedure is located in the instance_name\function
directory on the database server.
v The application, unless it is written using DB2 CLI or JDBC, is bound to
the database.
v The stored procedure accurately returns any SQLCA error information to
the client application.
v Stored procedure function names are case-sensitive, and must match exactly
on client and server.
v If you register the stored procedure with a CREATE PROCEDURE
statement, stored procedure function names must not match their library
name.
For example, the database manager will execute the stored procedure
myfunc contained in the Windows 32-bit operating system library
myfunc.dll as a DB2DARI function, disregarding the values specified in its
associated CREATE PROCEDURE statement.
Note: For more information on debugging Java stored procedures, see
Debugging Stored Procedures in Java on page 665.
You can use the debugger supplied with your compiler to debug a local
FENCED stored procedure as you would any other application. Consult your
compiler documentation for information on using the supplied debugger.
For example, to use the debugger supplied with Visual Studio on Windows
NT, perform the following steps:
Step 1. Set the DB2_STPROC_ALLOW_LOCAL_FENCED registry variable to true.
Step 2. Compile the source file for the stored procedure DLL with the -Zi
and -Od flags, and then link the DLL using the -DEBUG option.
Step 3. Copy the resulting DLL to the instance_name \function directory of
the server.
Step 4. Invoke the client application on the server with the Visual Studio
debugger. For the client application spclient.exe, enter the following
command:
msdev spclient.exe
Step 5. When the Visual Studio debugger window opens, select Project >
Settings.
Step 6. Click the Debug tab.
Step 7. Click the Category arrow and select the Additional DLLs.
Step 8. Click the New button to create a new module.
Step 9. Click the Browse button to open the Browse window.
Step 10. Select the module spserver.dll and click OK to close the Settings
window.
Step 11. Open the source file for the stored procedure and set a breakpoint.
Step 12. Click the Go button. The Visual Studio debugger stops when the
stored procedure is invoked.
Step 13. At this point, you can debug the stored procedure using the Visual
Studio debugger.
Refer to the Visual Studio product documentation for further information on
using the Visual Studio debugger.
v Other information about the procedure, such as the specific name of the
procedure and the number of result sets returned by the procedure.
Unlike a CREATE PROCEDURE statement for an external stored procedure,
the CREATE PROCEDURE statement for an SQL procedure does not specify
the EXTERNAL clause. Instead, an SQL procedure has a procedure body,
which contains the source statements for the stored procedure.
The following example shows a CREATE PROCEDURE statement for a simple
stored procedure. The procedure name, the list of parameters that are passed
to or from the procedure, and the LANGUAGE parameter are common to all
stored procedures. However, the LANGUAGE value of SQL and the
BEGIN...END block, which forms the procedure body, are particular to an
SQL procedure.
CREATE PROCEDURE UPDATE_SALARY_1
(IN EMPLOYEE_NUMBER CHAR(6),
 IN RATE INTEGER)
LANGUAGE SQL
BEGIN
  UPDATE EMPLOYEE
  SET SALARY = SALARY * (1.0 * RATE / 100.0 )
  WHERE EMPNO = EMPLOYEE_NUMBER;
END
Within the SQL procedure body, you cannot use OUT parameters as a value
in any expression. You can only assign values to OUT parameters using the
assignment statement, or as the target variable in the INTO clause of SELECT,
VALUES and FETCH statements. You cannot use IN parameters as the target
of assignment or INTO clauses.
variable that is defined and used only within a procedure body. You
cannot assign values to IN parameters.
CASE statement
Selects an execution path based on the evaluation of one or more
conditions. This statement is similar to the CASE expression described in
the SQL Reference.
FOR statement
Executes a statement or group of statements for each row of a table.
GET DIAGNOSTICS statement
The GET DIAGNOSTICS statement returns information about the
previous SQL statement.
GOTO statement
Transfers program control to a user-defined label within an SQL routine.
IF statement
Selects an execution path based on the evaluation of a condition.
ITERATE statement
Passes the flow of control to a labelled block or loop.
LEAVE statement
Transfers program control out of a loop or block of code.
LOOP statement
Executes a statement or group of statements multiple times.
REPEAT statement
Executes a statement or group of statements until a search condition is
true.
RESIGNAL statement
The RESIGNAL statement is used within a condition handler to resignal
an error or warning condition. It causes an error or warning to be
returned with the specified SQLSTATE, along with optional message text.
RETURN statement
Returns control from the SQL procedure to the caller. You can also return
an integer value to the caller.
SIGNAL statement
The SIGNAL statement is used to signal an error or warning condition. It
causes an error or warning to be returned with the specified SQLSTATE,
along with optional message text.
SQL statement
The SQL procedure body can contain any SQL statement listed in
Appendix A. Supported SQL Statements on page 737.
249
WHILE statement
Repeats the execution of a statement or group of statements while a
specified condition is true.
Compound statement
Can contain one or more of any of the other types of statements in this
list, as well as SQL variable declarations, condition handlers, or cursor
declarations.
For a complete list of the SQL statements allowed within an SQL procedure
body, see Appendix A. Supported SQL Statements on page 737. For detailed
descriptions and syntax of each of these statements, refer to the SQL Reference.
IF (rating = 1)
THEN UPDATE employee
SET salary = salary * 1.10, bonus = 1000
WHERE empno = employee_number;
ELSEIF (rating = 2)
THEN UPDATE employee
SET salary = salary * 1.05, bonus = 500
WHERE empno = employee_number;
ELSE UPDATE employee
SET salary = salary * 1.03, bonus = 0
WHERE empno = employee_number;
END IF;
END
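Several of these statements work together. The following sketch (a hypothetical procedure, not one of the manual's samples) uses LOOP, LEAVE, ITERATE, and IF to sum the even numbers up to an input value:

```sql
CREATE PROCEDURE SUM_EVEN (IN n INTEGER, OUT total INTEGER)
LANGUAGE SQL
BEGIN
  DECLARE i INTEGER DEFAULT 0;
  SET total = 0;
  loop1: LOOP
    SET i = i + 1;
    IF i > n THEN
      LEAVE loop1;              -- exit the loop
    END IF;
    IF MOD(i, 2) = 1 THEN
      ITERATE loop1;            -- skip odd values
    END IF;
    SET total = total + i;
  END LOOP loop1;
END
```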
To process the DB2 CLP script from the command line, use the following
syntax:
db2 -td<term-char> -vf <script-name>
When DB2 raises a condition that matches condition, DB2 passes control to the
condition handler. The condition handler performs the action indicated by
handler-type, and then executes SQL-procedure-statement.
handler-type
CONTINUE
Specifies that after SQL-procedure-statement completes, execution
continues with the statement after the statement that caused the
error.
EXIT
Specifies that after SQL-procedure-statement completes, execution
continues at the end of the compound statement that contains the
handler.
UNDO
Specifies that before SQL-procedure-statement executes, DB2 rolls
back any SQL operations that have occurred in the compound
statement that contains the handler. After SQL-procedure-statement
completes, execution continues at the end of the compound
statement that contains the handler.
Note: You can only declare UNDO handlers in ATOMIC
compound statements.
condition
DB2 provides three general conditions:
NOT FOUND
Identifies any condition that results in an SQLCODE of +100 or an
SQLSTATE of 02000.
SQLEXCEPTION
Identifies any condition that results in a negative SQLCODE.
SQLWARNING
Identifies any condition that results in a warning condition
(SQLWARN0 is W), or that results in a positive SQL return code
other than +100.
You can also use the DECLARE statement to define your own condition
for a specific SQLSTATE. For more information on defining your own
condition, refer to the SQL Reference.
SQL-procedure-statement
You can use a single SQL procedure statement to define the behavior of
the condition handler. DB2 accepts a compound statement delimited by a
BEGIN...END block as a single SQL procedure statement. If you use a
compound statement to define the behavior of a condition handler, and
you want the handler to retain the value of either the SQLSTATE or
SQLCODE variables, you must assign the value of the variable to a local
variable or parameter in the first statement of the compound block. If the
first statement of a compound block does not assign the value of
SQLSTATE or SQLCODE to a local variable or parameter, SQLSTATE and
SQLCODE cannot retain the value that caused DB2 to invoke the
condition handler.
Note: You cannot define another condition handler within the condition
handler.
For more information on the SIGNAL and RESIGNAL statements, refer to the
SQL Reference.
You can also use CONTINUE condition handlers to assign the value of the
SQLSTATE and SQLCODE variables to local variables in your SQL procedure
body. You can then use these local variables to control your procedural logic,
or pass the value back as an output parameter. In the following example, the
SQL procedure returns control to the statement following each SQL statement
with the SQLCODE set in a local variable called RETCODE.
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE retcode INTEGER DEFAULT 0;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET retcode = SQLCODE;
DECLARE CONTINUE HANDLER FOR SQLWARNING SET retcode = SQLCODE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET retcode = SQLCODE;
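With these handlers in place, the procedure body can test the local variable after each statement; a minimal sketch (the table, column, and result_msg variable are illustrative):

```sql
UPDATE employee
  SET salary = salary * 1.05
  WHERE empno = employee_number;
IF retcode = 100 THEN
  SET result_msg = 'no matching employee';   -- NOT FOUND
ELSEIF retcode < 0 THEN
  SET result_msg = 'update failed';          -- SQLEXCEPTION
END IF;
```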
If your dynamic SQL statement contains parameter markers, you must use the
PREPARE and EXECUTE statements. If you plan to execute a dynamic SQL
statement multiple times, it might be more efficient to issue a single
PREPARE statement and to issue the EXECUTE statement multiple times
rather than issuing the EXECUTE IMMEDIATE statement each time. To use
the PREPARE and EXECUTE statements to issue dynamic SQL in your SQL
procedure, you must include the following statements in the SQL procedure
body:
Step 1. Declare a variable of type VARCHAR that is large enough to hold
your dynamic SQL statement using a DECLARE statement.
Step 2. Assign a statement string to the variable using a SET statement. You
cannot include variables directly in the statement string. Instead, you
must use the question mark ('?') symbol as a parameter marker for
any variables used in the statement.
Step 3. Create a prepared statement from the statement string using a
PREPARE statement.
Step 4. Execute the prepared statement using an EXECUTE statement. If the
statement string includes a parameter marker, use a USING clause to
replace it with the value of a variable.
Note: Statement names defined in PREPARE statements for SQL procedures
are treated as scoped variables. Once the SQL procedure exits the scope
in which you define the statement name, DB2 can no longer access the
statement name. Inside any compound statement, you cannot issue two
PREPARE statements that use the same statement name.
Example: Dynamic SQL statements: The following example shows an SQL
procedure that includes dynamic SQL statements.
The procedure receives a department number (deptNumber) as an input
parameter. In the procedure, three statement strings are built, prepared, and
executed. The first statement string executes a DROP statement to ensure that
the table to be created does not already exist. This table is named
DEPT_deptno_T, where deptno is the value of input parameter deptNumber. A
CONTINUE HANDLER ensures that the SQL procedure will continue if it
detects SQLSTATE 42704 (undefined object name), which DB2 returns from
the DROP statement if the table does not exist. The second statement string
issues a CREATE statement to create DEPT_deptno_T. The third statement
string inserts rows for employees in department deptno into DEPT_deptno_T.
The third statement string contains a parameter marker that represents
deptNumber. When the prepared statement is executed, parameter deptNumber
is substituted for the parameter marker.
CREATE PROCEDURE create_dept_table
(IN deptNumber VARCHAR(3), OUT table_name VARCHAR(30))
LANGUAGE SQL
Chapter 8. Writing SQL Procedures
BEGIN
DECLARE stmt VARCHAR(1000);
-- continue if sqlstate 42704 ('undefined object name')
DECLARE CONTINUE HANDLER FOR SQLSTATE '42704'
SET stmt = '';
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SET table_name = 'PROCEDURE_FAILED';
SET table_name = 'DEPT_'||deptNumber||'_T';
SET stmt = 'DROP TABLE '||table_name;
PREPARE s1 FROM stmt;
EXECUTE s1;
SET stmt = 'CREATE TABLE '||table_name||
'( empno CHAR(6) NOT NULL, '||
'firstnme VARCHAR(12) NOT NULL, '||
'midinit CHAR(1) NOT NULL, '||
'lastname VARCHAR(15) NOT NULL, '||
'salary DECIMAL(9,2))';
PREPARE s2 FROM STMT;
EXECUTE s2;
SET stmt = 'INSERT INTO '||table_name || ' ' ||
'SELECT empno, firstnme, midinit, lastname, salary '||
'FROM employee '||
'WHERE workdept = ?';
PREPARE s3 FROM stmt;
EXECUTE s3 USING deptNumber;
END
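Once the procedure is created, it can be invoked with a CALL statement; the output parameter is supplied as a parameter marker, and department 'D21' is just an illustrative value:

```sql
CALL create_dept_table ('D21', ?)
```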
result sets from those procedures. To return a result set from an SQL
procedure, write your SQL procedure as follows:
1. Declare the number of result sets that the SQL procedure returns using the
DYNAMIC RESULT SETS clause of the CREATE PROCEDURE statement.
2. Declare a cursor using the DECLARE CURSOR statement.
3. Open the cursor using the OPEN CURSOR statement.
4. Exit from the SQL procedure without closing the cursor.
For example, you can write an SQL procedure that returns a single result set,
based on the value of the INOUT parameter threshold, as follows:
CREATE PROCEDURE RESULT_SET (INOUT threshold SMALLINT)
LANGUAGE SQL
DYNAMIC RESULT SETS 1
BEGIN
DECLARE cur1 CURSOR WITH RETURN TO CALLER FOR
SELECT name, job, years
FROM staff
WHERE years < threshold;
OPEN cur1;
END
ALLOCATE CURSOR
Use the ALLOCATE CURSOR statement in a calling SQL procedure to
open a result set returned from a target SQL procedure. To use the
ALLOCATE CURSOR statement, the result set must already be
associated with a result set locator through the ASSOCIATE RESULT
SET LOCATORS statement. Once the SQL procedure issues an
ALLOCATE CURSOR statement, you can fetch rows from the result
set using the cursor name declared in the ALLOCATE CURSOR
statement. To extend the previously described ASSOCIATE
LOCATORS example, the SQL procedure could fetch rows from the
first of the returned result sets using the following SQL:
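A sketch of that pattern follows; the procedure name, the locator variable loc1, and the target variables are assumptions for illustration:

```sql
CALL get_staff_results (insalary);
ASSOCIATE RESULT SET LOCATOR (loc1) WITH PROCEDURE get_staff_results;
ALLOCATE rscur CURSOR FOR RESULT SET loc1;
FETCH rscur INTO v_name, v_job;
```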
Once you display the message, try modifying your SQL procedure following
the suggestions in the User Response section.
normally creates a log file that contains the error messages. This log file, and
other intermediate files, are described in Debugging SQL Procedures Using
Intermediate Files on page 263.
To retrieve the error messages generated by DB2 and the C compiler for an
SQL procedure, display the message log file in the following directory on
your database server:
UNIX $DB2PATH/function/routine/sqlproc/$DATABASE/$SCHEMA/tmp
where $DB2PATH represents the location of the instance directory,
$DATABASE represents the database name, and $SCHEMA represents
the schema name used to create the SQL procedure.
Windows NT
%DB2PATH%\function\routine\sqlproc\%DB%\%SCHEMA%\tmp
where %DB2PATH% represents the location of the instance directory,
%DB% represents the database name, and %SCHEMA% represents
the schema name used to create the SQL procedure.
You can also issue a CALL statement in an application to call the sample
stored procedure db2udp!get_error_messages using the following syntax:
CALL db2udp!get_error_messages(schema-name, file-name, message-text)
}
catch (Exception e) { throw e; }
finally
{
if (stmt != null) stmt.close();
}
You could use the following C application to display the error messages for
an SQL procedure:
int getErrors(char inputSchema[9], char inputFilename[9],
char outputFilecontents[32000])
{
EXEC SQL BEGIN DECLARE SECTION;
char procschema[100] = "";
char filename[100] = "";
char filecontents[32000] = "";
EXEC SQL END DECLARE SECTION;
strcpy (procschema, inputSchema);
strcpy (filename, inputFilename);
Note: Before you can display the error messages for an SQL procedure that
DB2 failed to create, you must know both the procedure name and the
generated file name of the SQL procedure. If the procedure schema
name is not issued as part of the CREATE PROCEDURE statement,
DB2 uses the value of the CURRENT SCHEMA special register. To
display the value of the CURRENT SCHEMA special register, issue the
following statement at the CLP:
VALUES CURRENT SCHEMA
Before DB2 can use the new value of the registry variable, you must restart
the database.
Example 3: Using Nested SQL Procedures with Global Temporary Tables and Result
Sets:
The following example shows how to use the ASSOCIATE RESULT SET
LOCATOR and ALLOCATE CURSOR statements to return a result set from
the called SQL procedure, temp_table_insert, to the calling SQL procedure,
temp_table_create. The example also shows how a called SQL procedure can
use a global temporary table that is created by a calling SQL procedure.
In the example, a client application or another SQL procedure calls
temp_table_create, which creates the global temporary table SESSION.TTT
and in turn calls temp_table_insert.
To use the SESSION.TTT global temporary table, temp_table_insert contains
a DECLARE GLOBAL TEMPORARY TABLE statement identical to the
statement that temp_table_create issues to create SESSION.TTT. The
difference is that temp_table_insert contains the DECLARE GLOBAL
where ts1 represents the name of the user temporary tablespace, and
ts1file represents the name of the container used by the tablespace.
CREATE PROCEDURE temp_table_create(IN parm1 INTEGER, IN parm2 INTEGER,
OUT parm3 INTEGER, OUT parm4 INTEGER)
LANGUAGE SQL
BEGIN
DECLARE loc1 RESULT_SET_LOCATOR VARYING;
DECLARE total3,total4 INTEGER DEFAULT 0;
DECLARE rcolumn1, rcolumn2 INTEGER DEFAULT 0;
DECLARE result_set_end INTEGER DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND, SQLEXCEPTION, SQLWARNING
BEGIN
SET result_set_end = 1;
END;
--Create the temporary table that is used in both this SQL procedure
--and in the SQL procedure called by this SQL procedure.
DECLARE GLOBAL TEMPORARY TABLE ttt(column1 INT, column2 INT)
NOT LOGGED;
--Insert rows into the temporary table.
--The result set includes these rows.
BEGIN
--The WITH RETURN TO CALLER clause causes the SQL procedure
--to return its result set to the calling procedure.
DECLARE cur1 CURSOR WITH RETURN TO CALLER
FOR SELECT * FROM session.ttt;
--To return a result set, open a cursor without closing the cursor.
OPEN cur1 ;
END;
END
You can export SQL stored procedures and create Java stored procedures from
existing Java class files. To provide a comfortable development environment,
the Stored Procedure Builder code editor enables you to use vi or emacs key
bindings, in addition to the default key bindings.
Launching Stored Procedure Builder:
On Windows 32-bit operating systems, you can launch Stored Procedure
Builder from the DB2 Universal Database program group, by issuing the db2spb
command from the command line, or from any of the following development
applications:
v Microsoft Visual C++ 5.0 and 6.0
v Microsoft Visual Basic 5.0 and 6.0
v IBM VisualAge for Java
On AIX and Solaris Operating Environment clients, you can launch Stored
Procedure Builder by issuing the db2spb command from the command line.
Stored Procedure Builder is implemented with Java and all database
connections are managed with Java Database Connectivity (JDBC). Using a
JDBC driver, you can connect to any local DB2 alias or any other database for
which you can specify a host, port, and database name.
Note: To use Stored Procedure Builder, you must be connected to a DB2
database for development. For more information about using Stored
Procedure Builder, refer to the IBM DB2 Stored Procedure Builder
online help.
basic SQL structure and then use the source code editor to modify the stored
procedure to contain sophisticated stored procedure logic.
When creating a stored procedure, you can choose to return a single result set,
multiple result sets, or output parameters only. You might choose not to
return a result set when your stored procedure creates or updates database
tables. You can use the stored procedure wizards to define input and output
parameters for a stored procedure so that it receives values for host variables
from the client application. Additionally, you can create multiple SQL
statements in a stored procedure, allowing the stored procedure to receive a
case value and then to select one of a number of queries.
To build a stored procedure on a target database, simply click Finish in the
stored procedure wizards. You do not have to manually register the stored
procedure with DB2 by using the CREATE PROCEDURE statement.
server. To debug a stored procedure, you build the stored procedure in debug
mode, add a debug entry for your client IP address, and run the stored
procedure. You are not required to debug the stored procedures from within
an application program. You can separate testing your stored procedure from
testing the calling application program.
Using Stored Procedure Builder, you can view all the stored procedures for
which you have the authority to add, change, or remove debug entries in the
stored procedures debug table. If you are a database administrator or the
creator of the selected stored procedure, you can grant authorization to other
users to debug the stored procedure.
your applications that use structured types and distinct types, refer to
the Administration Guide. For more information on the CREATE
INDEX EXTENSION statement, refer to the SQL Reference.
Constraints
Constraints are rules that you define that the database enforces. There
are four types of constraints:
Unique
Ensures the unique values of a key in a table. Any changes to
the columns that compose the unique key are checked for
uniqueness.
Referential integrity
Enforces referential constraints on insert, update, and delete
operations. It is the state of a database in which all values of
all foreign keys are valid.
Table check
Verifies that changed data does not violate conditions specified
when a table was created or altered.
Triggers
Triggers consist of SQL statements that are associated with a
table and are automatically activated when data change
operations occur on that table. You can use triggers to support
general forms of integrity such as business rules.
For more information about unique constraints, referential integrity,
and table check constraints, refer to the Administration Guide. For more
information on triggers, refer to Chapter 16. Using Triggers in an
Active DBMS on page 483.
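The first three constraint types can be declared directly in a CREATE TABLE statement; a sketch follows (the table and column names are illustrative):

```sql
CREATE TABLE dept_budget
  (deptno      CHAR(3) NOT NULL,
   fiscal_year INTEGER NOT NULL CHECK (fiscal_year > 1985), -- table check
   amount      DECIMAL(9,2),
   CONSTRAINT uq_budget UNIQUE (deptno, fiscal_year),       -- unique
   CONSTRAINT fk_dept FOREIGN KEY (deptno)
     REFERENCES department (deptno))                        -- referential integrity
```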
Using object-oriented features in traditional applications
There is an important synergy among the object-oriented features of
DB2. The use of the DB2 object-oriented mechanisms is not restricted
to the support of object-oriented applications. Just as C++, a popular
object-oriented programming language, is used to implement all sorts
of non-object-oriented applications, the object-oriented mechanisms
provided by DB2 are also very useful to support all kinds of
non-object-oriented applications. The object-relational features of DB2
are general-purpose mechanisms that can be used to model any
database application. For this reason, these DB2 object extensions offer
extensive support for non-traditional, that is, object-oriented,
applications, in addition to improving support for traditional ones.
User-defined Distinct Types
Distinct types are based on existing built-in types. For example, you might
have distinct types to represent various currencies, such as USDollar and
Chapter 10. Using the Object-Relational Capabilities
Methods
Methods, like UDFs, define behavior for objects, but they differ from
functions in the following ways:
v Methods are tightly associated with a particular user-defined
structured type and are stored in the same schema as the
user-defined type.
v Methods can be invoked on user-defined structured types that are
stored as values in columns, or, using the dereference operator (->),
on scoped references to structured types.
v Methods are invoked using a different SQL syntax from that used
to invoke functions.
v DB2 resolves unqualified references to methods starting with the
type on which the method was invoked. If the type on which the
method was invoked does not define the method, DB2 tries to
resolve the method by calling the method on the supertype of the
type on which the method was invoked.
To invoke a method on a structured type stored in a column, include
the name of the structured type (or an expression that resolves to a
structured type), followed by the method invocation operator (..),
followed by the name of the method. To invoke a method on a scoped
reference of a structured type, include the reference to the structured
type using the dereference operator (->), followed by the method
invocation operator, followed by the name of the method.
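For example, assuming a structured column named address with a method format() (both names are illustrative), an invocation on the column value might look like this:

```sql
SELECT e.address..format()
FROM employee e
```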
For more information about the object-relational features of DB2, refer to:
v Chapter 12. Working with Complex Objects: User-Defined Structured
Types on page 291
v Chapter 11. User-defined Distinct Types on page 281
v Chapter 13. Using Large Objects (LOBs) on page 349
v Chapter 14. User-Defined Functions (UDFs) and Methods on page 373
v Chapter 15. Writing User-Defined Functions (UDFs) and Methods on
page 393
v Chapter 16. Using Triggers in an Active DBMS on page 483
types, they share the same efficient code used to implement built-in
functions, comparison operators, indexes, etc. for built-in data types.
Example: Money
Suppose you are writing applications that need to handle different currencies
and wish to ensure that DB2 does not allow these currencies to be compared
or manipulated directly with one another in queries. Remember that
conversions are necessary whenever you want to compare values of different
currencies. So you define as many distinct types as you need; one for each
currency that you may need to represent:
CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL (9,2) WITH COMPARISONS
CREATE DISTINCT TYPE CANADIAN_DOLLAR AS DECIMAL (9,2) WITH COMPARISONS
CREATE DISTINCT TYPE EURO AS DECIMAL (9,2) WITH COMPARISONS
Example: Application Forms
Suppose you keep applicant forms as large text objects and define a distinct
type based on CLOB for them; the type personal.application_form is used
again later in this chapter:
CREATE DISTINCT TYPE PERSONAL.APPLICATION_FORM AS CLOB(32K)
Because DB2 does not support comparisons on CLOBs, you do not specify the
clause WITH COMPARISONS. You have specified a schema name different
from your authorization ID since you have DBADM authority, and you would
like to keep all distinct types and UDFs dealing with applicant forms in the
same schema.
Example: Sales
Suppose you want to define tables to keep your company's sales in different
countries as follows:
CREATE TABLE US_SALES
  (PRODUCT_ITEM INTEGER,
   MONTH        INTEGER CHECK (MONTH BETWEEN 1 AND 12),
   YEAR         INTEGER CHECK (YEAR > 1985),
   TOTAL        US_DOLLAR)

CREATE TABLE CANADIAN_SALES
  (PRODUCT_ITEM INTEGER,
   MONTH        INTEGER CHECK (MONTH BETWEEN 1 AND 12),
   YEAR         INTEGER CHECK (YEAR > 1985),
   TOTAL        CANADIAN_DOLLAR)

CREATE TABLE GERMAN_SALES
  (PRODUCT_ITEM INTEGER,
   MONTH        INTEGER CHECK (MONTH BETWEEN 1 AND 12),
   YEAR         INTEGER CHECK (YEAR > 1985),
   TOTAL        EURO)
The distinct types in the above examples are created with the same CREATE
DISTINCT TYPE statements as in Example: Money on page 283. Note that the
above examples use check constraints. For information on check constraints
refer to the SQL Reference.
You have fully qualified the distinct type name because its qualifier is not the
same as your authorization ID and you have not changed the default function
path. Remember that whenever type and function names are not fully
qualified, DB2 searches through the schemas listed in the current function
path and looks for a type or function name matching the given unqualified
name. Because SYSIBM is always implicitly included in the current function
path (even when it has been omitted), you can leave built-in data types
unqualified. For example, after you execute SET CURRENT FUNCTION PATH = cheryl,
the value of the current function path special register is "CHERYL", which
does not include "SYSIBM". Even if no CHERYL.INTEGER type is defined, the
statement CREATE TABLE FOO(COL1 INTEGER) still succeeds, and COL1 is of type
SYSIBM.INTEGER, because SYSIBM is always searched.
You are, however, allowed to fully qualify the built-in data types if you wish
to do so. Details about the use of the current function path are discussed in
the SQL Reference.
Because you cannot compare US dollars with instances of the source type of
US dollars (that is, DECIMAL) directly, you have used the cast function
provided by DB2 to cast from DECIMAL to US dollars. You can also use the
other cast function provided by DB2 (that is, the one to cast from US dollars
to DECIMAL) and cast the column total to DECIMAL. Either way you decide
to cast, from or to the distinct type, you can use the cast specification notation
to perform the casting, or the functional notation. That is, you could have
written the above query as:
SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > CAST (100000 AS us_dollar)
    AND MONTH = 7
    AND YEAR = 1999
The exchange rate between Canadian and U.S. dollars may change between
two invocations of the UDF, so you declare it as NOT DETERMINISTIC.
The question now is, how do you pass Canadian dollars to this UDF and get
U.S. dollars from it? The Canadian dollars must be cast to DECIMAL values.
The DECIMAL values must be cast to DOUBLE. You also need to have the
returned DOUBLE value cast to DECIMAL and the DECIMAL value cast to
U.S. dollars.
Such casts are performed automatically by DB2 anytime you define sourced
UDFs, whose parameter and return type do not exactly match the parameter
and return type of the source function. Therefore, you need to define two
sourced UDFs.
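Mirroring the euro functions shown below, the two sourced UDFs for Canadian dollars might be sketched as follows; the names CDN_TO_US_DOUBLE and CDN_TO_US_DEC are assumptions:

```sql
-- Sourced UDF: DECIMAL in, DECIMAL out, sourced on the external
-- DOUBLE-based conversion function (name assumed):
CREATE FUNCTION CDN_TO_US_DEC (DECIMAL(9,2))
  RETURNS DECIMAL(9,2)
  SOURCE CDN_TO_US_DOUBLE (DOUBLE)

-- Sourced UDF: CANADIAN_DOLLAR in, US_DOLLAR out, sourced on the
-- DECIMAL-based function above:
CREATE FUNCTION US_DOLLAR (CANADIAN_DOLLAR)
  RETURNS US_DOLLAR
  SOURCE CDN_TO_US_DEC (DECIMAL())
```

DB2 then performs the distinct-type-to-DECIMAL and DECIMAL-to-DOUBLE casts for you when the outer function is invoked.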
That is, C1 (in Canadian dollars) is cast to decimal, which in turn is cast to a
double value that is passed to the CDN_TO_US_DOUBLE function. This function
accesses the exchange rate file and returns a double value (representing the
amount in U.S. dollars) that is cast to decimal, and then to U.S. dollars.
A function to convert euros to U.S. dollars would be similar to the example
above:
CREATE FUNCTION EURO_TO_US_DOUBL(DOUBLE)
RETURNS DOUBLE
EXTERNAL NAME '/u/finance/funcdir/currencies!euro2us'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
NOT DETERMINISTIC
NO EXTERNAL ACTION
FENCED
CREATE FUNCTION EURO_TO_US_DEC (DECIMAL(9,2))
RETURNS DECIMAL(9,2)
SOURCE EURO_TO_US_DOUBL (DOUBLE)
CREATE FUNCTION US_DOLLAR(EURO) RETURNS US_DOLLAR
SOURCE EURO_TO_US_DEC (DECIMAL())
  AND CDN.MONTH = 7
  AND CDN.YEAR = 1999
  AND GERMAN.MONTH = 7
  AND GERMAN.YEAR = 1999
You want to know the total sales in Germany for each product in 1994,
expressed in US dollars:
SELECT PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
FROM GERMAN_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
You could not write SUM (us_dollar (total)), unless you had defined a SUM
function on US dollar in a manner similar to the above.
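Such a SUM function on the distinct type might be sketched as follows, assuming the usual pattern of sourcing a column function on the built-in SUM over the source type:

```sql
-- Sourced column function: lets SUM operate directly on US_DOLLAR
-- values by delegating to the built-in SUM over DECIMAL (a sketch):
CREATE FUNCTION SUM (US_DOLLAR)
  RETURNS US_DOLLAR
  SOURCE SYSIBM.SUM (DECIMAL())
```

With this definition in place, SUM(us_dollar_column) can be written directly, without first casting the column to DECIMAL.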
You do not explicitly invoke the cast function to convert the character string
to the distinct type personal.application_form because DB2 lets you assign
instances of the source type of a distinct type to targets having that distinct
type.
You made use of DB2's cast specification to tell DB2 that the type of the
parameter marker is CLOB(32K), a type that is assignable to the distinct type
column. Remember that you cannot declare a host variable of a distinct type,
since host languages do not support distinct types. Therefore, you cannot
specify that the type of a parameter marker is a distinct type.
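Such a cast on a parameter marker might be sketched as follows; the table and column names here are assumptions, only the CAST pattern is the point:

```sql
-- Hypothetical insert into a table whose FORM column has the distinct
-- type personal.application_form; the cast specification gives the
-- parameter marker the assignable type CLOB(32K):
INSERT INTO APPLICATIONS (ID, APP_DATE, FORM)
  VALUES (64567, CURRENT DATE, CAST (? AS CLOB(32K)))
```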
Now suppose your supervisor requests that you maintain the annual total
sales in US dollars of each product and in each country, in separate tables:
CREATE TABLE US_SALES_94
  (PRODUCT_ITEM INTEGER,
   TOTAL        US_DOLLAR)

CREATE TABLE GERMAN_SALES_94
  (PRODUCT_ITEM INTEGER,
   TOTAL        US_DOLLAR)

CREATE TABLE CANADIAN_SALES_94
  (PRODUCT_ITEM INTEGER,
   TOTAL        US_DOLLAR)
INSERT INTO US_SALES_94
SELECT PRODUCT_ITEM, SUM (TOTAL)
FROM US_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
INSERT INTO GERMAN_SALES_94
SELECT PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
FROM GERMAN_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
INSERT INTO CANADIAN_SALES_94
SELECT PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
FROM CANADIAN_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
You explicitly cast the amounts in Canadian dollars and euros to US dollars
since different distinct types are not directly assignable to each other. You
cannot use the cast specification syntax because distinct types can only be cast
to their own source type.
The AS clause provides the attribute definitions associated with the type.
BusinessUnit_t is a type with two attributes: Name and Headcount. To create a
structured type, you must include the MODE DB2SQL clause in the CREATE
TYPE statement. For more information on the REF USING clause, see
Reference Types and Their Representation Types on page 295.
Structured types offer two major extensions beyond traditional relational data
types: the property of inheritance, and the capability of storing instances of a
structured type either as rows in a table, or as values in a column. The
following section briefly describes these features:
Inheritance
It is certainly possible to model objects such as people using
traditional relational tables and columns. However, structured types
offer an additional property of inheritance. That is, a structured type
can have subtypes that reuse all of its attributes and contain additional
attributes specific to the subtype. For example, the structured type
Person_t might contain attributes for Name, Age, and Address. A
subtype of Person_t might be Employee_t, which inherits all of the attributes
Name, Age, and Address, and in addition contains attributes for SerialNum,
Salary, and BusinessUnit.
Each column in the table derives its name and data type from one
of the attributes of the indicated structured type. Such tables are
known as typed tables.
v As a value in a column. To store objects in table columns, the
column is defined using the structured type as its type. The
following statement creates a Properties table with a column named
Address whose type is the structured type Address_t:
CREATE TABLE Properties
(ParcelNum INT,
Photo BLOB(2K),
Address Address_t)
...
Figure 8. The structured type hierarchy. BusinessUnit_t stands alone; Person_t is the
root of a hierarchy with subtypes Student_t and Employee_t, and Employee_t has
subtypes Manager_t and Architect_t.
In Figure 8, the person type Person_t is the root type of the hierarchy. Person_t
is also the supertype of the types below it--in this case, the type named
Employee_t and the type named Student_t. The relationships among subtypes
and supertypes are transitive; in other words, the relationship between
subtype and supertype exists throughout the entire type hierarchy. So,
Person_t is also a supertype of types Manager_t and Architect_t.
Type BusinessUnit_t, defined in Structured Types Overview on page 292,
has no subtypes. Type Address_t, defined in Inserting Structured Type
Instances into a Column on page 321, has the following subtypes:
Germany_addr_t, Brazil_addr_t, and US_addr_t.
The CREATE TYPE statement for type Person_t declares that Person_t is
INSTANTIABLE. For more information on declaring structured types using
the INSTANTIABLE or NOT INSTANTIABLE clauses, see Additional
Properties of Structured Types on page 303.
The following SQL statements create the Person_t type hierarchy:
CREATE TYPE Person_t AS
(Name VARCHAR(20),
Age INT,
Address Address_t)
INSTANTIABLE
REF USING VARCHAR(13) FOR BIT DATA
MODE DB2SQL;
CREATE TYPE Employee_t UNDER Person_t AS
(SerialNum INT,
Salary DECIMAL (9,2),
Dept REF(BusinessUnit_t))
MODE DB2SQL;
Person_t has three attributes: Name, Age, and Address. Its two subtypes,
Employee_t and Student_t, each inherit the attributes of Person_t and also
have several additional attributes that are specific to their particular types. For
example, although both employees and students have serial numbers, the
format used for student serial numbers is different from the format used for
employee serial numbers.
Note: A typed table created from the Person_t type includes the column
Address of structured type Address_t. As with any structured type
column, you must define transform functions for the structured type of
that column. For information on defining transform functions, see
Creating the Mapping to the Host Language Program: Transform
Functions on page 329.
Finally, Manager_t and Architect_t are both subtypes of Employee_t; they
inherit all the attributes of Employee_t and extend them further as appropriate
for their types. Thus, an instance of type Manager_t will have a total of seven
attributes: Name, Age, Address, SerialNum, Salary, Dept, and Bonus.
Reference Types and Their Representation Types
For every structured type you create, DB2 automatically creates a companion
type. The companion type is called a reference type and the structured type to
which it refers is called a referenced type. Typed tables can make special use of
the reference type, as described in Using Structured Types in Typed Tables
on page 304. You can also use reference types in SQL statements like other
user-defined types. To use a reference type in an SQL statement, use
REF(type-name), where type-name represents the referenced type.
DB2 uses the reference type as the type of the object identifier column in
typed tables. The object identifier uniquely identifies a row object in the typed
table hierarchy. DB2 also uses reference types to store references to rows in
typed tables. You can use reference types to refer to each row object in the
table. For more information about using references, see Using Reference
Types on page 308. For more information on typed tables, see Storing
Objects in Typed Tables on page 299.
Chapter 12. Working with Complex Objects: User-Defined Structured Types
References are strongly typed. Therefore, you must have a way to use the
type in expressions. When you create the root type of a type hierarchy, you
can specify the base type for a reference with the REF USING clause of the
CREATE TYPE statement. The base type for a reference is called the
representation type. If you do not specify the representation type with the REF
USING clause, DB2 uses the default data type of VARCHAR(16) FOR BIT
DATA. The representation type of the root type is inherited by all its subtypes.
The REF USING clause is only valid when you define the root type of a
hierarchy. In the examples used throughout this section, the representation
type for the BusinessUnit_t type is INTEGER, while the representation type
for Person_t is VARCHAR(13).
Casting and Comparing Reference Types
DB2 automatically creates functions that cast values between the reference
type and its representation type, in both directions. The CREATE TYPE
statement has an optional CAST WITH clause, described in the SQL Reference,
that allows you to choose the names of these two cast functions. By default,
the names of the cast functions are the same as the names of the structured
type and its reference representation type. For example, the CREATE TYPE
Person_t statement from Creating a Structured Type Hierarchy on page 293
automatically creates the following functions:
CREATE FUNCTION VARCHAR(REF(Person_t))
RETURNS VARCHAR(13)
DB2 also creates the function that does the inverse operation:
CREATE FUNCTION Person_t(VARCHAR(13))
RETURNS REF(Person_t)
You will use these cast functions whenever you need to insert a new value
into the typed table or when you want to compare a reference value to
another value.
DB2 also creates functions that let you compare reference types using the
following comparison operators: =, <>, <, <=, >, and >=. For more information
on comparison operators for reference types, refer to the SQL Reference.
Other System-Generated Routines
Every structured type that you create causes DB2 to implicitly create a set of
functions and methods that you can use to construct, observe, or modify a
structured type value. This means, for instance, that for type Person_t, DB2
automatically creates the following functions and methods when you create
the type:
Constructor function
A function of the same name as the type is created. This function has
no parameters and returns an instance of the type with all of its
attributes set to null. The function that is created for Person_t, for
example, is as if the following statement were executed:
CREATE FUNCTION Person_t ( ) RETURNS Person_t
Once you have associated the method specification with the type, you can
define the behavior for the type by creating the method as either an external
method or an SQL-bodied method, according to the method specification. For
example, the following statement registers an SQL method called calc_bonus
that resides in the same schema as the type Employee_t:
CREATE METHOD calc_bonus (rate DOUBLE)
FOR Employee_t
RETURN SELF..salary * rate;
You can create as many methods named calc_bonus as you like, as long as
they have different numbers or types of parameters, or are defined for types
in different type hierarchies. Because Architect_t is in the same type
hierarchy as Employee_t, you cannot create another method named calc_bonus
for Architect_t that has the same number and types of parameters.
Note: DB2 does not currently support dynamic dispatch. This means that you
cannot declare a method for a type, and then redefine the method for a
subtype using the same number of parameters. As a workaround, you
can use the TYPE predicate to determine the dynamic type and then
use the TREAT AS clause to call a different method for each dynamic
type. For an example of transform functions that handle subtypes, see
Retrieving Subtype Data from DB2 (Bind Out) on page 341.
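That workaround might be sketched as follows, combining the TYPE predicate with TREAT AS; the bonus rates, and the use of DEREF to obtain the row object from the Employee table defined later in this chapter, are assumptions:

```sql
-- Manual dispatch sketch: pick a different rate when the dynamic type
-- of the row object is Manager_t (values and style are illustrative):
SELECT E.Name,
       CASE
         WHEN DEREF(E.Oid) IS OF (Manager_t)
           THEN TREAT (DEREF(E.Oid) AS Manager_t)..calc_bonus(1.2)
         ELSE DEREF(E.Oid)..calc_bonus(1.0)
       END AS Bonus
FROM Employee E;
```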
For more information about registering, writing, and invoking methods, see
Chapter 14. User-Defined Functions (UDFs) and Methods on page 373 and
Chapter 15. Writing User-Defined Functions (UDFs) and Methods on
page 393.
To insert an instance of Person into the table, you could use the following
syntax:
INSERT INTO Person (Oid, Name, Age)
VALUES(Person_t('a'), 'Andrew', 29);
Table 10. Person typed table

Oid   Name     Age   Address
a     Andrew   29    (null)
Your program accesses attributes of the object by accessing the columns of the
typed table:
UPDATE Person SET Age=30 WHERE Name='Andrew';
After the previous UPDATE statement, the table looks like:
Table 11. Person typed table after update

Oid   Name     Age   Address
a     Andrew   30    (null)
And, again, an insert into the Employee table looks like this:
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary)
VALUES (Employee_t('s'), 'Susan', 39, 24001, 37000.48)
Table 12. Employee typed subtable

Oid   Name    Age   Address   SerialNum   Salary     Dept
s     Susan   39    (null)    24001       37000.48   (null)
If you execute the following query, the information for Susan is returned:
SELECT *
FROM Employee
WHERE Name='Susan';
The interesting thing about these two tables is that you can access instances of
both employees and people just by executing your SQL statement on the
Person table. This feature is called substitutability, and is discussed in
Additional Properties of Structured Types on page 303. By executing a query
on the table that contains instances that are higher in the type hierarchy, you
automatically get instances of types that are lower in the hierarchy. In other
words, the Person table logically looks like this to SELECT, UPDATE, and
DELETE statements:
Table 13. Person table contains Person and Employee instances

Oid   Name     Age   Address
a     Andrew   30    (null)
s     Susan    39    (null)
If you execute the following query, you get an object identifier and Person_t
information about both Andrew (a person) and Susan (an employee):
SELECT *
FROM Person;
[Figure: Dept column values of the Employee table are references (shown as
"(ref)") to rows of a BusinessUnit_t table, whose columns are OID, Name, and
Headcount and whose rows include the Toy, Shoe, Finance, and Quality
departments.]
Your application might store objects (such as people, employees, and
departments) in typed tables, but those objects might also have attributes that
are best modelled using a structured type.
For example, assume that your application has the need to access certain parts
of an address. Rather than store the address as an unstructured character
string, you can store it as a structured object as shown in Figure 10.
Figure 10. A Person object whose Name (VARCHAR) and Age (INT) attributes are
ordinary types, and whose Address attribute is of structured type Address_t with
attributes Street, Number, City, and State.
In the preceding example, the SET clause of the UPDATE statement invokes
the Number and Street mutator methods to update attributes of the instances
of type Address_t. The WHERE clause restricts the operation of the update
statement with two predicates: an equality comparison for the Name column,
and an equality comparison that invokes the State observer method of the
Address column.
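The UPDATE statement being described is not reproduced above; a statement matching that description might be sketched as follows, with the literal values as assumptions:

```sql
-- Mutators update attributes of the Address_t instance in the SET
-- clause; the WHERE clause uses the State observer (values assumed):
UPDATE Person
  SET Address..Number = '4869',
      Address..Street = 'Appletree'
  WHERE Name = 'Franky'
    AND Address..State = 'CA';
```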
Here is the SQL to create the tables in the Person table hierarchy:
CREATE TABLE Person OF Person_t
(REF IS Oid USER GENERATED);
CREATE TABLE Employee OF Employee_t UNDER Person
INHERIT SELECT PRIVILEGES
(SerialNum WITH OPTIONS NOT NULL,
Dept WITH OPTIONS SCOPE BusinessUnit );
CREATE TABLE Student OF Student_t UNDER Person
INHERIT SELECT PRIVILEGES;
CREATE TABLE Manager OF Manager_t UNDER Employee
INHERIT SELECT PRIVILEGES;
CREATE TABLE Architect OF Architect_t UNDER Employee
INHERIT SELECT PRIVILEGES;
The USER GENERATED keyword of the REF IS clause indicates that you must
provide the initial value for the
object identifier column of each newly inserted row. After you insert the object
identifier, you cannot update the value of the object identifier. For information
on configuring DB2 to automatically generate object identifiers, see Defining
System-generated Object Identifiers on page 319.
Specifying the Position in the Table Hierarchy
The Person typed table is of type Person_t. To store instances of the subtypes
of employees and students, it is necessary to create the subtables of the Person
table, Employee and Student. The two additional subtypes of Employee_t also
require tables. Those subtables are named Manager and Architect. Just as a
subtype inherits the attributes of its supertype, a subtable inherits the columns
of its supertable, including the object identifier column.
Note: A subtable must reside in the same schema as its supertable.
Rows in the Employee subtable, therefore, will have a total of seven columns:
Oid, Name, Age, Address, SerialNum, Salary, and Dept.
A SELECT, UPDATE, or DELETE statement that operates on a supertable
automatically operates on all its subtables as well. For example, an UPDATE
statement on the Employee table might affect rows in the Employee, Manager,
and Architect tables, but an UPDATE statement on the Manager table can
only affect Manager rows.
If you want to restrict the actions of the SELECT, UPDATE, or DELETE
statement to just the specified table, use the ONLY option, described in
Returning Objects of a Particular Type Using ONLY on page 317.
Indicating that SELECT Privileges are Inherited
The INHERIT SELECT PRIVILEGES clause of the CREATE TABLE statement
specifies that the resulting subtable, such as Employee, is initially accessible by
the same users and groups as the supertable, such as Person, from which it is
created using the UNDER clause. Any user or group currently holding
SELECT privileges on the supertable is granted SELECT privileges on the
newly created subtable. The creator of the subtable is the grantor of the
SELECT privileges. To specify privileges such as DELETE and UPDATE on
subtables, you must issue the same explicit GRANT or REVOKE statements
that you use to specify privileges on regular tables. For more information on
the INHERIT SELECT PRIVILEGES clause, refer to the SQL Reference.
Privileges may be granted and revoked independently at every level of a table
hierarchy. If you create a subtable, you can also revoke the inherited SELECT
privileges on that subtable. Revoking the inherited SELECT privileges from
the subtable prevents users with SELECT privileges on the supertable from
seeing any columns that appear only in the subtable. Revoking the inherited
SELECT privileges from the subtable limits users who only have SELECT
privileges on the supertable to seeing the supertable columns of the rows of
the subtable. Users can only operate directly on a subtable if they hold the
necessary privilege on that subtable. So, to prevent users from selecting the
bonuses of the managers in the subtable, revoke the SELECT privilege on that
table and grant it only to those users for whom this information is necessary.
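For example, a sketch of revoking the privilege and regranting it selectively (the authorization name payroll1 is an assumption):

```sql
-- Keep manager bonuses visible only to the users who need them:
REVOKE SELECT ON Manager FROM PUBLIC;
GRANT SELECT ON Manager TO USER payroll1;
```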
Defining Column Options
The WITH OPTIONS clause lets you define options that apply to an
individual column in the typed table. The format of WITH OPTIONS is:
column-name WITH OPTIONS column-options
where column-name represents the name of the column in the CREATE TABLE
or ALTER TABLE statement, and column-options represents the options defined
for the column.
For example, to prevent users from inserting nulls into a SerialNum column,
specify the NOT NULL column option as follows:
(SerialNum WITH OPTIONS NOT NULL)
Similarly, the clause Dept WITH OPTIONS SCOPE BusinessUnit in the CREATE
TABLE statement for the Employee table declares that the Dept column of this
table and its subtables has a scope of BusinessUnit. This means that the
reference values in this column of the Employee table are intended to refer to
objects in the BusinessUnit table.
For example, the following query on the Employee table uses the dereference
operator to tell DB2 to follow the path from the Dept column to the
BusinessUnit table. The dereference operator returns the value of the Name
column:
SELECT Name, Salary, Dept->Name
FROM Employee;
For more information about references and scoping references, see Using
Reference Types on page 308.
[Figure: the table hierarchy. BusinessUnit (Oid, Name, Headcount) stands alone;
Person (Oid, Name, Age, Address) has subtables Employee (..., SerialNum, Salary,
Dept) and Student (..., SerialNum, GPA); Employee has subtables Manager (...,
Bonus) and Architect (..., StockOption).]
When the hierarchy is established, you can use the INSERT statement, as
usual, to populate the tables. The only difference is that you must remember
to populate the object identifier columns and, optionally, any additional
attributes of the objects in each table or subtable. Because the object identifier
column is a REF type, which is strongly typed, you must cast the
user-provided object identifier values, using the cast function that the system
generated for you when you created the structured type.
INSERT INTO BusinessUnit (Oid, Name, Headcount)
VALUES(BusinessUnit_t(1), 'Toy', 15);
INSERT INTO BusinessUnit (Oid, Name, Headcount)
VALUES(BusinessUnit_t(2), 'Shoe', 10);
INSERT INTO Person (Oid, Name, Age)
VALUES(Person_t('a'), 'Andrew', 20);
INSERT INTO Person (Oid, Name, Age)
VALUES(Person_t('b'), 'Bob', 30);
INSERT INTO Person (Oid, Name, Age)
VALUES(Person_t('c'), 'Cathy', 25);
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept)
VALUES(Employee_t('d'), 'Dennis', 26, 105, 30000, BusinessUnit_t(1));
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept)
VALUES(Employee_t('e'), 'Eva', 31, 83, 45000, BusinessUnit_t(2));
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept)
VALUES(Employee_t('f'), 'Franky', 28, 214, 39000, BusinessUnit_t(2));
INSERT INTO Student (Oid, Name, Age, SerialNum, GPA)
VALUES(Student_t('g'), 'Gordon', 19, 10245, 4.7);
The previous example does not insert any addresses. For information about
how to insert structured type values into columns, see Inserting Rows that
Contain Structured Type Values on page 323.
When you insert rows into a typed table, the first value in each inserted row
must be the object identifier for the data being inserted into the tables. Also,
just as with non-typed tables, you must provide data for all columns that are
defined as NOT NULL. Finally, notice that any reference-valued expression of
the appropriate type can be used to initialize a reference attribute. In the
previous examples, the Dept reference of the employees is input as an
appropriately type-cast constant. However, you can also obtain the reference
using a subquery, as shown in the following example:
INSERT INTO Architect (Oid, Name, Age, SerialNum, Salary, Dept, StockOption)
VALUES(Architect_t('m'), 'Brian', 7, 882, 112000,
(SELECT Oid FROM BusinessUnit WHERE name = 'Toy'), 30000);
To compare a reference value with a value of its representation type, cast the
reference type to the base type, and then perform the comparison. All
references in a given type hierarchy have the same reference representation
type. This enables REF(S) and REF(T) to be compared, provided that S and T
have a common supertype. Because uniqueness of the object identifier column
is enforced only within a table hierarchy, it is possible that a value of REF(T)
in one table hierarchy may be equal to a value of REF(T) in another table
hierarchy, even though they reference different rows.
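For example, because the system-generated cast function BusinessUnit_t converts a representation-type value into a reference value, a comparison between a reference column and a constant might be sketched as:

```sql
-- Find the employees whose Dept reference points to the business unit
-- whose object identifier was created from the INTEGER value 1:
SELECT E.Name
  FROM Employee E
  WHERE E.Dept = BusinessUnit_t(1);
```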
Using References to Define Semantic Relationships
Using the WITH OPTIONS clause of CREATE TABLE, you can define that a
relationship exists between a column in one table and the objects in the same
or another table. For example, in the BusinessUnit and Person table
hierarchies, the department for each employee is actually a reference to an
object in the BusinessUnit table, as shown in Figure 12. To define the
destination objects of a given reference column, use the SCOPE keyword on
the WITH OPTIONS clause.
Figure 12. The Dept column of the Employee table refers to objects in the
BusinessUnit table (columns Oid, Name, Headcount).
create one typed table for parts and one typed table for suppliers. To show
the reference type definitions, the sample also includes the statements used to
create the types:
CREATE TYPE Company_t AS
(name VARCHAR(30),
location VARCHAR(30))
MODE DB2SQL ;
CREATE TYPE Part_t AS
(Descript VARCHAR(20),
Supplied_by REF(Company_t),
Used_in REF(Part_t))
MODE DB2SQL;
CREATE TABLE Suppliers OF Company_t
(REF IS suppno USER GENERATED);
CREATE TABLE Parts OF Part_t
(REF IS Partno USER GENERATED,
Supplied_by WITH OPTIONS SCOPE Suppliers,
Used_in WITH OPTIONS SCOPE Parts);
Figure 13. Example of a self-referencing scope. The Parts table (Partno, Descript,
Supplied_by, Used_in) is of type Part_t; its Supplied_by column refers to the
Suppliers table (Suppno, Name, Location) of type Company_t, and its Used_in
column refers back to the Parts table itself.
You can use scoped references to write queries that, without scoped
references, would have to be written as outer joins or correlated subqueries.
For more information, see Queries that Dereference References on page 315.
The OF clause in the CREATE VIEW statement tells DB2 to base the columns
of the view on the attributes of the indicated structured type. In this case, DB2
bases the columns of the view on the VBusinessUnit_t structured type.
The VObjectID column of the view has a type of REF(VBusinessUnit_t). Since
you cannot cast from a type of REF(BusinessUnit_t) to REF(VBusinessUnit_t),
you must first cast the value of the Oid column from table BusinessUnit to
data type VARCHAR, and then cast from data type VARCHAR to data type
REF(VBusinessUnit_t).
The MODE DB2SQL clause specifies the mode of the typed view. This is the
only valid mode currently supported.
The REF IS... clause is identical to that of the typed CREATE TABLE
statement. It provides a name for the object identifier column of the view
(VObjectID in this case), which is the first column of the view. If you create a
typed view on a root type, you must specify an object identifier column for
the view. If you create a typed view on a subtype, your view can inherit the
object identifier column.
The USER GENERATED clause specifies that the initial value for the object
identifier column must be provided by the user when inserting a row. Once
inserted, the object identifier column cannot be updated.
The body of the view, which follows the keyword AS, is a SELECT statement
that determines the content of the view. The column-types returned by this
SELECT statement must be compatible with the column-types of the typed
view, including the initial object identifier column.
To illustrate the creation of a typed view hierarchy, the following example
defines a view hierarchy that omits some sensitive data and eliminates some
type distinctions from the Person table hierarchy created earlier under
Creating a Typed Table on page 304:
CREATE TYPE VPerson_t AS (Name VARCHAR(20))
MODE DB2SQL;
CREATE TYPE VEmployee_t UNDER VPerson_t
AS (Salary INT, Dept REF(VBusinessUnit_t))
MODE DB2SQL;
CREATE VIEW VPerson OF VPerson_t MODE DB2SQL
(REF IS VObjectID USER GENERATED)
AS SELECT VPerson_t (VARCHAR(Oid)), Name FROM ONLY(Person);
CREATE VIEW VEmployee OF VEmployee_t MODE DB2SQL
UNDER VPerson INHERIT SELECT PRIVILEGES
(Dept WITH OPTIONS SCOPE VBusinessUnit)
AS SELECT VEmployee_t(VARCHAR(Oid)), Name, Salary,
VBusinessUnit_t(VARCHAR(Dept))
FROM Employee;
The two CREATE TYPE statements create the structured types that are needed
to create the object view hierarchy for this example.
The first typed CREATE VIEW statement above creates the root view of the
hierarchy, VPerson, and is very similar to the VBusinessUnit view definition.
The difference is the use of ONLY(Person) to ensure that only the rows in the
Person table hierarchy that are in the Person table, and not in any subtable,
are included in the VPerson view. This ensures that the Oid values in VPerson
are unique compared with the Oid values in VEmployee. The second CREATE
VIEW statement creates a subview VEmployee under the view VPerson. As was
the case for the UNDER clause in the CREATE TABLE...UNDER statement,
the UNDER clause establishes the view hierarchy. You must create a subview
in the same schema as its superview. Like typed tables, subviews inherit
columns from their superview. Rows in the VEmployee view inherit the
columns VObjectID and Name from VPerson and have the additional columns
Salary and Dept associated with the type VEmployee_t.
The INHERIT SELECT PRIVILEGES clause has the same effect when you
issue a CREATE VIEW statement as when you issue a typed CREATE TABLE
statement. For more information on the INHERIT SELECT PRIVILEGES
clause, see Indicating that SELECT Privileges are Inherited on page 305. The
WITH OPTIONS clause in a typed view definition also has the same effect as
it does in a typed table definition. The WITH OPTIONS clause enables you to
specify column options such as SCOPE. The READ ONLY clause forces a
superview column to be marked as read-only, so that subsequent subview
definitions can specify an expression for the same column that is also
read-only.
If a view has a reference column, like the Dept column of the VEmployee view,
you must associate a scope with the column to use the column in SQL
dereference operations. If you do not specify a scope for the reference column
of the view and the underlying table or view column is scoped, then the
scope of the underlying column is passed on to the reference column of the
view. You can explicitly assign a scope to the reference column of the view by
using the WITH OPTIONS clause. In the previous example, the Dept column
of the VEmployee view receives the VBusinessUnit view as its scope. If the
underlying table or view column does not have a scope, and no scope is
explicitly assigned in the view definition, or no scope is assigned with an
ALTER VIEW statement, the reference column remains unscoped.
There are several important rules associated with restrictions on the queries
for typed views found in the SQL Reference that you should read carefully
before attempting to create and use a typed view.
TRANSFORM statement, refer to the SQL Reference. Note that you can only
drop user-defined transforms. You cannot drop built-in transforms or their
associated group definitions.
Any views that are dependent on the dropped view become inoperative. For
more information on inoperative views, refer to the Recovering Inoperative
Views section of the Administration Guide.
Other database objects such as tables and indexes will not be affected
although packages and cached dynamic statements are marked invalid. For
more information, refer to the Statement Dependencies section of the
Administration Guide.
As in the case of a table hierarchy, it is possible to drop an entire view
hierarchy in one statement by naming the root view of the hierarchy, as in the
following example:
DROP VIEW HIERARCHY VPerson;
For more information on dropping and creating views, refer to the SQL
Reference.
AGE
-----------
29
30
25
26
31
28
19
20
35
10
55
35
7
39
The following query uses the dereference operator to obtain the Name column
from the BusinessUnit table:
SELECT Name, Salary, Dept->Name
FROM Employee
NAME                 SALARY      NAME
-------------------- ----------- --------------------
Eva                        45000 Shoe
Franky                     39000 Shoe
Iris                       55000 Toy
Christina                  85000 Toy
Ken                       105000 Shoe
Leo                        92000 Shoe
Brian                     112000 Toy
Susan                   37000.48 ---
The preceding example uses the dereference operator to return the value of
Name from the Employee table, and invokes the DEREF function to return the
dynamic type for the instance of Employee_t.
For more information about the built-in functions described in this section,
refer to the SQL Reference.
Authorization requirement: To use the DEREF function, you must have SELECT
authority on every table and subtable in the referenced portion of the table
hierarchy. In the above query, for example, you need SELECT privileges on
the Employee, Manager, and Architect typed tables.
To protect the security of the data, the use of ONLY requires the SELECT
privilege on every subtable of Employee.
You can also use the ONLY clause to restrict the operation of an UPDATE or
DELETE statement to the named table. That is, the ONLY clause ensures that
the operation does not occur on any subtables of that named table.
Restricting Returned Types Using a TYPE Predicate
If you want a more general way to restrict what rows are returned or affected
by an SQL statement, you can use the type predicate. The type predicate
enables you to compare the dynamic type of an expression to one or more
named types. A simple version of the type predicate is:
<expression> IS OF (<type_name>[, ...])
For example, the following query returns people who are greater than 35
years old, and who are either managers or architects:
SELECT Name
FROM Employee E
WHERE E.Age > 35 AND
DEREF(E.Oid) IS OF (Manager_t, Architect_t);
Suppose that your application needs to see not just the attributes of these high
achievers, but what the most specific type is for each one. You can do this in a
single query by passing the object identifier of an object to the TYPE_NAME
built-in function and combining it with an OUTER query, as follows:
SELECT TYPE_NAME(DEREF(P.Oid)), P.*
FROM OUTER(Person) P
WHERE P.Salary > 200000 OR
P.GPA > 3.95 ;
Because the Address column of the Person typed table contains structured
types, you would have to define additional functions and issue additional
SQL to return the data from that column. For more information on returning
data from a structured type column, see Retrieving and Modifying
Structured Type Values on page 324. Assuming you perform these additional
steps, the preceding query returns the following output, where Additional
Attributes includes GPA and Salary:
1                  OID           NAME                 Additional Attributes
------------------ ------------- -------------------- ---------------------
PERSON_T           a             Andrew               ...
PERSON_T           b             Bob                  ...
PERSON_T           c             Cathy                ...
EMPLOYEE_T         d             Dennis               ...
EMPLOYEE_T         e             Eva                  ...
EMPLOYEE_T         f             Franky               ...
MANAGER_T          i             Iris                 ...
ARCHITECT_T        l             Leo                  ...
EMPLOYEE_T         s             Susan                ...
Note that you must always provide the clause USER GENERATED.
An INSERT statement to insert a row into the typed table, then, might look
like this:
INSERT INTO BusinessUnit (Oid, Name, Headcount)
VALUES(BusinessUnit_t(GENERATE_UNIQUE( )), 'Toy', 15);
To insert an employee that belongs to the Toy department, you can use a
statement like the following, which issues a subselect to retrieve the value of
the object identifier column from the BusinessUnit table, casts the value to the
BusinessUnit_t type, and inserts that value into the Dept column:
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept)
VALUES(Employee_t('d'), 'Dennis', 26, 105, 30000,
BusinessUnit_t(SELECT Oid FROM BusinessUnit WHERE Name='Toy'));
The figure at this point shows a typed table with columns OID, Name, and
Mgr (ref), and the Address_t structured type hierarchy: Address_t (Street,
Number, City, State), with subtypes Brazil_addr_t (adds Neighborhood),
Germany_addr_t (adds Family_name), and US_addr_t (adds Zipcode).
The proper syntax for inserting the NAME attribute into column C1 is:
Unless you are concerned with how structured types are laid out in the data
record, there is no additional syntax for creating tables with columns of
structured types. For example, the following statement adds a column of
Address_t type to a Customer_List untyped table:
ALTER TABLE Customer_List
ADD COLUMN Address Address_t;
Person_t can be used as the type of a table, the type of a column in a regular
table, or as an attribute of another structured type.
Note: DB2 enables you to invoke methods that take no parameters using
either <type-name>..<method-name>() or <type-name>..<method-name>,
where type-name represents the name of the structured type, and
method-name represents the name of the method that takes no
parameters.
You can also use observer methods to select each attribute into a host variable,
as follows:
SELECT Name, Dept, Address..street, Address..number, Address..city,
Address..state
INTO :name, :dept, :street, :number, :city, :state
FROM Employee
WHERE Empno = 000250;
Note: You can only use the preceding approach to determine the subtype of a
structured type when the attributes of the subtype are all of the same
type, or can be cast to the same type. In the previous example, zip,
family_name, and neighborhood are all VARCHAR or CHAR types, and
can be cast to the same type.
For more information about the syntax of the TREAT expression or the TYPE
predicate, refer to the SQL Reference.
Modifying Attributes
To change an attribute of a structured column value, invoke the mutator
method for the attribute you want to change. For example, to change the
street attribute of an address, you can invoke the mutator method for street
with the value to which it will be changed. The returned value is an address
with the new value for street. The following example invokes a mutator
method for the attribute named street to update an address type in the
Employee table:
UPDATE Employee
SET Address = Address..street('Bailey')
WHERE Address..street = 'Bakely';
The following example performs the same update as the previous example,
but instead of naming the structured column for the update, the SET clause
directly accesses the mutator method for the attribute named street:
UPDATE Employee
SET Address..street = 'Bailey'
WHERE Address..street = 'Bakely';
You can associate additional functions with the Address_t type by adding
more groups on the CREATE TRANSFORM statement. To alter the transform
definition, you must reissue the CREATE TRANSFORM statement with the
additional functions. For example, you might want to customize your client
functions for different host language programs, such as having one for C and
one for Java. To optimize the performance of your application, you might
want your transforms to work only with a subset of the object attributes. Or
you might want one transform that uses VARCHAR as the client
representation for an object and one transform that uses BLOB.
Use the SQL statement DROP TRANSFORM to disassociate transform
functions from types. After you execute the DROP TRANSFORM statement,
the functions will still exist, but they will no longer be used as transform
functions for this type. The following example disassociates the specific group
of transform functions func_group for the Address_t type, and then
disassociates all transform functions for the Address_t type:
DROP TRANSFORMS func_group FOR Address_t;
DROP TRANSFORMS ALL FOR Address_t;
Once you set the transform group to func_group in the appropriate situation,
as described in Where Transform Groups Must Be Specified, DB2 invokes
the correct transform function whenever you bind in or bind out an address
or polygon.
Restriction: Do not begin a transform group with the string SYS; this group
is reserved for use by DB2.
When you define an external function or method and you do not specify a
transform group name, DB2 attempts to use the name DB2_FUNCTION, and
assumes that that group name was specified for the given structured type. If
you do not specify a group name when you precompile a client program that
references a given structured type, DB2 attempts to use a group name called
DB2_PROGRAM, and again assumes that the group name was defined for
that type.
This default behavior is convenient in some cases, but in a more complex
database schema, you might want a slightly more extensive convention for
transform group names. For example, it may help you to use different group
names for different languages to which you might bind out the type.
For more information on the PRECOMPILE and BIND commands, refer to the
Command Reference.
The following example issues an SQL statement that invokes an external UDF
called MYUDF that takes an address as an input parameter, modifies the address
(to reflect a change in street names, for example), and returns the modified
address:
SELECT MYUDF(Address)
FROM PERSON;
The figure at this point shows the address passed to MYUDF decomposed into
its base attribute types (VARCHAR, CHAR, VARCHAR, VARCHAR) on both the
input and the output side of the routine.
1. Your FROM SQL transform function decomposes the structured object into
an ordered set of its base attributes. This enables the routine to receive the
object as a simple list of parameters whose types are basic built-in data
types. For example, assume that you want to pass an address object to an
external routine. The attributes of Address_t are VARCHAR, CHAR,
VARCHAR, and VARCHAR, in that order. The FROM SQL transform for
passing this object to a routine must accept this object as an input and
return VARCHAR, CHAR, VARCHAR, and VARCHAR. These outputs are
then passed to the external routine as four separate parameters, with four
corresponding null indicator parameters, and a null indicator for the
structured type itself. The order of parameters in the FROM SQL function
does not matter, as long as all functions that return Address_t types use
the same order. For more information, see Passing Structured Type
Parameters to External Routines on page 334.
2. Your external routine accepts the decomposed address as its input
parameters, does its processing on those values, and then returns the
attributes as output parameters.
3. Your TO SQL transform function must turn the VARCHAR, CHAR,
VARCHAR, and VARCHAR parameters that are returned from MYUDF back
into an object of type Address_t. In other words, the TO SQL transform
function must take the four parameters, and all of the corresponding null
indicator parameters, as output values from the routine. The TO SQL
function constructs the structured object and then mutates the attributes
with the given values.
Note: If MYUDF also returns a structured type, another transform function must
transform the resultant structured type when the UDF is used in a
SELECT clause. To avoid creating another transform function, you can
use SELECT statements with observer methods, as in the following
example:
SELECT Name
FROM Employee
WHERE MYUDF(Address)..city LIKE 'Tor%';
The following list explains the syntax of the preceding CREATE FUNCTION
statement:
1. The signature of this function indicates that it accepts one parameter, an
object of type Address_t.
2. The RETURNS ROW clause indicates that the function returns a row
containing four columns: Street, Number, City, and State.
3. The LANGUAGE SQL clause indicates that this is an SQL-bodied function,
not an external function.
4. The RETURN clause marks the beginning of the function body. The
body consists of a single VALUES clause that invokes the observer method
for each attribute of the Address_t object. The observer methods
decompose the object into a set of base types, which the function returns
as a row.
DB2 does not know that you intend to use this function as a transform
function. Until you create a transform group that uses this function, and then
specify that transform group in the appropriate situation, DB2 cannot use the
function as a transform function. For more information, see Associating
Transforms with a Type on page 326.
The TO SQL transform simply does the opposite of the FROM SQL function.
It takes as input the list of parameters from a routine and returns an instance
of the structured type. To construct the object, the following TO SQL
function invokes the constructor function for the Address_t type:
CREATE FUNCTION functoaddress (street VARCHAR(30), number CHAR(15),
city VARCHAR(30), state VARCHAR(10)) 1
RETURNS Address_t 2
LANGUAGE SQL
CONTAINS SQL
RETURN
Address_t()..street(street)..number(number)
..city(city)..state(state) 3
3. The function constructs the object from the input types by invoking the
constructor for Address_t and the mutators for each of the attributes.
The order of parameters in the FROM SQL function does not matter, other
than that all functions that return addresses must use this same order.
Passing Structured Type Parameters to External Routines: When you pass
structured type parameters to an external routine, you should pass a
parameter for each attribute. You must pass a null indicator for each
parameter and a null indicator for the structured type itself. The following
example accepts the structured type Address_t and returns a base type:
CREATE FUNCTION stream_to_client (Address_t)
RETURNS VARCHAR(150) ...
The external routine must accept the null indicator for the instance of the
Address_t type (address_ind) and one null indicator for each of the attributes
of the Address_t type. There is also a null indicator for the VARCHAR output
parameter. The following code represents the C language function headers for
the functions which implement the UDFs:
void SQL_API_FN stream_to_client(
/*decomposed address*/
SQLUDF_VARCHAR *street,
SQLUDF_CHAR *number,
SQLUDF_VARCHAR *city,
SQLUDF_VARCHAR *state,
SQLUDF_VARCHAR *output,
/*null indicators for type attributes*/
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
/*null indicator for instance of the type*/
SQLUDF_NULLIND *address_ind,
/*null indicator for the VARCHAR output*/
SQLUDF_NULLIND *out_ind,
SQLUDF_TRAIL_ARGS)
The figure at this point shows the structured types used in this example:
ST1, with attributes st1_att1 VARCHAR and st1_att2 INTEGER; ST2, with
attributes st2_att1 VARCHAR, st2_att2 CHAR, and st2_att3 INTEGER; and ST3,
with attributes st3_att1 VARCHAR and st3_att2 CLOB.
The following code represents the C language headers for routines which
implement the UDFs. The arguments include variables and null indicators for
the attributes of the decomposed structured type and a null indicator for each
instance of a structured type, as follows:
void SQL_API_FN myudf(
SQLUDF_INTEGER *INT,
/* Decompose st1 input */
SQLUDF_VARCHAR *st1_att1,
SQLUDF_INTEGER *st1_att2,
/* Decompose st2 input */
SQLUDF_VARCHAR *st2_att1,
SQLUDF_CHAR    *st2_att2,
SQLUDF_INTEGER *st2_att3,
/* Decompose st3 output */
SQLUDF_VARCHAR *st3_att1out,
SQLUDF_CLOB    *st3_att2out,
/* Null indicator of integer*/
SQLUDF_NULLIND *INT_ind,
/* Null indicators of st1 attributes and type*/
SQLUDF_NULLIND *st1_att1_ind,
SQLUDF_NULLIND *st1_att2_ind,
SQLUDF_NULLIND *st1_ind,
/* Null indicators of st2 attributes and type*/
SQLUDF_NULLIND *st2_att1_ind,
SQLUDF_NULLIND *st2_att2_ind,
SQLUDF_NULLIND *st2_att3_ind,
SQLUDF_NULLIND *st2_ind,
/* Null indicators of st3_out attributes and type*/
SQLUDF_NULLIND *st3_att1_ind,
SQLUDF_NULLIND *st3_att2_ind,
SQLUDF_NULLIND *st3_ind,
/* trailing arguments */
SQLUDF_TRAIL_ARGS
)
FROM Person
INTO :addhv
WHERE AGE > 25
END EXEC;
Figure 17 shows the process of binding out that address to the client program.
Figure 17. Binding out a structured type to a client application. The server
decomposes and encodes the address; after retrieving the address as a
VARCHAR, the client can decode its attributes and access them as desired.
1. The object must first be passed to the FROM SQL function transform to
decompose it into its base type attributes.
2. Your FROM SQL client transform must encode the value into a single
built-in type, such as a VARCHAR or BLOB. This enables the client
program to receive the entire value in a single host variable.
This encoding can be as simple as copying the attributes into a contiguous
area of storage (providing for required alignments as necessary). Because
the encoding and decoding of attributes cannot generally be achieved with
SQL, client transforms are usually written as external UDFs.
For information about processing data between platforms, see Data
Conversion Considerations on page 339.
3. The client program processes the value.
Figure 18 shows the reverse process of passing the address back to the
database.
Figure 18. Passing a structured type from a client application to the database
(the TO SQL function transform constructs the Address_t instance on the
server).
1. The client application encodes the address into a format expected by the
TO SQL client transform.
2. The TO SQL client transform decomposes the single built-in type into a set
of its base type attributes, which is used as input to the TO SQL function
transform.
3. The TO SQL function transform constructs the address and returns it to
the database.
Implementing Client Transforms Using External UDFs: Register the client
transforms the same way as any other external UDF. For example, assume
that you have written external UDFs that do the appropriate encoding and
decoding for an address. Suppose that you have named the FROM SQL client
transform from_sql_to_client and the TO SQL client transform
to_sql_from_client. In both of these cases, the output of the functions are in
a format that can be used as input by the appropriate FROM SQL and TO
SQL function transforms.
Chapter 12. Working with Complex Objects: User-Defined Structured Types
Client Transform for Binding in from a Client: The following DDL registers a
function that takes the VARCHAR-encoded object from the client, decomposes
it into its various base type attributes, and passes it to the TO SQL function
transform.
CREATE FUNCTION to_sql_from_client (VARCHAR (150))
RETURNS Address_t
LANGUAGE C
TRANSFORM GROUP func_group
EXTERNAL NAME 'addressudf!address_to_sql_from_client'
NOT VARIANT
NO EXTERNAL ACTION
NOT FENCED
NO SQL
PARAMETER STYLE DB2SQL;
Transform        What is being        Behavior    Dependent on another
direction        transformed                      transform?
---------------  -------------------  ----------  ---------------------
FROM SQL         Routine parameter    Decomposes  No
(function)
TO SQL           Routine result       Constructs  No
(function)
FROM SQL         Output host          Encodes     Yes: FROM SQL
(client)         variable                         UDF transform
TO SQL           Input host           Decodes     Yes: TO SQL
(client)         variable                         UDF transform
Note: Although not generally the case, client type transforms can actually be
written in SQL if any of the following are true:
v The structured type contains only one attribute.
v The encoding and decoding of the attributes into a built-in type can
be achieved by some combination of SQL operators or functions.
In these cases, you do not have to depend on function transforms to
exchange the values of a structured type with a client application.
Step 4. Create a SQL-bodied UDF that chooses the correct external UDF to
process the instance. The following UDF uses the TREAT specification
in SELECT statements combined by a UNION ALL clause to invoke
the correct FROM SQL client transform:
CREATE FUNCTION addr_stream (ab Address_t)
RETURNS VARCHAR(150)
LANGUAGE SQL
RETURN
WITH temp(addr) AS
(SELECT address_to_client(ta.a)
FROM TABLE (VALUES (ab)) AS ta(a)
WHERE ta.a IS OF (ONLY Address_t)
UNION ALL
SELECT address_to_client(TREAT (tb.a AS US_addr_t))
FROM TABLE (VALUES (ab)) AS tb(a)
WHERE tb.a IS OF (ONLY US_addr_t))
SELECT addr FROM temp;
Step 5. Add the addr_stream UDF as a FROM SQL client transform
for Address_t:
CREATE TRANSFORM GROUP FOR Address_t
client_group (FROM SQL
WITH FUNCTION Addr_stream)
When DB2 binds the application that contains the SELECT Address FROM
Person INTO :hvar statement, DB2 looks for a FROM SQL client transform.
DB2 recognizes that a structured type is being bound out, and looks in the
transform group client_group because that is the TRANSFORM GROUP
specified at bind time in 6.
To execute the INSERT statement for a structured type, your application must
perform the following steps:
Step 1. Create a TO SQL function transform for each variation of address.
The following example shows SQL-bodied UDFs that transform the
Address_t and US_addr_t types:
CREATE FUNCTION functoaddress
(str VARCHAR(30), num CHAR(15), cy VARCHAR(30), st VARCHAR (10))
RETURNS Address_t
LANGUAGE SQL
RETURN Address_t()..street(str)..number(num)..city(cy)..state(st);
CREATE FUNCTION functoaddress
(str VARCHAR(30), num CHAR(15), cy VARCHAR(30), st VARCHAR (10),
zp CHAR(10))
RETURNS US_addr_t
LANGUAGE SQL
RETURN US_addr_t()..street(str)..number(num)..city(cy)
..state(st)..zip(zp);
Step 3. Create external UDFs that return the encoded address types, one for
each type variation.
Register the external UDF for the Address_t type:
CREATE FUNCTION client_to_address (encoding VARCHAR(150))
RETURNS Address_t
LANGUAGE C
TRANSFORM GROUP funcgroup1
...
EXTERNAL NAME 'address!client_to_address';
/* Null indicators */
SQLUDF_NULLIND *encoding_ind,
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
SQLUDF_NULLIND *address_ind,
SQLUDF_TRAIL_ARGS )
char c[150];
char *pc;
strcpy(c, encoding);
pc = strtok (c, ":]");
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (street, pc);
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (number, pc);
char c[150];
char *pc;
strcpy(c, encoding);
pc = strtok (c, ":]");
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (street, pc);
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strncpy (number, pc,14);
pc = strtok (NULL, ":]");
Step 4. Create a SQL-bodied UDF that chooses the correct external UDF for
processing that instance. The following UDF examines the type name
embedded in the encoded value to invoke the correct client transform:
CREATE FUNCTION stream_address (ENCODING VARCHAR(150))
RETURNS Address_t
LANGUAGE SQL
RETURN
(CASE SUBSTR(ENCODING, 2, POSSTR(ENCODING, ']') - 2)
WHEN 'address_t'
THEN client_to_address(ENCODING)
WHEN 'us_addr_t'
THEN client_to_us_addr(ENCODING)
ELSE NULL
END);
Step 6. Bind the application with the TRANSFORM GROUP option set to
client_group.
PREP myProgram2 TRANSFORM GROUP client_group
When the application containing the INSERT statement with a structured type
is bound, DB2 looks for a TO SQL client transform. DB2 looks for the
transform in the transform group client_group because that is the
TRANSFORM GROUP specified at bind time in 6. DB2 finds the transform
function it needs: stream_address, which is associated with the root type
Address_t in 5.
stream_address is a SQL-bodied function, defined in 4, so it has no stated
dependency on any additional transform function. For input parameters,
stream_address accepts VARCHAR(150), which corresponds to the application
host variable :hvaddr. stream_address returns a value that is both of the
correct root type, Address_t, and of the correct dynamic type.
C Sample: LOBEVAL.SQC . . . . . . . 361
COBOL Sample: LOBEVAL.SQB . . . . . 363
Indicator Variables and LOB Locators . . 366
LOB File Reference Variables . . . . . 366
Example: Extracting a Document To a File 368
How the Sample LOBFILE Program Works 368
C Sample: LOBFILE.SQC . . . . . . . 369
COBOL Sample: LOBFILE.SQB . . . . . 370
Example: Inserting Data Into a CLOB Column 372
Note: DB2 offers LOB support for JDBC and SQLJ applications. For more
information on using LOBs in Java applications, see JDBC 2.0 on
page 649.
The subsections that follow discuss in more detail those topics introduced
above.
types can fit in a single row. The space used by the LOB descriptor in the row
ranges from approximately 60 to 300 bytes, depending on the maximum size
of the corresponding column. For specific sizes of the LOB descriptor, refer to
the CREATE TABLE statement in the SQL Reference.
The lob-options-clause on CREATE TABLE gives you the choice to log (or
not) the changes made to the LOB column(s). This clause also allows for a
compact representation for the LOB descriptor (or not). This means you can
allocate only enough space to store the LOB or you can allocate extra space
for future append operations to the LOB. The tablespace-options-clause
allows you to identify a LONG table space to store the column values of long
field or LOB data types. For more information on the CREATE TABLE and ALTER
TABLE statements, refer to the SQL Reference.
With their potentially very large size, LOBs can slow down the performance
of your database system significantly when moved into or out of a database.
Even though DB2 does not allow logging of a LOB value greater than 1 GB,
LOB values with sizes near several hundred megabytes can quickly push the
database log to near capacity. An error, SQLCODE -355 (SQLSTATE 42993),
results from attempting to log a LOB greater than 1 GB in size. The
lob-options-clause in the CREATE TABLE and ALTER TABLE statements allows
users to turn off logging for a particular LOB column. Although setting the
option to NOT LOGGED improves performance, changes to the LOB values after
the most recent backup are lost during roll-forward recovery. For more
information on these topics, refer to the Administration Guide.
LOB locators can also be passed between DB2 and UDFs. There are special
APIs available for UDFs to manipulate the LOB values using LOB locators.
For more information on these APIs see Using LOB Locators as UDF
Parameters or Results on page 443.
When selecting a LOB value, you have three options:
v Select the entire LOB value into a host variable. The entire LOB value is
copied from the server to the client.
v Select just a LOB locator into a host variable. The LOB value remains on the
server; the LOB locator moves to the client.
v Select the entire LOB value into a file reference variable. The LOB value is
moved to a file at the client without going through the application's
memory.
The use of the LOB value within the program can help the programmer
determine which method is best. If the LOB value is very large and is needed
only as an input value for one or more subsequent SQL statements, then it is
best to keep the value in a locator. The use of a locator eliminates any
client/server communication traffic needed to transfer the LOB value to the
host variable and back to the server.
If the program needs the entire LOB value regardless of the size, then there is
no choice but to transfer the LOB. Even in this case, there are still three
options available to you. You can select the entire value into a regular or file
host variable, but it may also work out better to select the LOB value into a
locator and read it piecemeal from the locator into a regular host variable, as
suggested in the following example.
COBOL
FORTRAN
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Sample: LOBLOC.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
} else {
/* EVALUATE the LOB LOCATOR */
/* Locate the beginning of "Department Information" section */
EXEC SQL VALUES (POSSTR(:resume, 'Department Information'))
INTO :deptInfoBeginLoc;
EMB_SQL_CHECK("VALUES1");
/* Locate the beginning of "Education" section (end of "Dept.Info") */
EXEC SQL VALUES (POSSTR(:resume, 'Education'))
INTO :deptInfoEndLoc;
EMB_SQL_CHECK("VALUES2");
/* Obtain ONLY the "Department Information" section by using SUBSTR */
EXEC SQL VALUES(SUBSTR(:resume, :deptInfoBeginLoc,
:deptInfoEndLoc - :deptInfoBeginLoc)) INTO :deptBuffer;
EMB_SQL_CHECK("VALUES3");
/* Append the "Department Information" section to the :buffer var. */
EXEC SQL VALUES(:buffer || :deptBuffer) INTO :buffer;
EMB_SQL_CHECK("VALUES4");
} /* endif */
} while ( 1 );
printf ("%s\n",buffer);
EXEC SQL FREE LOCATOR :resume, :deptBuffer; 3
EMB_SQL_CHECK("FREE LOCATOR");
EXEC SQL CLOSE c1;
EMB_SQL_CHECK("CLOSE CURSOR");
EXEC SQL CONNECT RESET;
EMB_SQL_CHECK("CONNECT RESET");
return 0;
}
/* end of program : LOBLOC.SQC */
1
pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: LOBLOC".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT statement must be entered in a VARCHAR
* format with the length of the input string.
inspect passwd-name tallying passwd-length
for characters before initial " ".
3
2
stop run.
COBOL
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Sample: LOBEVAL.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utilemb.h"
INTO :hv_start_educ;
EMB_SQL_CHECK("VALUES2");
/* Replace Department Information Section with nothing */
EXEC SQL VALUES (SUBSTR(:hv_doc_locator1, 1, :hv_start_deptinfo -1)
|| SUBSTR (:hv_doc_locator1, :hv_start_educ))
INTO :hv_doc_locator2;
EMB_SQL_CHECK("VALUES3");
/* Move Department Information Section into the hv_new_section_buffer */
EXEC SQL VALUES (SUBSTR(:hv_doc_locator1, :hv_start_deptinfo,
:hv_start_educ -:hv_start_deptinfo)) INTO :hv_new_section_buffer;
EMB_SQL_CHECK("VALUES4");
/* Append our new section to the end (assume it has been filled in)
Effectively, this just moves the Department Information to the bottom
of the resume. */
EXEC SQL VALUES (:hv_doc_locator2 || :hv_new_section_buffer) INTO
:hv_doc_locator3;
EMB_SQL_CHECK("VALUES5");
/* Store this resume section in the table. This is where the LOB value
bytes really move */
EXEC SQL INSERT INTO emp_resume VALUES ('A00130', 'ascii',
:hv_doc_locator3); 4
EMB_SQL_CHECK("INSERT");
printf ("LOBEVAL completed\n");
/* free the locators */ 5
EXEC SQL FREE LOCATOR :hv_doc_locator1, :hv_doc_locator2, :hv_doc_locator3;
EMB_SQL_CHECK("FREE LOCATOR");
EXEC SQL CONNECT RESET;
EMB_SQL_CHECK("CONNECT RESET");
return 0;
}
/* end of program : LOBEVAL.SQC */
01 passwd-name pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: LOBEVAL".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT statement must be entered in a VARCHAR
* format with the length of the input string.
inspect passwd-name tallying passwd-length
for characters before initial " ".
v SQL_FILE_READ (Regular file) This is a file that can be opened, read, and
closed. DB2 determines the length of the data in the file (in bytes) when
opening the file. DB2 then returns the length through the data_length field
of the file reference variable structure. (The value for COBOL is
SQL-FILE-READ, and for FORTRAN is sql_file_read.)
Values and options when using output file reference variables are as follows:
v SQL_FILE_CREATE (New file) This option creates a new file. Should the
file already exist, an error message is returned. (The value for COBOL is
SQL-FILE-CREATE, and for FORTRAN is sql_file_create.)
v SQL_FILE_OVERWRITE (Overwrite file) This option creates a new file
if none already exists. If the file already exists, the new data overwrites the
data in the file. (The value for COBOL is SQL-FILE-OVERWRITE, and for
FORTRAN is sql_file_overwrite.)
v SQL_FILE_APPEND (Append file) This option has the output appended
to the file, if it exists. Otherwise, it creates a new file. (The value for
COBOL is SQL-FILE-APPEND, and for FORTRAN is sql_file_append.)
Notes:
1. In an Extended UNIX Code (EUC) environment, the files to which
DBCLOB file reference variables point are assumed to contain valid EUC
characters appropriate for storage in a graphic column, and to never
contain UCS-2 characters. For more information on DBCLOB files in an
EUC environment, see Considerations for DBCLOB Files on page 525.
2. If a LOB file reference variable is used in an OPEN statement, the file
associated with the LOB file reference variable must not be deleted until
the cursor is closed.
For more information on file reference variables, refer to the SQL Reference.
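As an illustration of these options, the following sketch writes a LOB column out to a file using an output file reference variable with SQL_FILE_OVERWRITE. This is not one of the sample programs; the file path is an assumption, and the table and column names are borrowed from the EMP_RESUME examples earlier in this chapter:

```
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB_FILE resume_file;   /* file reference variable */
EXEC SQL END DECLARE SECTION;

/* Name the target file and request overwrite semantics. */
strcpy(resume_file.name, "/tmp/resume.txt");          /* path is illustrative */
resume_file.name_length = strlen(resume_file.name);
resume_file.file_options = SQL_FILE_OVERWRITE;

/* DB2 writes the LOB value directly into the named file. */
EXEC SQL SELECT resume INTO :resume_file
  FROM emp_resume WHERE empno = 'A00130';
```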
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Sample: LOBFILE.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sql.h>
#include "utilemb.h"
}
/* end of program : LOBFILE.SQC */
01 passwd-name pic x(80).
Procedure Division.
Main Section.
display "Sample COBOL program: LOBFILE".
* Get database connection information.
display "Enter your user id (default none): "
with no advancing.
accept userid.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Passwords in a CONNECT statement must be entered in a VARCHAR
* format with the length of the input string.
inspect passwd-name tallying passwd-length
for characters before initial " ".
v Object-relational support.
As discussed in Chapter 11. User-defined Distinct Types on page 281 and
Chapter 12. Working with Complex Objects: User-Defined Structured
Types on page 291, distinct types and structured types can be very useful
in extending the capability and increasing the safety of DB2. You can create
methods that define the behavior for structured types stored in columns.
You can also create functions that act on distinct types.
In this case, only the rows of interest are passed across the interface
between the application and the database. For large tables, or for cases
where SELECTION_CRITERIA supplies significant filtering, the performance
improvement can be very significant.
Another case where a UDF can offer a performance benefit is when dealing
with Large Objects (LOB). If you have a function whose purpose is to
extract some information from a value of one of the LOB types, you can
perform this extraction right on the database server and pass only the
extracted value back to the application. This is more efficient than passing
the entire LOB value back to the application and then performing the
extraction. The performance value of packaging this function as a UDF
could be enormous, depending on the particular situation. (Note that you
can also extract a portion of a LOB by using a LOB locator. See Example:
Deferring the Evaluation of a LOB Expression on page 359 for an example
of a similar scenario.)
In addition, you can use the RETURNS TABLE clause of the CREATE
FUNCTION statement to define UDFs called table functions. Table functions
enable you to very efficiently use relational operations and the power of
SQL on data that resides outside a DB2 database (including non-relational
data stores). A table function takes individual scalar values of different
types and meanings as its arguments, and returns a table to the SQL
statement that invokes it. You can write table functions that generate only
the data in which you are interested, eliminating any unwanted rows or
columns. For more information on table functions, including rules on where
you can use them, refer to the SQL Reference.
You cannot create a method that returns a table.
v Behavior of Distinct Types.
You can implement the behavior of a user-defined distinct type (UDT), also
called distinct type, using a UDF. For more information on UDTs, see
Chapter 11. User-defined Distinct Types on page 281. For additional
details on UDTs and the important concept of castability discussed therein,
refer to the SQL Reference. When you create a distinct type, you are
automatically provided cast functions between the distinct type and its
source type, and you may be provided comparison operators such as =, >, <,
and so on, depending on the source type. You have to provide any
additional behavior yourself. Because it is clearly desirable to keep the
behavior of a distinct type in the database where all of the users of the
distinct type can easily access it, UDFs can be used as the implementation
mechanism.
For example, suppose you have a BOAT distinct type, defined over a one
megabyte BLOB. The BLOB contains the various nautical specifications, and
some drawings. You may wish to compare sizes of boats, and with a
distinct type defined over a BLOB source type, you do not get the
comparison operations automatically generated for you. You can implement
a BOAT_COMPARE function which decides if one boat is bigger than
another based on a measure that you choose. These could be: displacement,
length overall, metric tonnage, or another calculation based on the BOAT
object. You create the BOAT_COMPARE function as follows:
CREATE FUNCTION BOAT_COMPARE (BOAT, BOAT) RETURNS INTEGER ...
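A query of the kind described, using BOAT_COMPARE to do the filtering on the server, might look like the following. The column names and the boat name are illustrative assumptions; only the table names come from the text:

```sql
SELECT V.BOAT_NAME, V.DESIGNER
  FROM BOATS_INVENTORY V, MY_BOATS M
  WHERE M.BOAT_NAME = 'PRIDE OF CHICAGO'
    AND BOAT_COMPARE(V.BOAT, M.BOAT) = 1
```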
This simple example returns all the boats from BOATS_INVENTORY that
are bigger than a particular boat in MY_BOATS. Note that the example only
passes the rows of interest back to the application because the comparison
occurs in the database server. In fact, it completely avoids passing any
values of data type BOAT. This is a significant improvement in storage and
performance as BOAT is based on a one megabyte BLOB data type.
A fully qualified function reference takes the form
<schema-name>.<function-name>, for example:
SMITH.FOO
SYSIBM.SUBSTR
SYSFUN.FLOOR
However, you may also omit the <schema-name>., in which case, DB2 must
identify the function to which you are referring. For example:
BOAT_COMPARE
FOO
SUBSTR
FLOOR
v Function Path
The concept of function path is central to DB2's resolution of unqualified
references that occur when you do not use the schema-name. For the use of
function path in DDL statements that refer to functions, refer to the SQL
Reference. The function path is an ordered list of schema names. It provides
a set of schemas for resolving unqualified function references to UDFs and
methods as well as UDTs. In cases where a function reference matches
functions in more than one schema in the path, the order of the schemas in
the path is used to resolve this match. The function path is established by
means of the FUNCPATH option on the precompile and bind commands
for static SQL. The function path is set by the SET CURRENT FUNCTION
PATH statement for dynamic SQL. The function path has the following
default value:
"SYSIBM","SYSFUN","<ID>"
This applies to both static and dynamic SQL, where <ID> represents the
current statement authorization ID.
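For dynamic SQL, the path can be changed with a statement of the following form; the schema list shown is illustrative:

```sql
SET CURRENT FUNCTION PATH = "SYSIBM","SYSFUN","PABLO"
```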
v Overloaded function names
Function names can be overloaded, which means that multiple functions,
even in the same schema, can have the same name. Two functions cannot,
however, have the same signature, which can be defined to be the qualified
function name concatenated with the defined data types of all the function
parameters in the order in which they are defined. For an example of an
overloaded function, see Example: BLOB String Search on page 381.
v Function selection algorithm
It is the function selection algorithm that takes into account the facts of
overloading and function path to choose the best fit for every function
reference, whether it is a qualified or an unqualified reference. Even
references to the built-in functions and the functions (also IBM-supplied) in
the SYSFUN schema are processed through the function selection algorithm.
v Types of function
Each user-defined function is classified as a scalar, column or table function.
A scalar function returns a single value answer each time it is called. For
example, the built-in function SUBSTR() is a scalar function. Scalar UDFs
and methods can either be external (coded in a programming language
such as C), or sourced (using the implementation of an existing function).
A column function receives a set of like values (a column of data) and
returns a single value answer from this set of values. These are also called
aggregating functions in DB2. An example of a column function is the built-in
function AVG(). An external column UDF cannot be defined to DB2, but a
column UDF that is sourced on one of the built-in column functions can be
defined. This is useful for distinct types. For example, if a distinct type
SHOESIZE exists that is defined with base type INTEGER, you could define a
UDF, AVG(SHOESIZE), as a column function sourced on the existing built-in
column function, AVG(INTEGER).
A table function returns a table to the SQL statement that references it. A
table function can only be referenced in the FROM clause of a SELECT
statement. Such a function can be used to apply the SQL language to
non-DB2 data, or to capture such data and put it into a DB2 table. For
example, it could dynamically convert a file consisting of non-DB2 data into
a table, or it could retrieve data from the World Wide Web or an operating
system and return the data as a table. A table function can only be an
external function.
The concept of function path, the SET CURRENT FUNCTION PATH
statement, and the function selection algorithm are discussed in detail in the
SQL Reference. The FUNCPATH precompile and bind options are discussed in
detail in the Command Reference.
For information about the concept of mapping UDFs and methods and
built-in functions to data source functions in a federated system, refer to the
SQL Reference. For guidelines on creating such mappings, refer to Invoking
Data Source Functions on page 586.
Note that in these examples:
v The keyword or keyword/value specifications are always shown in the
same order, for consistency of presentation and ease of understanding. In
actually writing one of these CREATE FUNCTION or CREATE METHOD
statements, after the function name and the list of parameter data types, the
specifications can appear in any order.
v The specifications in the EXTERNAL NAME clause are always shown for
DB2 for UNIX platforms. You may need to make changes if you run these
examples on non-UNIX platforms. For example, by converting all the slash
(/) characters to back slash characters (\) and adding a drive letter such as
C:, you have examples that are valid in OS/2 or Windows environments.
Refer to the SQL Reference for a complete discussion of the EXTERNAL
NAME clause.
Example: Exponentiation
Suppose you have written an external UDF to perform exponentiation of
floating point values, and wish to register it in the MATH schema. Assume
that you have DBADM authority. As you have tested the function extensively,
and know that it does not represent any integrity exposure, you define it as
NOT FENCED. By virtue of having DBADM authority, you possess the
database authority, CREATE_NOT_FENCED, which is required to define the
function as NOT FENCED.
CREATE FUNCTION MATH.EXPON (DOUBLE, DOUBLE)
RETURNS DOUBLE
EXTERNAL NAME '/common/math/exponent'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
NOT FENCED
In this example, the system uses the NOT NULL CALL default value. This is
desirable since you want the result to be NULL if either argument is NULL.
Since you do not require a scratchpad and no final call is necessary, the NO
SCRATCHPAD and NO FINAL CALL default values are used. As there is no
reason why EXPON cannot be parallel, the ALLOW PARALLELISM default
value is used.
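Once registered, EXPON can be invoked like a built-in function; for example, the following illustrative statement raises 2 to the power 10:

```sql
VALUES MATH.EXPON(2E0, 10E0)
```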
Because you are concerned with database integrity for this function, as you
suspect the UDF is not fully tested, you define the function as FENCED.
Additionally, Willie has written the function to return a FLOAT result.
Suppose you know that when it is used in SQL, it should always return an
INTEGER. You can create the following function:
CREATE FUNCTION FINDSTRING (CLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC "willie_find_feb95"
EXTERNAL NAME '/u/willie/testfunc/testmod!findstr'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED
Note that a CAST FROM clause is used to specify that the UDF body really
returns a FLOAT value but you want to cast this to INTEGER before returning
the value to the statement which used the UDF. As discussed in the SQL
Reference, the INTEGER built-in function can perform this cast for you. Also,
you wish to provide your own specific name for the function and later
reference it in DDL (see Example: String Search over UDT on page 382).
Because the UDF was not written to handle NULL values, you use the NOT
NULL CALL default value. And because there is no scratchpad, you use the
NO SCRATCHPAD and NO FINAL CALL default values. As there is no
reason why FINDSTRING cannot be parallel, the ALLOW PARALLELISM
default value is used.
This example illustrates overloading of the UDF name, and shows that
multiple UDFs and methods can share the same body. Note that although a
BLOB cannot be assigned to a CLOB, the same source code can be used. There
is no programming problem in the above example, as the programming
interface for BLOB and CLOB between DB2 and the UDF is the same: length
followed by data. DB2 does not check if the UDF using a particular function
body is in any way consistent with any other UDF using the same body.
Note that this FINDSTRING function has a different signature from the
FINDSTRING functions in Example: BLOB String Search on page 381, so
there is no problem overloading the name. You wish to provide your own
specific name for possible later reference in DDL. Because you are using the
SOURCE clause, you cannot use the EXTERNAL NAME clause or any of the
related keywords specifying function attributes. These attributes are taken
from the source function. Finally, observe that in identifying the source
function you are using the specific function name explicitly provided in
Example: BLOB String Search on page 381. Because this is an unqualified
reference, the schema in which this source function resides must be in the
function path, or the reference will not be resolved.
Observe that CAST FROM and SPECIFIC are not specified, but that NOT
DETERMINISTIC is specified. Here again, FENCED is chosen for safety
reasons.
Note that in the SOURCE clause you have qualified the function name, just in
case there might be some other AVG function lurking in your function path.
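A sourced column function of the kind being discussed, with a qualified SOURCE clause, can be sketched against the SHOESIZE distinct type mentioned earlier in this chapter; the exact type names in the original example may differ:

```sql
CREATE FUNCTION AVG (SHOESIZE)
  RETURNS SHOESIZE
  SOURCE "SYSIBM".AVG(INTEGER)
```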
Example: Counting
Your simple counting function returns a 1 the first time and increments the
result by one each time it is called. This function takes no SQL arguments,
and by definition it is a NOT DETERMINISTIC function since its answer
varies from call to call. It uses the scratchpad to save the last value returned,
and each time it is invoked it increments this value and returns it. You have
rigorously tested this function, and possess DBADM authority on the
database, so you will define it as NOT FENCED. (DBADM implies
CREATE_NOT_FENCED.)
CREATE FUNCTION COUNTER ()
RETURNS INT
EXTERNAL NAME '/u/roberto/myfuncs/util!ctr'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
NOT DETERMINISTIC
NOT FENCED
SCRATCHPAD
DISALLOW PARALLEL
Note that no parameter definitions are provided, just empty parentheses. The
above function specifies SCRATCHPAD, and uses the default specification of
NO FINAL CALL. In this case, as the default size of the scratchpad (100
bytes) is sufficient, no storage has to be freed by means of a final call, and so
NO FINAL CALL is specified. Since the COUNTER function requires that a
single scratchpad be used to operate properly, DISALLOW PARALLEL is
specified.
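The scratchpad technique used by COUNTER can be sketched in standalone C. This is not the DB2 parameter-style signature; it only models detecting the first call by the all-zero scratchpad and keeping state between invocations:

```c
#include <string.h>

/* Modeled on the DB2 scratchpad: a length field plus a 100-byte data
   area that DB2 sets to binary zeros before the first call. */
struct scratchpad {
    int  length;
    char data[100];
};

/* Returns 1 on the first call and increments on each later call,
   keeping its state in the scratchpad between invocations. */
int counter(struct scratchpad *pad)
{
    int value;
    memcpy(&value, pad->data, sizeof value);  /* 0 on the first call */
    value += 1;
    memcpy(pad->data, &value, sizeof value);  /* persist for next call */
    return value;
}
```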
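The registration of the DOCMATCH table function discussed next would take roughly the following form. This is a sketch assembled from the attributes described below; the parameter lengths, DOC_ID type, and EXTERNAL NAME path are assumptions:

```sql
CREATE FUNCTION DOCMATCH (VARCHAR(30), VARCHAR(255))
  RETURNS TABLE (DOC_ID CHAR(16))
  EXTERNAL NAME '/udfs/docfuncs!docmatch'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  DISALLOW PARALLEL
  CARDINALITY 20
```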
Within the context of a single session it will always return the same table, and
therefore it is defined as DETERMINISTIC. Note the RETURNS clause which
defines the output from DOCMATCH, including the column name DOC_ID.
FINAL CALL does not need to be specified for each table function. In
addition, the DISALLOW PARALLEL keyword is added as table functions
cannot operate in parallel. Although the size of the output from DOCMATCH
is highly variable, CARDINALITY 20 is a representative value, and is
specified to help the DB2 optimizer to make good decisions.
Typically this table function would be used in a join with the table containing
the document text, as follows:
SELECT T.AUTHOR, T.DOCTEXT
FROM DOCS as T, TABLE(DOCMATCH('MATHEMATICS', 'ZORN''S LEMMA')) as F
WHERE T.DOCID = F.DOC_ID
Note the special syntax (TABLE keyword) for specifying a table function in a
FROM clause. In this invocation, the docmatch() table function returns a row
containing the single column DOC_ID for each mathematics document
referencing Zorn's Lemma. These DOC_ID values are joined to the master
document table, retrieving the author's name and document text.
Referring to Functions
Each reference to a function, whether it is a UDF, or a built-in function,
contains the following syntax:
function-name ( expression , expression , ... )
The argument list may be empty, as in COUNTER(), and the expressions, if
any, are separated by commas.
Note that if any of the above functions are table functions, the syntax to
reference them is slightly different than presented above. For example, if
PABLO.BLOOP is a table function, to properly reference it, use:
TABLE(PABLO.BLOOP(A+B)) AS Q
If the argument is a parameter marker, the function selection logic does not
know what data type the argument may turn out to be, so it cannot resolve
the reference. You can use the CAST
specification to provide a type for the parameter marker, for example
INTEGER, and then the function selection logic can proceed:
BLOOP(CAST(? AS INTEGER))
Only the BLOOP functions in schema PABLO are considered. It does not
matter that user SERGE has defined a BLOOP function, or whether or not
there is a built-in BLOOP function. Now suppose that user PABLO has
defined two BLOOP functions in his schema:
CREATE FUNCTION BLOOP (INTEGER) RETURNS ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS ...
BLOOP is thus overloaded within the PABLO schema, and the function
selection algorithm would choose the best BLOOP, depending on the data
type of the argument, column1. In this case, both of the PABLO.BLOOPs take
numeric arguments, and if column1 is not one of the numeric types, the
statement will fail. On the other hand if column1 is either SMALLINT or
INTEGER, function selection will resolve to the first BLOOP, while if column1
is DECIMAL, DOUBLE, REAL, or BIGINT, the second BLOOP will be chosen.
Several points about this example:
1. It illustrates argument promotion. The first BLOOP is defined with an
INTEGER parameter, yet you can pass it a SMALLINT argument. The
function selection algorithm supports promotions among the built-in data
types (for details, refer to the SQL Reference) and DB2 performs the
appropriate data value conversions.
2. If for some reason you want to invoke the second BLOOP with a
SMALLINT or INTEGER argument, you have to take an explicit action in
your statement as follows:
SELECT PABLO.BLOOP(DOUBLE(COLUMN1)) FROM T
SELECT PABLO.BLOOP(INTEGER(COLUMN1)) FROM T
SELECT PABLO.BLOOP(FLOOR(COLUMN1)) FROM T
SELECT PABLO.BLOOP(CEILING(COLUMN1)) FROM T
SELECT PABLO.BLOOP(INTEGER(ROUND(COLUMN1,0))) FROM T
You should investigate these other functions in the SQL Reference. The
INTEGER function is a built-in function in the SYSIBM schema. The
FLOOR, CEILING, and ROUND functions are UDFs shipped with DB2,
which you can find in the SYSFUN schema along with many other useful
functions.
You have created the two BLOOP functions cited in Using Qualified Function
Reference on page 387, and you want and expect one of them to be chosen. If
the following default function path is used, the first BLOOP is chosen (since
column1 is INTEGER), if there is no conflicting BLOOP in SYSIBM or
SYSFUN:
"SYSIBM","SYSFUN","PABLO"
However, suppose you have forgotten that you are using a script for
precompiling and binding which you previously wrote for another purpose.
In this script, you explicitly coded your FUNCPATH parameter to specify the
following function path for another reason that does not apply to your current
work:
"KATHY","SYSIBM","SYSFUN","PABLO"
If Kathy has written a BLOOP function for her own purposes, the function
selection could very well resolve to Kathy's function, and your statement
would execute without error. You are not notified because DB2 assumes that
you know what you are doing. It becomes your responsibility to identify the
incorrect output from your statement and make the required correction.
For example, you may wish to attach some meaning to the "+" operator for values which have distinct
type BOAT. You can define the following UDF:
CREATE FUNCTION "+" (BOAT, BOAT) RETURNS ...
Note that you are not permitted to overload the built-in conditional
operators such as >, =, LIKE, IN, and so on, in this way. See Example:
Integer Divide Operator on page 453 for an example of a UDF which
overloads the divide (/) operator.
v The function selection algorithm does not consider the context of the
reference in resolving to a particular function. Look at these BLOOP
functions, modified a bit from before:
CREATE FUNCTION BLOOP (INTEGER) RETURNS INTEGER ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS CHAR(10)...
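The failing reference under discussion has the following shape; the column name is an assumption. The second operand of the CONCAT is BLOOP applied to a SMALLINT column, which resolves to the first (INTEGER-returning) BLOOP:

```sql
SELECT 'ABC' CONCAT BLOOP(SMALLINT_COLUMN) FROM T
```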
Because the best match, resolved using the SMALLINT argument, is the
first BLOOP defined above, the second operand of the CONCAT resolves to
data type INTEGER. The statement fails because CONCAT demands string
arguments. If the first BLOOP was not present, the other BLOOP would be
chosen and the statement execution would be successful.
Another type of contextual inconsistency that causes a statement to fail is if
a given function reference resolves to a table function in a context that
requires a scalar or column function. The reverse could also occur. A
reference could resolve to a scalar or column function when a table function
is necessary.
v UDFs and methods can be defined with parameters or results having any of
the LOB types: BLOB, CLOB, or DBCLOB. DB2 will materialize the entire
LOB value in storage before invoking such a function, even if the source of
the value is a LOB locator host variable. For example, consider the following
fragment of a C language application:
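The kind of fragment meant here can be sketched as follows, reusing the FINDSTRING function registered earlier; the surrounding statement shape is an assumption. Even though the argument is only a CLOB locator host variable, DB2 materializes the whole CLOB before calling the UDF:

```
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB_LOCATOR clob_loc;
  sqlint32 pos;
EXEC SQL END DECLARE SECTION;

/* ... clob_loc has been set, for example by a SELECT INTO ... */

/* FINDSTRING's parameter is CLOB(500K), not AS LOCATOR, so DB2
   materializes the entire CLOB value before invoking the UDF,
   even though only a locator is passed here. */
EXEC SQL VALUES (FINDSTRING(:clob_loc, 'fun')) INTO :pos;
```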
v UDF parameters or results which have one of the LOB types can be created
with the AS LOCATOR modifier. In this case, the entire LOB value is not
materialized prior to invocation. Instead, a LOB LOCATOR is passed to the
UDF, which can then use the special UDF APIs to manipulate the actual
bytes of the LOB value (see Using LOB Locators as UDF Parameters or
Results on page 443 for details).
You can also use this capability on UDF parameters or results which have a
distinct type that is based on a LOB. This capability is limited to UDFs
defined as not-fenced. Note that the argument to such a function can be
any LOB value of the defined type; it does not have to be a host variable
defined as one of the LOCATOR types. The use of host variable locators as
arguments is completely orthogonal to the use of AS LOCATOR in UDF
parameters and result definitions.
v UDFs and methods can be defined with distinct types as parameters or as
the result. (Earlier examples have illustrated this.) DB2 will pass the value
to the UDF in the format of the source data type of the distinct type.
Distinct type values which originate in a host variable and which are used
as arguments to a UDF which has its corresponding parameter defined as a
distinct type, must be explicitly cast to the distinct type by the user. There
is no host language type for distinct types. DB2's strong typing necessitates
this. Otherwise your results may be ambiguous. So, consider the BOAT
distinct type which is defined over a BLOB, and consider the BOAT_COST
UDF from Example: External Function with UDT Parameter on page 382,
which takes an object of type BOAT as its argument. In the following
fragment of a C language application, the host variable :ship holds the
BLOB value that is to be passed to the BOAT_COST function:
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS BLOB(150K) ship;
EXEC SQL END DECLARE SECTION;
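Continuing the fragment, the cast function that DB2 generated when the BOAT distinct type was created is applied explicitly to the host variable. The statement shape and the :cost variable are illustrative:

```
EXEC SQL BEGIN DECLARE SECTION;
  double cost;
EXEC SQL END DECLARE SECTION;

/* BOAT(...) is the system-generated cast from the BLOB source type
   to the BOAT distinct type; without it the argument would not match
   the BOAT parameter of BOAT_COST. */
EXEC SQL VALUES (BOAT_COST(BOAT(:ship))) INTO :cost;
```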
If there are multiple BOAT distinct types in the database, or BOAT UDFs in
other schema, you must exercise care with your function path. Otherwise
your results may be ambiguous.
Description
This section describes how to write UDFs and methods. The coding
conventions for UDFs and methods are the same, with the following
differences:
v Since DB2 associates each method with a specific structured type, the first
argument passed from DB2 to your method is always the instance of the
structured type on which you invoked the method.
v Methods, unlike UDFs, cannot return tables. You cannot invoke a method
as the argument for a FROM clause.
As the guidelines for writing UDFs and methods are the same, with the
exception of the previously described difference, the remainder of the
discussion on writing UDFs and methods refers to both UDFs and methods
simply as UDFs.
For small UDFs such as UDFs that contain only simple logic, consider using a
SQL-bodied UDF. To create a SQL-bodied UDF, issue a CREATE FUNCTION
or CREATE METHOD statement that includes a method body written using
SQL, rather than pointing to an external UDF. SQL-bodied UDFs enable you
to declare and define the UDF in a single step, without using an external
language or compiler. SQL-bodied UDFs also offer the possibility of increased
performance, because the method body is written using SQL accessible to the
DB2 optimizer.
v You can invoke your UDF using OLE (Object Linking and Embedding) as
described in Writing OLE Automation UDFs on page 425.
v You can define an OLE DB table function, which is a function that returns a
table from an OLE DB data source, with just a CREATE FUNCTION
statement. For more information on OLE DB table functions, see OLE DB
Table Functions on page 431.
Note that a sourced UDF, which is different from an external UDF, does not
require an implementation in the form of a separate piece of code. Such a
UDF uses the same implementation as its source function, along with many of
its other attributes.
The arguments are passed to the external UDF in the following order:
SQL-argument (repeated, one for each defined parameter)
SQL-result (repeated, one for each result column of a table function)
SQL-argument-ind (one for each SQL-argument)
SQL-result-ind (one for each SQL-result)
SQL-state
function-name
specific-name
diagnostic-message
scratchpad (if SCRATCHPAD is specified)
call-type (if applicable)
dbinfo (if DBINFO is specified)
-1
The argument is the null value.
If the function is defined with NOT NULL CALL, the UDF body does
not need to check for a null value. However, if it is defined with
NULL CALL, any argument can be NULL and the UDF should check
it.
The indicator takes the form of a SMALLINT value, and this can be
defined in your UDF as described in How the SQL Data Types are
Passed to a UDF on page 410. DB2 aligns the data for
SQL-argument-ind according to the data type and the server platform.
SQL-result-ind
This argument is set by the UDF before returning to DB2. There is one
of these for each SQL-result argument.
This argument is used by the UDF to signal if the particular result
value is null:
0 or positive
The result is not null
negative
The result is the null value. For more information, see
Interpreting Negative SQL-result-ind Values.
Interpreting Negative SQL-result-ind Values:
DB2 treats the function result as null (-2) if all of the following are true:
v The database configuration parameter DFT_SQLMATHWARN is
YES
v One of the input arguments is a null because of an arithmetic error
v The SQL-result-ind is negative.
This is also true if you define the function with the NOT NULL CALL
option.
Even if the function is defined with NOT NULL CALL, the UDF body
must set the indicator of the result. For example, a divide function
could set the result to null when the denominator is zero.
The indicator takes the form of a SMALLINT value, and this can be
defined in your UDF as described in How the SQL Data Types are
Passed to a UDF on page 410.
If the UDF takes advantage of table function optimization using the
RESULT column list, then only the indicators corresponding to the
required columns need be set.
DB2 aligns the data for SQL-result-ind according to the data type and
the server platform.
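A divide function of the kind mentioned might set the result indicator like this. This is a standalone sketch of the indicator technique, not the full DB2 parameter style:

```c
/* Sets *result_ind to -1 (null result) when the denominator is zero,
   otherwise to 0 and stores the quotient in *result. */
void divide(double num, double den, double *result, short *result_ind)
{
    if (den == 0.0) {
        *result_ind = -1;     /* negative indicator: result is null */
    } else {
        *result_ind = 0;      /* result is not null */
        *result = num / den;
    }
}
```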
SQL-state
This argument is set by the UDF before returning to DB2. It takes the
form of a CHAR(5) value. Ensure that the argument definition in the
UDF is appropriate for a CHAR(5), as described in How the SQL
Data Types are Passed to a UDF on page 410. The UDF can use this
argument to signal warning or error conditions. It contains the value
'00000', when the function is called. The UDF can set the value to the
following:
00000
Only valid for the FETCH call to table functions, it means that
there are no more rows in the table.
38502
A special value for the case where the UDF body attempted to
issue an SQL call, received the error SQLCODE -487 (SQLSTATE
38502) because SQL is not allowed in UDFs, and chose to pass this
same error back through to DB2.
function-name
This argument is set by DB2 before calling the UDF. It is the
qualified name of the function, for example:
WILLIE.FINDSTRING
This form enables you to use the same UDF body for multiple
external functions, and still differentiate between the functions when
the UDF is invoked.
Note: Although it is possible to include the period in object names
and schema names, it is not recommended. For example, if a
399
SQL9904281052440430
If the UDF writes beyond the declared size of the scratchpad, DB2 returns
SQLCODE -450 (SQLSTATE 39501); a major overwrite by the UDF can cause
unpredictable results, or an abend, and may not result in a graceful
failure by DB2.
If a scalar UDF which uses a scratchpad is referenced in a subquery,
DB2 may decide to refresh the scratchpad between invocations of the
subquery. This refresh occurs after a final-call is made, if FINAL CALL
is specified for the UDF.
DB2 initializes the scratchpad so that the data field is aligned for the
storage of any data type. This may result in the entire scratchpad
structure, including the length field, not being properly aligned. For
more information on declaring and accessing scratchpads, see
Writing Scratchpads on 32-bit and 64-bit Platforms on page 418.
call-type
This argument, if present, is set by DB2 before calling the UDF. For
scalar functions this argument is only present if FINAL CALL is
specified in the CREATE FUNCTION statement, but for table
functions it is ALWAYS present. It follows the scratchpad argument; or
the diagnostic-message argument if the scratchpad argument is not
present. This argument takes the form of an INTEGER value. Ensure
that this argument definition in the UDF is appropriate for INTEGER.
See How the SQL Data Types are Passed to a UDF on page 410 for
more information.
Note that even though all the current possible values are listed below,
your UDF should contain a switch or case statement which explicitly
tests for all the expected values, rather than containing if A do AA,
else if B do BB, else it must be C so do CC type logic. This is because
additional call types may be added in the future, and if you don't
explicitly test for condition C you will have trouble when new
possibilities are added.
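To illustrate, here is a sketch of such a switch in C. The values -1, 0, and 1 are the scalar call-types described below; the macro names and the dispatch helper are purely illustrative, not part of any DB2 header (sqludf.h provides its own symbolic names):

```c
/* Illustrative values for the scalar-function call-type argument. */
#define FIRST_CALL  (-1)
#define NORMAL_CALL   0
#define FINAL_CALL    1

/* Returns 0 on success, -1 when the call type is unrecognized.
   Testing explicitly for each expected value means a call type added
   in a future release falls through to the error case instead of
   being silently treated as one of the known types. */
static int dispatch(long calltype)
{
    switch (calltype) {
    case FIRST_CALL:
        /* one-time initialization, then compute a result as usual */
        return 0;
    case NORMAL_CALL:
        /* compute and return the result */
        return 0;
    case FINAL_CALL:
        /* release any resources tracked in the scratchpad */
        return 0;
    default:
        /* unexpected call type: fail gracefully */
        return -1;
    }
}
```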
Notes:
1. For all the call-types, it may be appropriate for the UDF to set a
SQL-state and diagnostic-message return value. This information will
not be repeated in the following descriptions of each call-type. For
all calls DB2 will take the indicated action as described previously
for these arguments.
2. The include file sqludf.h is intended for use with UDFs and is
described in The UDF Include File: sqludf.h on page 419. The file
contains symbolic defines for the following call-type values, which
are spelled out as constants.
For scalar functions call-type contains:
-1
This is the FIRST call to the UDF for this statement. The
scratchpad (if any) is set to binary zeros when the UDF is
called. All argument values are passed, and the UDF should
do whatever one-time initialization actions are required. In
addition, a FIRST call to a scalar UDF is like a NORMAL call,
in that it is expected to develop and return an answer.
Note that if SCRATCHPAD is specified but FINAL CALL is
not, then the UDF will not have this call-type argument to
identify the very first call. Instead it will have to rely on the
all-zero state of the scratchpad.
0
This is a NORMAL call. All the SQL input values are passed,
and the UDF is expected to develop and return the result. The
UDF may also return SQL-state and diagnostic-message
information.
Releasing resources.
A scalar UDF is expected to release resources it has required, for
example, memory. If FINAL CALL is specified for the UDF, then that
FINAL call is a natural place to release resources, provided that
SCRATCHPAD is also specified and is used to track the resource. If
FINAL CALL is not specified, then any resource acquired should be
released on the same call.
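When SCRATCHPAD is specified without FINAL CALL, the UDF has no call-type argument and must detect the first call from the all-zero state of the scratchpad, as described above. A minimal sketch, assuming a hypothetical scratchpad layout:

```c
/* Hypothetical scratchpad layout: DB2 zeroes the data area before the
   first call, so an initialized flag of 0 identifies the first call
   when no FINAL CALL (and hence no call-type argument) is available. */
struct my_scratch {
    int  initialized;   /* 0 on the very first call, 1 afterwards */
    long invocations;   /* running count kept across calls */
};

/* Returns 1 if this was the first call, 0 otherwise. */
static int note_call(struct my_scratch *sp)
{
    if (!sp->initialized) {
        sp->initialized = 1;   /* do one-time setup here */
        sp->invocations = 1;
        return 1;
    }
    sp->invocations++;
    return 0;
}
```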
For table functions call-type contains:
-2
This is the FIRST call, which only occurs if the FINAL CALL
keyword was specified for the UDF. The scratchpad is set to
binary zeros before this call. Argument values are passed to
the table function, and it may choose to acquire memory or
perform other one-time only resource initialization. Note that
this is not an OPEN call; that call follows this one. On a FIRST
call the table function should not return any data to DB2 as
DB2 ignores the data.
-1
Releasing resources.
Write UDFs to release any resources that they acquire. For table
functions, there are two natural places for this release: the CLOSE call
and the FINAL call. The CLOSE call balances each OPEN call and can
occur multiple times in the execution of a statement. The FINAL call
only occurs if FINAL CALL is specified for the UDF, and occurs only
once per statement.
If you can apply a resource across all OPEN/FETCH/CLOSE
sequences of the UDF, write the UDF to acquire the resource on the
FIRST call and free it on the FINAL call. The scratchpad is a natural
place to track this resource. For table functions, if FINAL CALL is
specified, the scratchpad is initialized only before the FIRST call. If
FINAL CALL is not specified, then it is reinitialized before each OPEN
call.
If a resource is specific to each OPEN/FETCH/CLOSE sequence,
write the UDF to free the resource on the CLOSE call. (Note that
when a table function is in a subquery or join, it is very possible that
there will be multiple occurrences of the OPEN/FETCH/CLOSE
sequence, depending on how the DB2 Optimizer chooses to organize
the execution of the statement.)
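The lifecycle above can be sketched in C. The call-type values (-2, -1, 0, 1, 2) match those documented for table functions, while the scratchpad layout and the resource being tracked are hypothetical stand-ins for memory, a file, or a connection:

```c
#include <stdlib.h>

/* Table-function call types, as documented for DB2 table functions. */
enum { TF_FIRST = -2, TF_OPEN = -1, TF_FETCH = 0, TF_CLOSE = 1, TF_FINAL = 2 };

/* Hypothetical scratchpad: tracks a resource that spans all
   OPEN/FETCH/CLOSE sequences of the statement. */
struct tf_scratch {
    char *resource;  /* acquired on FIRST, freed on FINAL */
    int   open;      /* nonzero between OPEN and CLOSE */
};

/* Returns 0 on success, -1 on an unexpected call type. */
static int table_udf(struct tf_scratch *sp, long calltype)
{
    switch (calltype) {
    case TF_FIRST:                    /* acquire statement-wide resource */
        sp->resource = malloc(64);
        return sp->resource ? 0 : -1;
    case TF_OPEN:                     /* start one scan */
        sp->open = 1;
        return 0;
    case TF_FETCH:                    /* produce one row (or end-of-table) */
        return 0;
    case TF_CLOSE:                    /* balances the OPEN; may recur */
        sp->open = 0;
        return 0;
    case TF_FINAL:                    /* once per statement: release */
        free(sp->resource);
        sp->resource = NULL;
        return 0;
    default:
        return -1;
    }
}
```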
dbinfo
This argument is set by DB2 before calling the UDF. It is only present
if the CREATE FUNCTION statement for the UDF specifies the
SQLUDF_PLATFORM_UNKNOWN
Unknown platform
For additional platforms that are not contained in the above list,
see the contents of the sqludf.h file.
Number of table function column list entries (numtfcol)
The number of non-zero entries in the table function column list
specified in the table function column list field below.
Reserved field (resd1)
This field is for future use. It is defined as 2 characters long.
Procedure ID (procid)
The value of the procid field in the DBINFO structure that is
being passed to a procedure or function is non-zero if the caller
of the routine is a cataloged stored procedure. In this situation,
the value of procid is the ID of the calling procedure as recorded
in the PROCEDURE_ID column of the SYSCAT.PROCEDURES
table. In all other situations, the value returned for the procid
field is 0.
Reserved field (resd2)
where the <x> and <y> vary by connection type, but the <ts> is a
12 character time stamp of the form YYMMDDHHMMSS, which
is potentially adjusted by DB2 to ensure uniqueness.
Example:
*LOCAL.db2inst.980707130144
These arguments are used by DB2 to pass the identity of the referenced
function to the UDF.
v scratchpad and call-type.
These arguments are used by DB2 to manage the saving of UDF state
between calls. The scratchpad is created and initialized by DB2 and
thereafter managed by the UDF. DB2 signals the type of call to the UDF
using the call-type argument.
v dbinfo.
A structure passed by DB2 to the UDF containing additional information.
A table function logically returns a table to the SQL statement that references
it, but the physical interface between DB2 and the table function is row by
row. For table functions, the arguments are:
v SQL-argument.
This argument passes the values identified in the function reference from
DB2 to the UDF. The argument has the same value for FETCH calls as it
did for the OPEN and FIRST calls. There is one of these for each SQL
argument.
v SQL-result.
This argument is used to pass back the individual column values for the
row being returned by the UDF. There is one of these arguments for each
result column value defined in the RETURNS TABLE (...) clause of the
CREATE FUNCTION statement.
v SQL-argument-ind.
This argument corresponds positionally to SQL-argument values, and tells
the UDF whether the particular argument is null. There is one of these for
each SQL argument.
v SQL-result-ind.
This argument is used by the UDF to report back to DB2 whether the
individual column values returned in the table function output row are null.
It corresponds positionally to the SQL-result argument.
v SQL-state and diagnostic-message.
These arguments are used by the UDF to signal exception information and
the end-of-table condition back to DB2.
v function-name and specific-name.
These arguments are used by DB2 to pass the identity of the referenced
function to the UDF.
v scratchpad and call-type.
These arguments are used by DB2 to manage the saving of UDF state
between calls. The scratchpad is created and initialized by DB2 and
thereafter managed by the UDF. DB2 signals the type of call to the UDF
using the call-type argument. For table functions these call types are OPEN,
FETCH, CLOSE, and optionally FIRST and FINAL.
v dbinfo.
This is a structure passed by DB2 to the UDF containing additional
information.
Observe that the normal value outputs of the UDF, as well as the SQL-result,
SQL-result-ind, and SQL-state, are returned to DB2 using arguments passed
from DB2 to the UDF. Indeed, the UDF is written not to return anything in
the functional sense (that is, the function's return type is void). See the void
definition and the return statement in the following example:
#include ...
void SQL_API_FN divid(
... arguments ... )
{
... UDF body ...
return;
}
For the function result, it is the data type specified in the CAST FROM clause
of the CREATE FUNCTION statement that defines the format. If no CAST
FROM clause is present, then the data type specified in the RETURNS clause
defines the format.
In the following example, the presence of the CAST FROM clause means that
the UDF body returns a SMALLINT and that DB2 casts the value to INTEGER
before passing it along to the statement where the function reference occurs:
... RETURNS INTEGER CAST FROM SMALLINT ...
Example:
short *arg1;          /* example for SMALLINT */
short *arg1_null_ind; /* example for any null indicator */
sqlint32 *arg2;
v BIGINT
Valid. Represent in C as sqlint64.
Example:
sqlint64 *arg3;
For the above UDF, the first two parameters correspond to the wage and
number of hours. You invoke the UDF WEEKLY_PAY in your SQL select
statement as follows:
SELECT WEEKLY_PAY (WAGE, HOURS, ...) ...;
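As a sketch of what the body of such a UDF might compute: the argument layout below mirrors how DB2 passes SQL values to a C UDF by pointer, but the time-and-a-half overtime rule is invented for illustration, and the null indicators and trailing arguments a real UDF receives are omitted:

```c
/* Hypothetical core of a WEEKLY_PAY computation: wage and hours arrive
   as pointers and the result is stored through a pointer, as DB2 passes
   SQL arguments to a C UDF.  The overtime rule is an invented example,
   not part of the manual's definition of WEEKLY_PAY. */
static void weekly_pay(const double *wage, const double *hours,
                       double *result)
{
    double regular  = (*hours > 40.0) ? 40.0 : *hours;
    double overtime = (*hours > 40.0) ? (*hours - 40.0) : 0.0;

    *result = (*wage * regular) + (*wage * 1.5 * overtime);
}
```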
char  arg1[14]; /* example for CHAR(13) */
char *arg1;
char *result;
For a CHAR(n) parameter, DB2 always moves n bytes of data to the buffer
and sets the n+1 byte to null. For a RETURNS CHAR(n) value, DB2 always
takes the n bytes and ignores the n+1 byte. For this RETURNS CHAR(n)
case, you are warned against the inadvertent inclusion of a null-character in
the first n characters. DB2 will not recognize this as anything but a normal
part of the data, and it might later on cause seemingly anomalous results if
it was not intended.
If FOR BIT DATA is specified, exercise caution about using the normal C
string handling functions in the UDF. Many of these functions look for a
null to delimit the string, and the null-character (X'00') could be a legitimate
character in the middle of the data value.
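The difference is easy to demonstrate: in this sketch, memcpy moves the full fixed-length value while the string functions stop at the embedded null. The helper is illustrative, not DB2-supplied:

```c
#include <string.h>

/* Copy a fixed-length CHAR(n) FOR BIT DATA value.  memcpy moves all n
   bytes even when X'00' occurs mid-value; strlen/strcpy would treat
   that byte as the end of the string and truncate the data. */
static void copy_fbd(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);
    dst[n] = '\0';   /* DB2 itself null-terminates CHAR(n) at byte n+1 */
}
```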
When defining character UDF parameters, consider using VARCHAR rather
than CHAR as DB2 does not promote VARCHAR arguments to CHAR. For
example, suppose you define a UDF as follows:
CREATE FUNCTION SIMPLE(INT,CHAR(1))...
this function may not perceive the reason for the message. In the above
example, 'A' is VARCHAR, so you can either cast it to CHAR or define the
parameter as VARCHAR.
v VARCHAR(n) FOR BIT DATA or LONG VARCHAR with or without the
FOR BIT DATA modifier.
Valid. Represent in C as a structure similar to:
struct sqludf_vc_fbd
{
  unsigned short length;   /* length of data */
  char           data[1];  /* first char of data */
};
The [1] is merely to indicate an array to the compiler. It does not mean that
only one character is passed; because the address of the structure is passed,
and not the actual structure, it just provides a way to use array logic.
These values are not represented as C null-terminated strings because the
null-character could legitimately be part of the data value. The length is
explicitly passed to the UDF for parameters using the structure variable
length. For the RETURNS clause, the length that is passed to the UDF is
the length of the buffer. What the UDF body must pass back, using the
structure variable length, is the actual length of the data value.
Example:
struct sqludf_vc_fbd *arg1; /* example for VARCHAR(n) FOR BIT DATA */
struct sqludf_vc_fbd *result; /* also for LONG VARCHAR FOR BIT DATA */
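As a sketch of the length-handling rule, assuming an illustrative helper and a locally declared copy of the structure (the truncation policy shown is invented for the example):

```c
#include <string.h>

/* Matches the shape described above for VARCHAR(n) FOR BIT DATA. */
struct sqludf_vc_fbd {
    unsigned short length;   /* length of data */
    char data[1];            /* first char of data */
};

/* Copy up to cap bytes of input into the result and record the actual
   length in the structure; DB2 reads the length member back rather
   than scanning for a null terminator. */
static void set_vc_result(struct sqludf_vc_fbd *result,
                          const char *src, unsigned short len,
                          unsigned short cap)
{
    result->length = (len < cap) ? len : cap;
    memcpy(result->data, src, result->length);
}
```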
char  arg2[51]; /* example for VARCHAR(50) */
char *result;
v GRAPHIC(n)
Valid. Represent in C as sqldbchar[n+1]. (This is a null-terminated graphic
string). Note that you can use wchar_t[n+1] on platforms where wchar_t is
defined to be 2 bytes in length; however, sqldbchar is recommended. See
Selecting the wchar_t or sqldbchar Data Type in C and C++ on page 622
for more information on these two data types.
Example:
sqldbchar  arg1[14]; /* example for GRAPHIC(13) */
sqldbchar *arg1;
v VARGRAPHIC(n)
Valid. Represent in C as sqldbchar[n+1]. (This is a null-terminated graphic
string). Note that you can use wchar_t[n+1] on platforms where wchar_t is
defined to be 2 bytes in length; however, sqldbchar is recommended. See
Selecting the wchar_t or sqldbchar Data Type in C and C++ on page 622
for more information on these two data types.
For a VARGRAPHIC(n) parameter, DB2 will put a graphic null in the (k+1)
position, where k is the length of the particular occurrence. A graphic null
refers to the situation where all the bytes of the last character of the graphic
string contain binary zeros ('\0's). Data passed from DB2 to a UDF is in
DBCS format, and the result passed back is expected to be in DBCS format.
This behavior is the same as using the WCHARTYPE NOCONVERT
precompiler option described in The WCHARTYPE Precompiler Option in
C and C++ on page 623. For a RETURNS VARGRAPHIC(n) value, the
UDF body must delimit the actual value with a graphic null, because DB2
will determine the result length from this graphic null character.
Example:
sqldbchar args[51], *result; /* example for VARGRAPHIC(50) */
v LONG VARGRAPHIC
Chapter 15. Writing User-Defined Functions (UDFs) and Methods
Valid. Represent in C as a structure similar to:
struct sqludf_vg
{
  sqluint32 length;   /* length of data */
  sqldbchar data[1];  /* first char of data */
};
Note that in the above structure, you can use wchar_t in place of sqldbchar
on platforms where wchar_t is defined to be 2 bytes in length, however, the
use of sqldbchar is recommended. See Selecting the wchar_t or sqldbchar
Data Type in C and C++ on page 622 for more information on these two
data types.
The [1] merely indicates an array to the compiler. It does not mean that
only one graphic character is passed. Because the address of the structure is
passed, and not the actual structure, it just provides a way to use array
logic.
These are not represented as null-terminated graphic strings. The length, in
double-byte characters, is explicitly passed to the UDF for parameters using
the structure variable length. Data passed from DB2 to a UDF is in DBCS
format, and the result passed back is expected to be in DBCS format. This
behavior is the same as using the WCHARTYPE NOCONVERT precompiler
option described in The WCHARTYPE Precompiler Option in C and C++
on page 623. For the RETURNS clause, the length that is passed to the UDF
is the length of the buffer. What the UDF body must pass back, using the
structure variable length, is the actual length of the data value, in double
byte characters.
Example:
struct sqludf_vg *arg1;   /* example for VARGRAPHIC(n) */
struct sqludf_vg *result; /* also for LONG VARGRAPHIC  */
v DATE
Valid. Represent in C same as CHAR(10), that is as char...[11]. The date
value is always passed to the UDF in ISO format: yyyy-mm-dd.
Example:
char  arg1[11]; /* example for DATE */
char *result;
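Because the value always arrives in this fixed ISO form, its components can be extracted positionally; a small illustrative helper (not part of any DB2 interface):

```c
#include <stdio.h>

/* Extract year, month, and day from the yyyy-mm-dd form in which DB2
   always passes a DATE argument to a UDF.  Returns 0 on success,
   -1 if the string does not match the expected layout. */
static int parse_iso_date(const char *arg, int *y, int *m, int *d)
{
    return (sscanf(arg, "%4d-%2d-%2d", y, m, d) == 3) ? 0 : -1;
}
```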
v TIME
Valid. Represent in C same as CHAR(8), that is, as char...[9]. The time
value is always passed to the UDF in ISO format: hh.mm.ss.
Example:
char *arg;       /* example for TIME */
char  result[9];
v TIMESTAMP
Valid. Represent in C same as CHAR(26), that is, as char...[27]. The
timestamp value is always passed to the UDF in ISO format:
yyyy-mm-dd-hh.mm.ss.nnnnnn.
Example:
char  arg1[27]; /* example for TIMESTAMP */
char *result;
v BLOB(n) and CLOB(n)
Valid. Represent in C as a structure:
struct sqludf_lob
{
  sqluint32 length;   /* length in bytes */
  char      data[1];  /* first byte of lob */
};
The [1] merely indicates an array to the compiler. It does not mean that
only one character is passed; because the address of the structure is passed,
and not the actual structure, it just provides a way to use array logic.
These are not represented as C null-terminated strings. The length is
explicitly passed to the UDF for parameters using the structure variable
length. For the RETURNS clause, the length that is passed to the UDF is
the length of the buffer. What the UDF body must pass back, using the
structure variable length, is the actual length of the data value.
Example:
struct sqludf_lob *arg1; /* example for BLOB(n), CLOB(n) */
struct sqludf_lob *result;
v DBCLOB(n)
Valid. Represent in C as a structure:
struct sqludf_lob
{
sqluint32 length;
sqldbchar data[1];
};
Note that in the above structure, you can use wchar_t in place of sqldbchar
on platforms where wchar_t is defined to be 2 bytes in length, however, the
use of sqldbchar is recommended. See Selecting the wchar_t or sqldbchar
Data Type in C and C++ on page 622 for more information on these two
data types.
The [1] merely indicates an array to the compiler. It does not mean that
only one graphic character is passed; because the address of the structure is
passed, and not the actual structure, it just provides a way to use array
logic.
v Distinct Types
Valid or invalid depending on the base type. Distinct types will be passed
to the UDF in the format of the base type of the UDT, so may be specified
if and only if the base type is valid.
Example:
struct sqludf_lob *arg1; /* for distinct type based on BLOB(n) */
double             *arg2;   /* for distinct type based on DOUBLE */
char                res[5]; /* for distinct type based on CHAR(4) */
The type udf_locator is defined in the header file sqludf.h, which is discussed
in The UDF Include File: sqludf.h on page 419. The use of these locators is
discussed in Using LOB Locators as UDF Parameters or Results on
page 443.
having the data types. These are the definitions with names SQLUDF_x
and SQLUDF_x_FBD where x is a SQL data type name, and FBD
represents For Bit Data.
Some of the UDF examples in the next section illustrate the inclusion and use
of sqludf.h.
Java UDFs that implement table functions require more arguments. Besides
the variables representing the input, an additional variable appears for each
column in the resulting row. For example, a table function may be declared as:
public void test4(String arg1, int result1,
Blob result2, String result3);
SQL NULL values are represented by Java variables that are not initialized.
These variables have a value of zero if they are primitive types, and Java null
if they are object types, in accordance with Java rules. To tell an SQL NULL
apart from an ordinary zero, you can call the function isNull for any input
argument:
{ ....
if (isNull(1)) { /* argument #1 was a SQL NULL */ }
else
{ /* not NULL */ }
}
In the above example, the argument numbers start at one. The isNull()
function, like the other functions that follow, is inherited from the
COM.ibm.db2.app.UDF class.
To return a result from a scalar or table UDF, use the set() method in the
UDF, as follows:
{ ....
set(2, value);
}
[Table: call sequences for Java table function UDF methods, covering the
combinations of LANGUAGE JAVA with NO FINAL CALL or FINAL CALL, and with
or without SCRATCHPAD. The recoverable fragments indicate that the UDF
method is called at the CLOSE call, along with the close() method if it
exists for the class; in the example scenario, the method closes its Web
scan and disconnects from the Web server, so close() does not need to do
anything.]
Notes:
1. By UDF method we mean the Java class method which implements the
UDF. This is the method identified in the EXTERNAL NAME clause of the
CREATE FUNCTION statement.
2. For table functions with NO SCRATCHPAD specified, the calls to the UDF
method are as indicated in this table, but because the user is not asking for
any continuity via a scratchpad, DB2 will cause a new object to be
instantiated before each call, by calling the class constructor. It is not clear
that table functions with NO SCRATCHPAD (and thus no continuity) can
do very useful things, but they are supported.
3. These models are fully compatible with what happens with the other
UDF languages: C/C++ and OLE.
such as Lotus Notes or Microsoft Exchange, can then integrate these objects
by taking advantage of these properties and methods through OLE
automation.
The applications exposing the properties and methods are called OLE
automation servers or objects, and the applications that access those properties
and methods are called OLE automation controllers. OLE automation servers
are COM components (objects) that implement the OLE IDispatch interface.
An OLE automation controller is a COM client that communicates with the
automation server through its IDispatch interface. COM (Component Object
Model) is the foundation of OLE. For OLE automation UDFs, DB2 acts as an
OLE automation controller. Through this mechanism, DB2 can invoke
methods of OLE automation objects as external UDFs.
Note that this section assumes that you are familiar with OLE automation
terms and concepts. This book does not present any introductory OLE
material. For an overview of OLE automation, refer to Microsoft Corporation:
The Component Object Model Specification, October 1995. For details on OLE
automation, refer to OLE Automation Programmer's Reference, Microsoft Press,
1996, ISBN 1-55615-851-3.
For a list of sample applications included with the DB2 Application
Development Client that demonstrate OLE automation UDFs, see Table 50 on
page 761.
v LANGUAGE OLE
v FENCED, since OLE automation UDFs must run in FENCED mode
The external name consists of the OLE progID identifying the OLE
automation object and the method name separated by ! (exclamation mark):
CREATE FUNCTION bcounter () RETURNS INTEGER
EXTERNAL NAME 'bert.bcounter!increment'
LANGUAGE OLE
FENCED
SCRATCHPAD
FINAL CALL
NOT DETERMINISTIC
NULL CALL
PARAMETER STYLE DB2SQL
NO SQL
NO EXTERNAL ACTION
DISALLOW PARALLEL;
The calling conventions for OLE method implementations are identical to the
conventions for functions written in C or C++. An implementation of the
above method in the BASIC language looks like the following (notice that in
BASIC the parameters are by default defined as call by reference):
Public Sub increment(output As Long, _
indicator As Integer, _
sqlstate As String, _
fname As String, _
fspecname As String, _
sqlmsg As String, _
scratchpad() As Byte, _
calltype As Long)
How the SQL Data Types are Passed to an OLE Automation UDF
DB2 handles the type conversions between SQL types and OLE automation
types. The following table summarizes the supported data types and how
they are mapped. The mapping of OLE automation types to data types of the
implementing programming language, such as BASIC or C/C++, is described
in Table 17 on page 428.
Table 16. Mapping of SQL and OLE Automation Datatypes

SQL Type           OLE Automation Type
SMALLINT           short
INTEGER            long
REAL               float
FLOAT or DOUBLE    double
DATE               DATE
TIME               DATE
TIMESTAMP          DATE
CHAR(n)            BSTR
VARCHAR(n)         BSTR
LONG VARCHAR       BSTR
CLOB(n)            BSTR
GRAPHIC(n)         BSTR
VARGRAPHIC(n)      BSTR
LONG GRAPHIC       BSTR
DBCLOB(n)          BSTR
CHAR(n) 1          SAFEARRAY[unsigned char]
VARCHAR(n) 1       SAFEARRAY[unsigned char]
LONG VARCHAR 1     SAFEARRAY[unsigned char]
BLOB(n)            SAFEARRAY[unsigned char]

BSTR values are length-prefixed strings, as described in the OLE
Automation Programmer's Reference.

Note:
1. With FOR BIT DATA specified
Data passed between DB2 and OLE automation UDFs is passed as call by
reference. SQL types such as BIGINT, DECIMAL, or LOCATORS, or OLE
automation types such as Boolean or CURRENCY that are not listed in the
table are not supported. Character and graphic data mapped to BSTR is
converted from the database code page to the UCS-2 (also known as Unicode,
IBM code page 13488) scheme. Upon return, the data is converted back to the
database code page. These conversions occur regardless of the database code
page. If code page conversion tables to convert from the database code page
to UCS-2 and from UCS-2 to the database code page are not installed, you
receive an SQLCODE -332 (SQLSTATE 57017).
Table 17. Mapping of SQL and OLE Automation Datatypes to BASIC and C++ Datatypes

SQL Type                                            OLE Automation Type       BASIC Type   C++ Type
SMALLINT                                            short                     Integer      short
INTEGER                                             long                      Long         long
REAL                                                float                     Single       float
FLOAT or DOUBLE                                     double                    Double       double
DATE, TIME, TIMESTAMP                               DATE                      Date         DATE
CHAR(n), VARCHAR(n), LONG VARCHAR, CLOB(n)          BSTR                      String       BSTR
GRAPHIC(n), VARGRAPHIC(n), LONG GRAPHIC, DBCLOB(n)  BSTR                      String       BSTR
CHAR(n)1, VARCHAR(n)1, LONG VARCHAR1, BLOB(n)       SAFEARRAY[unsigned char]  Byte()       SAFEARRAY

Note:
1. With FOR BIT DATA specified
You can find an example of an OLE table automation in Example: Mail OLE
Automation Table Function in BASIC on page 478.
OLE Automation UDFs in C++
Table 17 on page 428 shows the C++ data types that correspond to the SQL
data types and how they map to OLE automation types.
The C++ declaration of the increment OLE automation UDF is as follows:
STDMETHODIMP Ccounter::increment (long      *output,
                                  short     *indicator,
                                  BSTR      *sqlstate,
                                  BSTR      *fname,
                                  BSTR      *fspecname,
                                  BSTR      *sqlmsg,
                                  SAFEARRAY **scratchpad,
                                  long      *calltype );
OLE supports type libraries that describe the properties and methods of OLE
automation objects. Exposed objects, properties, and methods are described in
the Object Description Language (ODL). The ODL description of the above
C++ method is as follows:
HRESULT increment ([out]    long *output,
                   [out]    short *indicator,
                   [out]    BSTR *sqlstate,
                   [in]     BSTR *fname,
                   [in]     BSTR *fspecname,
                   [out]    BSTR *sqlmsg,
                   [in,out] SAFEARRAY (unsigned char) *scratchpad,
                   [in]     long *calltype);
Scalar functions contain one output parameter and output indicator, whereas
table functions contain multiple output parameters and output indicators
corresponding to the RETURN columns of the CREATE FUNCTION
statement.
OLE automation defines the BSTR data type to handle strings. BSTR is
defined as a pointer to OLECHAR: typedef OLECHAR *BSTR. For allocating
and freeing BSTRs, OLE imposes the rule that the callee frees a BSTR passed
in as a by-reference parameter before assigning the parameter a new value.
This rule means the following for DB2 and OLE automation UDFs. The same
rule applies to one-dimensional byte arrays, which are received by the callee
as SAFEARRAY**:
v [in] parameters: DB2 allocates and frees [in] parameters.
v [out] parameters: DB2 passes in a pointer to NULL. The [out] parameter
must be allocated by the callee and is freed by DB2.
v [in,out] parameters: DB2 initially allocates [in,out] parameters. They can be
freed and re-allocated by the callee. As is true for [out] parameters, DB2
frees the final returned parameter.
All other parameters are passed as pointers. DB2 allocates and manages the
referenced memory.
OLE automation provides a set of data manipulation functions for dealing
with BSTRs and SAFEARRAYs. The data manipulation functions are described
in the OLE Automation Programmer's Reference.
The following C++ UDF returns the first 5 characters of a CLOB input
parameter:
// UDF DDL: CREATE FUNCTION crunch (clob(5k)) RETURNS char(5)
STDMETHODIMP Cobj::crunch (BSTR *in,          // CLOB(5K)
                           BSTR *out,         // CHAR(5)
                           short *indicator1, // input indicator
                           short *indicator2, // output indicator
                           BSTR *sqlstate,    // pointer to NULL
                           BSTR *fname,       // pointer to function name
                           BSTR *fspecname,   // pointer to specific name
                           BSTR *msgtext)     // pointer to NULL
{
   // Allocate BSTR of 5 characters
   // and copy 5 characters of input parameter.
   // out is an [out] parameter of type BSTR, that is,
   // it is a pointer to NULL and the memory does not have to be freed.
   // DB2 will free the allocated BSTR.
   *out = SysAllocStringLen(*in, 5);
   return NOERROR;
};
the OLE DB provider and the relevant rowset as a data source. You do not
have to do any UDF programming to take advantage of OLE DB table
functions.
To use OLE DB table functions with DB2 Universal Database, you must install
OLE DB 2.0 or later, available from Microsoft at http://www.microsoft.com. If
you attempt to invoke an OLE DB table function without first installing OLE
DB, DB2 issues SQLCODE -465 (SQLSTATE 58032), reason code 35. For the
system requirements and OLE DB providers available for your data sources,
refer to your data source documentation. For a list of samples that define and
use OLE DB table functions, see Appendix B. Sample Programs on page 743.
For the OLE DB specification, see the Microsoft OLE DB 2.0 Programmer's
Reference and Data Access SDK, Microsoft Press, 1998.
'server!rowset'
'!rowset!connectstring'
where:
server
identifies a server registered with the CREATE SERVER statement
rowset
identifies a rowset, or table, exposed by the OLE DB provider; this
value should be empty if the table function has an input parameter to
pass through command text to the OLE DB provider.
connectstring
contains initialization properties needed to connect to an OLE DB
provider. For the complete syntax and semantics of the connection
string, see the Data Link API of the OLE DB Core Components in
the Microsoft OLE DB 2.0 Programmer's Reference and Data Access SDK,
Microsoft Press, 1998.
You can use a connection string in the EXTERNAL NAME clause of a CREATE
FUNCTION statement, or specify the CONNECTSTRING option in a CREATE
SERVER statement.
For example, you can define an OLE DB table function and return a table
from a Microsoft Access database with the following CREATE FUNCTION
and SELECT statements:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME '!orders!Provider=Microsoft.Jet.OLEDB.3.51;
Data Source=c:\msdasdk\bin\oledb\nwind.mdb';
SELECT orderid, DATE(orderdate) AS orderdate,
DATE(shippeddate) AS shippeddate
FROM TABLE(orders()) AS t
WHERE orderid = 10248;
Instead of putting the connection string in the EXTERNAL NAME clause, you
can create and use a server name. For example, assuming you have defined
the server Nwind as described in Defining a Server Name for an OLE DB
Provider on page 435, you could use the following CREATE FUNCTION
statement:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME 'Nwind!orders';
OLE DB table functions also allow you to specify one input parameter of any
character string data type. Use the input parameter to pass command text
directly to the OLE DB provider. If you define an input parameter, do not
provide a rowset name in the EXTERNAL NAME clause. DB2 passes the
command text to the OLE DB provider for execution and the OLE DB
provider returns a rowset to DB2. Column names and data types of the
resulting rowset need to be compatible with the RETURNS TABLE definition
in the CREATE FUNCTION statement. Since binding to the column names of
the rowset is based on matching column names, you must ensure that you
name the columns properly.
The following example registers an OLE DB table function, which retrieves
store information from a Microsoft SQL Server 7.0 database. The connection
string is provided in the EXTERNAL NAME clause. Since the table function
has an input parameter to pass through command text to the OLE DB
provider, the rowset name is not specified in the EXTERNAL NAME clause.
The query example passes in a SQL command text which retrieves
information about the top three stores from a SQL Server database.
CREATE FUNCTION favorites (varchar(600))
RETURNS TABLE (store_id char (4), name varchar (41), sales integer)
SPECIFIC favorites
LANGUAGE OLEDB
EXTERNAL NAME '!!Provider=SQLOLEDB.1;Persist Security Info=False;
User ID=sa;Initial Catalog=pubs;Data Source=WALTZ;
Locale Identifier=1033;Use Procedure for Prepare=1;
Auto Translate=False;Packet Size=4096;Workstation ID=WALTZ;
OLE DB Services=CLIENTCURSOR;';
SELECT *
FROM TABLE (favorites
   (' select top 3 sales.stor_id as store_id, ' ||
    ' stores.stor_name as name, '              ||
    ' sum(sales.qty) as sales '                ||
    ' from sales, stores '                     ||
    ' where sales.stor_id = stores.stor_id '   ||
    ' group by sales.stor_id, stores.stor_name ' ||
    ' order by sum(sales.qty) desc ')) as f;
You can then use the server name Nwind to identify the OLE DB provider in a
CREATE FUNCTION statement, for example:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME 'Nwind!orders';
For the complete syntax of the CREATE SERVER statement, refer to the SQL
Reference. For information on user mappings for OLE DB providers, see
Defining a User Mapping.
To provide the equivalent access to all of the DB2 users that call the OLE DB
table function orders, use the following CONNECTSTRING either in a
CREATE FUNCTION or CREATE SERVER statement:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME '!orders!Provider=Microsoft.Jet.OLEDB.3.51;User ID=dave;
Password=mypwd;Data Source=c:\msdasdk\bin\oledb\nwind.mdb';
For mappings of OLE DB provider source data types to OLE DB data types,
refer to the OLE DB provider documentation. For examples of how the ANSI
SQL, Microsoft Access, and Microsoft SQL Server providers might map their
respective data types to OLE DB data types, refer to the Microsoft OLE DB 2.0
Programmer's Reference and Data Access SDK, Microsoft Press, 1998.
DB2 Data Type               OLE DB Data Type
SMALLINT                    DBTYPE_I2
INTEGER                     DBTYPE_I4
BIGINT                      DBTYPE_I8
REAL                        DBTYPE_R4
FLOAT/DOUBLE                DBTYPE_R8
DEC (p, s)                  DBTYPE_NUMERIC (p, s)
DATE                        DBTYPE_DBDATE
TIME                        DBTYPE_DBTIME
TIMESTAMP                   DBTYPE_DBTIMESTAMP
CHAR(N)                     DBTYPE_STR
VARCHAR(N)                  DBTYPE_STR
LONG VARCHAR                DBTYPE_STR
CLOB(N)                     DBTYPE_STR
CHAR(N) FOR BIT DATA        DBTYPE_BYTES
VARCHAR(N) FOR BIT DATA     DBTYPE_BYTES
LONG VARCHAR FOR BIT DATA   DBTYPE_BYTES
BLOB(N)                     DBTYPE_BYTES
GRAPHIC(N)                  DBTYPE_WSTR
VARGRAPHIC(N)               DBTYPE_WSTR
LONG GRAPHIC                DBTYPE_WSTR
DBCLOB(N)                   DBTYPE_WSTR
Note: OLE DB data type conversion rules are defined in the Microsoft OLE DB
2.0 Programmer's Reference and Data Access SDK, Microsoft Press, 1998.
For example:
v To retrieve the OLE DB data type DBTYPE_CY, the data may get
converted to OLE DB data type DBTYPE_NUMERIC(19,4) which
maps to DB2 data type DEC(19,4).
v To retrieve the OLE DB data type DBTYPE_I1, the data may get
converted to OLE DB data type DBTYPE_I2 which maps to DB2 data
type SMALLINT.
v To retrieve the OLE DB data type DBTYPE_GUID, the data may get
converted to OLE DB data type DBTYPE_BYTES which maps to DB2
data type CHAR(12) FOR BIT DATA.
Scratchpad Considerations
The factors that influence whether your UDF should use a scratchpad are
important enough to warrant this special section. Other coding considerations
are discussed in Other Coding Considerations on page 448.
It is important that you code UDFs to be re-entrant, primarily because many
references to the UDF may use the same copy of the function body; these
references may even occur in different statements or applications. However, a
function may need or want to save state from one invocation to the next. Two
categories of these functions are:
1. Functions that, to be correct, depend on saving state.
An example of such a function is a simple counter function which returns a
'1' the first time it is called, and increments the result by one each
successive call. Such a function could be used to number the rows of a
SELECT result:
SELECT counter(), a, b+c, ...
FROM tablex
WHERE ...
2. Functions that, for performance, benefit from saving state.
An example is a match function that takes a text string as its first
argument and a document ID as its second, and returns an indication of
whether the document contains the string. A query using match returns
all the documents containing the particular text string value
represented by the first argument. What match would like to
do is:
v First time only.
Retrieve a list of all the document IDs which contain the string
myocardial infarction from the document application which is
maintained outside of DB2. This retrieval is a costly process, so the
function would like to do it only one time, and save the list somewhere
handy for subsequent calls.
v On each call.
Use the list of document IDs saved during this first call to see if the
document ID which is passed as the second argument is contained in
the list.
This particular match function is DETERMINISTIC (or NOT VARIANT). Its
answer only depends on its input argument values. What is shown here is
a function whose performance, not correctness, depends on the ability to
save information from one call to the next.
Both of these needs are met by the ability to specify a SCRATCHPAD in the
CREATE FUNCTION statement:
CREATE FUNCTION counter()
RETURNS int ... SCRATCHPAD;
CREATE FUNCTION match(varchar(200), char(15))
RETURNS char(1) ... SCRATCHPAD;
Note that for UDFs that use a scratchpad and are referenced in a subquery,
DB2 may decide to make a final call (if the UDF is so specified) and refresh
the scratchpad between invocations of the subquery. You can protect yourself
against this possibility, if your UDFs are ever used in subqueries, by defining
the UDF with FINAL CALL and using the call-type argument, or by always
checking for the binary zero condition.
If you do specify FINAL CALL, please note that your UDF receives a call of
type FIRST. This could be used to acquire and initialize some persistent
resource.
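The "always checking for the binary zero condition" defence can be sketched in plain C. This is an illustrative model, not sqludf code: the struct and function names here are hypothetical stand-ins for a real scratchpad layout.

```c
#include <string.h>

/* Hypothetical scratchpad layout: DB2 zeroes this memory before the
 * first call, and may re-zero it between subquery invocations. */
struct pad {
    int  initialized;   /* still 0  =>  pad was (re)initialized by DB2 */
    long counter;       /* state saved across calls */
};

/* One UDF invocation: rebuild state whenever the pad shows binary zero. */
long next_value(struct pad *scratch) {
    if (!scratch->initialized) {
        scratch->counter = 0;       /* rebuild any saved state here */
        scratch->initialized = 1;
    }
    return ++scratch->counter;
}
```

Because DB2 zeroes the scratchpad before the first call, a flag that is still zero reliably signals that any saved state is gone and must be rebuilt.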
extern int sqludf_substr(
  sqludf_locator* udfloc_p,      /* in:  locator of LOB value          */
  sqlint32        start,         /* in:  starting position             */
  sqlint32        length,        /* in:  length to extract             */
  unsigned char*  buffer_p,      /* in:  buffer for the bytes          */
  sqlint32*       return_len_p   /* out: number of bytes moved         */
);
extern int sqludf_append(
  sqludf_locator* udfloc_p,      /* in:  locator of LOB value          */
  unsigned char*  buffer_p,      /* in:  data to append                */
  sqlint32        length,        /* in:  length of data to append      */
  sqlint32*       return_len_p   /* out: number of bytes appended      */
);
extern int sqludf_free_locator(
  sqludf_locator* loc_p          /* in:  locator to free               */
);
The following is a discussion of how these APIs operate. Note that all lengths
are in bytes, regardless of the data type, and not in single or double-byte
characters.
Return codes. Interpret the return code passed back to the UDF by DB2 for
each API as follows:
0      Success.
-1     Locator passed to the API was freed by sqludf_free_locator() prior
       to making the call.
-2     Call was attempted in FENCED mode UDF.
-3     Bad input value was provided to the API. For examples of bad input
       values specific to each API, see its description below.
other  Invalid locator or other error (for example, memory error). The value
       returned in these cases is the SQLCODE corresponding to the error
       condition; for example, -423 means invalid locator. Before returning
       to the UDF with one of these other codes, DB2 judges the severity of
       the error. For severe errors, DB2 remembers that the error occurred,
       and when the UDF returns to DB2, regardless of whether the UDF
       returns an error SQLSTATE, DB2 takes the action appropriate for the
       error condition. For non-severe errors, DB2 forgets that the error
       occurred, and leaves it to the UDF to decide whether it can take
       corrective action or return an error SQLSTATE to DB2.
v sqludf_length().
Given a LOB locator, it returns the length of the LOB value represented by
the locator. The locator in question is generally a locator passed to the UDF
by DB2, but could be a locator representing a result value being built (using
sqludf_append()) by the UDF.
Typically, a UDF uses this API when it wants to find out the length of a
LOB value when it receives a locator.
A return code of -3 may indicate:
- udfloc_p (address of locator) is zero
- return_len_p (address of where to put length) is zero
v sqludf_substr()
Given a LOB locator, a beginning position within the LOB, a desired length,
and a pointer to a buffer, this API places the bytes into the buffer and
returns the number of bytes it was able to move. (Obviously the UDF must
provide a buffer large enough for the desired length.) The number of bytes
moved could be shorter than the desired length, for example if you request
50 bytes beginning at position 101 and the LOB value is only 120 bytes
long, the API will move only 20 bytes.
Typically, this is the API that a UDF uses when it wants to see the bytes of
the LOB value, when it receives a locator.
A return code of -3 may indicate:
- udfloc_p (address of locator) is zero
- start is less than 1
- length is negative
- buffer_p (buffer address) is zero
- return_len_p (address of where to put length) is zero
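The partial-read arithmetic can be made explicit. The helper below is a hedged model of the length sqludf_substr() reports, not code from the API itself; it assumes the 1-based positions used in the example above:

```c
/* Number of bytes a substring request can actually move, given the LOB
 * length, a 1-based start position, and the desired length. */
unsigned long substr_moved(unsigned long lob_len,
                           unsigned long start,
                           unsigned long want) {
    if (start < 1 || start > lob_len)
        return 0;                        /* nothing available there      */
    unsigned long avail = lob_len - start + 1;
    return want < avail ? want : avail;  /* may be shorter than desired  */
}
```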
v sqludf_append()
Given a LOB locator, a pointer to a data buffer which has data in it, and a
length of data to append, this API appends the data to the end of the LOB
value, and returns the length of the bytes appended. (Note that the length
appended is always equal to the length given to append. If the entire length
cannot be appended, the call to sqludf_append() fails with the return code
of other.)
Typically, this is the API that a UDF uses when the result is defined with
AS LOCATOR, and the UDF is building the result value one append at a
time after creating the locator using sqludf_create_locator(). After
finishing the build process in this case, the UDF moves the locator to where
the result argument points.
Note that you can also append to your input locators using this API, which
might be useful from the standpoint of maximum flexibility to manipulate
your values within the UDF, but this will not have any effect on any LOB
values in the SQL statement, or stored in the database.
This API can be used to build very large LOB values in a piecemeal
manner. In cases where a large number of appends is used to build a result,
the performance of this task can be improved by:
- allocating a large application control heap (the APP_CTL_HEAP_SZ
  database configuration parameter)
- doing fewer appends of larger buffers; for example, instead of doing 20
  appends of 50 bytes each, doing a single 1000-byte append
SQL applications that build many large LOB values via the
sqludf_append() API may encounter errors caused by limitations on the
amount of available disk space. The chance of these errors can be
reduced by:
- using larger buffers for the individual appends
- doing frequent COMMITs between statements
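The "fewer, larger appends" advice can be sketched as a local buffering layer. flush() below is only a stand-in for sqludf_append(), which needs a live locator inside a real NOT FENCED UDF; the struct and function names are hypothetical.

```c
#include <string.h>

#define CHUNK 1000

struct lob_builder {
    char   buf[CHUNK];
    size_t used;
    int    appends;     /* how many large appends were issued */
    size_t total;       /* total bytes appended so far        */
};

/* Stand-in for sqludf_append(): in a real UDF this would append
 * buf[0..used) to the result LOB locator. */
static void flush(struct lob_builder *b) {
    if (b->used == 0) return;
    b->appends++;
    b->total += b->used;
    b->used = 0;
}

/* Accumulate small pieces locally; issue one large append per CHUNK. */
static void add_piece(struct lob_builder *b, const char *p, size_t len) {
    while (len > 0) {
        size_t room = CHUNK - b->used;
        size_t n = len < room ? len : room;
        memcpy(b->buf + b->used, p, n);
        b->used += n;
        p += n;
        len -= n;
        if (b->used == CHUNK) flush(b);
    }
}
```

With this layer, the 20 appends of 50 bytes from the text collapse into a single 1000-byte append.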
item. It is also valid for such a table function to return the same locator as
an output for several table function columns.
3. A LOB locator passed to a table function as an input argument remains
alive for the entire duration of the row generation process. In fact, the
table function can append to a LOB using such a LOB locator while
generating one row, and see the appended bytes on a subsequent row.
4. The internal control mechanisms used to represent a LOB which originated
in DB2 as a LOB locator output from a UDF (table or scalar function), take
1950 bytes. For this reason, and because there are limitations on the size of
a row which is input to a sort, a query which attempts to sort multiple
such LOBs which originated as UDF LOB locators will be limited to (at
most) two such values per row, depending on the sizes of the other
columns involved. The same limitation applies to rows being inserted into
a table.
2. The values of all environment variables beginning with 'DB2' are captured
at the time the database manager is started with db2start, and are
available in all UDFs whether or not they are FENCED. The only
exception is the DB2CKPTR environment variable. Note that the environment
variables are captured; any changes to the environment variables after
db2start is issued are not available to the UDFs.
3. With respect to LOBs passed to an external UDF, you are limited to the
maximum size specified by the UDF Shared Memory Size DB2 system
configuration parameter. The maximum that you can specify for this
parameter is 256M. The default setting on DB2 is 1M. For more
information on this parameter, refer to the Administration Guide.
4. Input to, and output from, the screen and keyboard is not recommended.
In the process model of DB2, UDFs run in the background, so you cannot
write to the screen. However, you can write to a file.
Note: DB2 does not attempt to synchronize any external input/output
performed by a UDF with DB2's own transactions. For example,
if a UDF writes to a file during a transaction, and that transaction is
later backed out for some reason, no attempt is made to discover or
undo the writes to the file.
5. On UNIX-based systems, your UDF runs under the user ID of the DB2
Agent Process (NOT FENCED), or the user ID which owns the db2udf
executable (FENCED). This user ID controls the system resources available
to the UDF. For information on the db2udf executable, refer to the Quick
Beginnings for your platform.
6. When using protected resources, (that is, resources that only allow one
process access at a time) inside UDFs, you should try to avoid deadlocks
between UDFs. If two or more UDFs deadlock, DB2 will not be able to
detect the condition.
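One common way to honour this rule is to impose a single global ordering on the protected resources. The sketch below models the discipline with hypothetical resource handles; in a real UDF the acquire would be a blocking lock on the shared resource.

```c
/* Hypothetical single-access resource; id defines a global lock order. */
typedef struct { int id; } resource;

static int acquired[2];   /* records the order locks were taken (test aid) */
static int n_acquired;

static void acquire(resource *r) {      /* stand-in for a blocking lock */
    acquired[n_acquired++] = r->id;
}

/* Acquire two single-access resources in a fixed (ascending id) order,
 * regardless of the order the caller names them. If every UDF follows
 * the same global order, a circular wait -- which DB2 cannot detect
 * inside UDFs -- can never form. */
void acquire_both(resource *r1, resource *r2) {
    if (r1->id > r2->id) { resource *t = r1; r1 = r2; r2 = t; }
    acquire(r1);
    acquire(r2);
}
```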
7. Character data is passed to external functions in the code page of the
database. Likewise, a character string that is output from the function is
assumed by the database to use the database's code page. Where the
application code page differs from the database code page, code page
conversions occur as they would for other values in the SQL
statement. You can prevent this conversion by coding FOR BIT DATA as
an attribute of the character parameter or result in your CREATE
FUNCTION statement. If the character parameter is not defined with the
FOR BIT DATA attribute, your UDF code receives its arguments in the
database code page.
Note that, using the DBINFO option on CREATE FUNCTION, the
database code page is passed to the UDF. Using this information, a UDF
which is sensitive to the code page can be written to operate in many
different code pages.
8. When writing a UDF using C++, you may want to consider declaring the
function name as:
extern "C" void SQL_API_FN udf( ...arguments... )
The extern "C" prevents type decoration (or mangling) of the function
name by the C++ compiler. Without this declaration, you have to include
all the type decoration for the function name when you issue the CREATE
FUNCTION statement.
After populating the table, issue the following statement using CLP to display
its contents:
SELECT INT1, INT2, PART, SUBSTR(DESCR,1,50) FROM TEST
Note the use of the SUBSTR function on the CLOB column to make the
output more readable. You receive the following CLP output:
INT1        INT2        PART  4
----------- ----------- ----- --------------------------------------------------
         16           1 brain The only part of the body capable of forgetting.
          8           2 heart The seat of the emotions?
          4           4 elbow That bendy place in mid-arm.
          2           0 -     -
         97          16 xxxxx Unknown.

  5 record(s) selected.
Refer to the previous information on table TEST as you read the examples and
scenarios which follow.
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sqludf.h>
#include <sqlca.h>
#include <sqlda.h>
/*************************************************************************
 * function divid: performs integer divide, but unlike the / operator
 *                 shipped with the product, gives NULL when the
 *                 denominator is zero.
 *
 * This function does not use the constructs defined in the
 * "sqludf.h" header file.
 *
 * inputs:  INTEGER num    numerator
 *          INTEGER denom  denominator
 * output:  INTEGER out    answer
 *************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN divid (
  sqlint32 *num,        /* numerator */
  sqlint32 *denom,      /* denominator */
  sqlint32 *out,        /* output result */
  short *in1null,       /* input 1 NULL indicator */
  short *in2null,       /* input 2 NULL indicator */
  short *outnull,       /* output NULL indicator */
  char *sqlstate,       /* SQL STATE */
  char *funcname,       /* function name */
  char *specname,       /* specific function name */
  char *mesgtext) {     /* message text insert */

  if (*denom == 0) {
    /* if denominator is zero, return null result */
    *outnull = -1;
  } else {
    /* else, compute the answer */
    *out = *num / *denom;
    *outnull = 0;
  } /* endif */
} /* end of UDF : divid */
v It does not check for null input arguments, because the NOT NULL CALL
parameter is specified by default in the CREATE FUNCTION statement
shown below.
Here is the CREATE FUNCTION statement for this UDF:
CREATE FUNCTION MATH."/"(INT,INT)
RETURNS INT
NOT FENCED
DETERMINISTIC
NO SQL
NO EXTERNAL ACTION
LANGUAGE C
PARAMETER STYLE DB2SQL
EXTERNAL NAME '/u/slick/udfx/div' ;
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
For this statement, observe that:
v It is defined to be in the MATH schema. In order to define a UDF in a
schema that is not equal to your user-ID, you need DBADM authority on
the database.
v The function name is defined to be "/", the same name as the SQL divide
operator. In fact, this UDF can be invoked the same as the built-in /
operator, using either infix notation, for example, A / B, or functional
notation, for example, "/"(A,B). See below.
v You have chosen to define it as NOT FENCED because you are absolutely
sure that the program has no errors.
v You have used the default NOT NULL CALL, by which DB2 provides a
NULL result if either argument is NULL, without invoking the body of the
function.
Now if you run the following pair of statements (CLP input is shown):
SET CURRENT FUNCTION PATH = SYSIBM, SYSFUN, SLICK
SELECT INT1, INT2, INT1/INT2, "/"(INT1,INT2) FROM TEST
You get this output from CLP (if you do not enable friendly arithmetic with the
database configuration parameter DFT_SQLMATHWARN):
INT1        INT2        3           4
----------- ----------- ----------- -----------
         16           1          16          16
          8           2           4           4
          4           4           1           1
SQL0802N  Arithmetic overflow or other arithmetic exception occurred.
SQLSTATE=22003
The SQL0802N error message occurs because you have set your CURRENT
FUNCTION PATH special register to a concatenation of schemas that does
not include MATH, the schema in which the "/" UDF is defined. You are
therefore executing DB2's built-in divide operator, whose defined
behavior is to give an error when a divide-by-zero condition occurs. The
fourth row in the TEST table provides this condition.
However, if you change the function path, putting MATH in front of SYSIBM
in the path, and rerun the SELECT statement:
SET CURRENT FUNCTION PATH = MATH, SYSIBM, SYSFUN, SLICK
SELECT INT1, INT2, INT1/INT2, "/"(INT1,INT2) FROM TEST
You then get the desired behavior, as shown by the following CLP output:
INT1        INT2        3           4
----------- ----------- ----------- -----------
         16           1          16          16
          8           2           4           4
          4           4           1           1
          2           0           -           -
         97          16           6           6

  5 record(s) selected.
additional UDFs, further overloading the "/" operator, which define first
and second parameters that are SMALLINT. These additional UDFs can be
sourced on MATH."/".
In this case, for a fully general set of functions you have to CREATE the
following three additional functions to completely handle integer divide:
CREATE FUNCTION MATH."/"(SMALLINT,SMALLINT)
RETURNS INT
SOURCE MATH."/"(INT,INT)
CREATE FUNCTION MATH."/"(SMALLINT,INT)
RETURNS INT
SOURCE MATH."/"(INT,INT)
CREATE FUNCTION MATH."/"(INT,SMALLINT)
RETURNS INT
SOURCE MATH."/"(INT,INT)
Even though three UDFs are added, additional code does not have to be
written as they are sourced on MATH."/".
And now, with the definition of these four "/" functions, any users who
want to take advantage of the new behavior on integer divide need only
place MATH ahead of SYSIBM in their function path, and can write their
SQL as usual.
While the preceding example does not consider the BIGINT data type, you
can easily extend the example to include BIGINT.
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sqludf.h>
#include <sqlca.h>
#include <sqlda.h>
#include "util.h"
/*************************************************************************
 * function fold: input string is folded at the point indicated by the
 *                second argument.
 *
 * input:  CLOB    in1   input string
 *         INTEGER in2   position to fold on
 * output: CLOB    out   folded string
 *************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN fold (
  SQLUDF_CLOB    *in1,      /* input CLOB to fold */
  SQLUDF_INTEGER *in2,      /* position to fold on */
  SQLUDF_CLOB    *out,      /* output CLOB, folded */
  SQLUDF_NULLIND *in1null,  /* input 1 NULL indicator */
  SQLUDF_NULLIND *in2null,  /* input 2 NULL indicator */
  SQLUDF_NULLIND *outnull,  /* output NULL indicator */
  SQLUDF_TRAIL_ARGS) {      /* trailing arguments */

  SQLUDF_INTEGER len1;

  if (SQLUDF_NULL(in1null) || SQLUDF_NULL(in2null)) {
    /* one of the arguments is NULL. The result is then "INVALID INPUT" */
    strcpy( ( char * ) out->data, "INVALID INPUT" ) ;
    out->length = strlen("INVALID INPUT");
  } else {
    len1 = in1->length;      /* length of the CLOB */
    /* build the output by folding at position "in2" */
    strncpy( ( char * ) out->data, &in1->data[*in2], len1 - *in2 ) ;
    strncpy( ( char * ) &out->data[len1 - *in2], in1->data, *in2 ) ;
    out->length = in1->length;
  } /* endif */
  *outnull = 0;              /* result is always non-NULL */
} /* end of UDF : fold */
/*************************************************************************
 * function findvwl: returns the position of the first vowel.
 *                   returns an error if no vowel is found.
 *                   when the function is created, it must be defined
 *                   as NOT NULL CALL.
 *
 * inputs: VARCHAR(500) in
 * output: INTEGER      out
 *************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN findvwl (
  SQLUDF_VARCHAR  *in,       /* input character string */
  SQLUDF_SMALLINT *out,      /* output location of vowel */
  SQLUDF_NULLIND  *innull,   /* input NULL indicator */
  SQLUDF_NULLIND  *outnull,  /* output NULL indicator */
  SQLUDF_TRAIL_ARGS) {       /* trailing arguments */

  short i;

  /* scan the input until a vowel is found */
  for (i=0; (i < (short)strlen(in) &&
             in[i] != 'a' && in[i] != 'e' &&
             in[i] != 'i' && in[i] != 'o' &&
             in[i] != 'u'); i++);
  if (i == (short)strlen(in)) {
    /* no vowel found: return the error SQLSTATE and message token */
    strcpy(SQLUDF_STATE, "38999");
    strcpy(SQLUDF_MSGTX, "findvwl: No Vowel");
  } else {
    /* vowel found: return its position (1-based) */
    *out = i + 1;
    *outnull = 0;
  } /* endif */
} /* end of UDF : findvwl */
2
------------------------------
ly part of the body capable of
at of the emotions?The se
endy place in mid-arm.That b
INVALID INPUT
n.Unknow
Note the use of the SUBSTR built-in function to make the selected CLOB
values display more nicely. It shows how the output is folded (best seen in
the second, third and fifth rows, which have a shorter CLOB value than the
first row, and thus the folding is more evident even with the use of SUBSTR).
And it shows (fourth row) how the INVALID INPUT string is returned by the
FOLD UDF when its input text string (column DESCR) is null. This SELECT
also shows simple nesting of function references; the reference to FOLD is
within an argument of the SUBSTR function reference.
Then if you run the following statement:
SELECT PART, FINDV(PART) FROM TEST
PART  2
----- -----------
brain           3
heart           2
elbow           1
-               -
SQL0443N  User defined function "SLICK.FINDV" (specific name
"SQL950424135144750") has returned an error SQLSTATE with diagnostic
text "findvwl: No Vowel".  SQLSTATE=38999
This example shows how the 38999 SQLSTATE value and error message token
returned by findvwl() are handled: message SQL0443N returns this
information to the user. The PART column in the fifth row contains no vowel,
and this is the condition which triggers the error in the UDF.
Observe the argument promotion in this example. The PART column is
CHAR(5), and is promoted to VARCHAR to be passed to FINDV.
And finally note how DB2 has generated a null output from FINDV for the
fourth row, as a result of the NOT NULL CALL specification in the CREATE
statement for FINDV.
The following statement:
SELECT SUBSTR(DESCR,1,25), FINDV(CAST (DESCR AS VARCHAR(60) ) )
FROM TEST
Example: Counter
Suppose you want to simply number the rows in your SELECT statement. So
you write a UDF which increments and returns a counter. This UDF uses a
scratchpad:
Chapter 15. Writing User-Defined Functions (UDFs) and Methods
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sqludf.h>
#include <sqlca.h>
#include <sqlda.h>
/* structure scr defines the passed scratchpad for the function "ctr" */
struct scr {
sqlint32 len;
sqlint32 countr;
char not_used[96];
} ;
/*************************************************************************
 * function ctr: increments and reports the value from the scratchpad.
 *
 * This function does not use the constructs defined in the
 * "sqludf.h" header file.
 *
 * input:  NONE
 * output: INTEGER out   the value from the scratchpad
 *************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN ctr (
  sqlint32 *out,              /* output answer (counter) */
  short *outnull,             /* output NULL indicator */
  char *sqlstate,             /* SQL STATE */
  char *funcname,             /* function name */
  char *specname,             /* specific function name */
  char *mesgtext,             /* message text insert */
  struct scr *scratchptr) {   /* scratch pad */

  *out = ++scratchptr->countr;
  *outnull = 0;
} /* end of UDF : ctr */
NOT DETERMINISTIC
NO SQL
NO EXTERNAL ACTION
LANGUAGE C
PARAMETER STYLE DB2SQL
EXTERNAL NAME 'udf!ctr'
DISALLOW PARALLELISM;
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
Referring to this statement, observe that:
v No input parameters are defined. This agrees with the code.
v SCRATCHPAD is coded, causing DB2 to allocate, properly initialize and
pass the scratchpad argument.
v You have chosen to define it as NOT FENCED because you are absolutely
sure that it is error free.
v You have specified it to be NOT DETERMINISTIC, because it depends on
more than the SQL input arguments, (none in this case).
v You have correctly specified DISALLOW PARALLELISM, because correct
functioning of the UDF depends on a single scratchpad.
Now you can successfully run the following statement:
SELECT INT1, COUNTER(), INT1/COUNTER() FROM TEST
Observe that the second column shows the straight COUNTER() output. The
third column shows that the two separate references to COUNTER() in the
SELECT statement each get their own scratchpad; had they not each gotten
their own, the output in the second column would have been 1 3 5 7 9,
instead of the nice orderly 1 2 3 4 5.
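The effect of separate scratchpads can be modelled in a few lines of C (a simulation, not UDF code): with a shared pad, each row's two COUNTER() references bump the same counter; with separate pads, the second column counts 1 2 3 4 5.

```c
/* Simulate "SELECT COUNTER(), x/COUNTER() ..." over some rows: col2
 * records what the first COUNTER() reference returns on each row. */
void simulate(int rows, int shared_pad, int col2[]) {
    int pad_a = 0, pad_b = 0;
    for (int r = 0; r < rows; r++) {
        int *first  = &pad_a;                       /* 1st reference's pad */
        int *second = shared_pad ? &pad_a : &pad_b; /* 2nd reference's pad */
        col2[r] = ++(*first);                       /* column 2            */
        ++(*second);                                /* divisor's COUNTER() */
    }
}
```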
included in the example program, but could be read in from an external file,
as indicated in the comments contained in the example program. The data
includes the name of a city followed by its weather information. This pattern
is repeated for the other cities. Note that there is a client application
(tblcli.sqc) supplied with DB2 that calls this table function and prints out
the weather data retrieved using the tfweather_u table function.
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sql.h>
#include <sqludf.h> /* for use in compiling User Defined Function */

#define SQL_NOTNULL  0
#define SQL_ISNULL  -1

/* (The declarations of the fld_desc structure, the fields[] array
 * describing the city, temp_in_f, humidity, wind, wind_velocity,
 * barometer, and forecast columns, and the weather_data[] text array
 * are elided here; see the sample source shipped with DB2.) */
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Clean all field data and field null indicator data */
int clean_fields( int field_pos ) {
while ( fields[field_pos].fld_length != 0 ) {
memset( fields[field_pos].fld_field, '\0', 31 ) ;
fields[field_pos].fld_ind = SQL_ISNULL ;
field_pos++ ;
}
return( 0 ) ;
}
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Fills all field data and field null indicator data ... */
/* ... from text weather data */
int get_value( char * value, int field_pos ) {
fld_desc * field ;
char field_buf[31] ;
double * double_ptr ;
int * int_ptr, buf_pos ;
while ( fields[field_pos].fld_length != 0 ) {
field = &fields[field_pos] ;
memset( field_buf, '\0', 31 ) ;
memcpy( field_buf,
( value + field->fld_offset ),
field->fld_length ) ;
buf_pos = field->fld_length ;
while ( ( buf_pos > 0 ) &&
( field_buf[buf_pos] == ' ' ) )
field_buf[buf_pos--] = '\0' ;
buf_pos = 0 ;
while ( ( buf_pos < field->fld_length ) &&
( field_buf[buf_pos] == ' ' ) )
buf_pos++ ;
if ( strlen( ( char * ) ( field_buf + buf_pos ) ) > 0 &&
strcmp( ( char * ) ( field_buf + buf_pos ), "n/a") != 0 ) {
field->fld_ind = SQL_NOTNULL ;
/* Text to SQL type conversion */
switch( field->fld_type ) {
case SQL_TYP_VARCHAR:
strcpy( field->fld_field,
( char * ) ( field_buf + buf_pos ) ) ;
break ;
case SQL_TYP_INTEGER:
int_ptr = ( int * ) field->fld_field ;
/* (conversion of the text value to INTEGER, and the remaining
 * numeric cases of this switch, are elided here) */
}
field_pos++ ;
}
return( 0 ) ;
}
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN weather( /* Return row fields */
SQLUDF_VARCHAR * city,
SQLUDF_INTEGER * temp_in_f,
SQLUDF_INTEGER * humidity,
SQLUDF_VARCHAR * wind,
SQLUDF_INTEGER * wind_velocity,
SQLUDF_DOUBLE * barometer,
SQLUDF_VARCHAR * forecast,
/* You may want to add more fields here */
/* Return row field null indicators */
SQLUDF_NULLIND * city_ind,
SQLUDF_NULLIND * temp_in_f_ind,
SQLUDF_NULLIND * humidity_ind,
SQLUDF_NULLIND * wind_ind,
SQLUDF_NULLIND * wind_velocity_ind,
SQLUDF_NULLIND * barometer_ind,
SQLUDF_NULLIND * forecast_ind,
/* You may want to add more field indicators here */
/* UDF always-present (trailing) input arguments */
SQLUDF_TRAIL_ARGS_ALL
) {
scratch_area * save_area ;
char line_buf[81] ;
int line_buf_pos ;
/* SQLUDF_SCRAT is part of SQLUDF_TRAIL_ARGS_ALL */
/* Preserve information from one function call to the next call */
save_area = ( scratch_area * ) ( SQLUDF_SCRAT->data ) ;
/* SQLUDF_CALLT is part of SQLUDF_TRAIL_ARGS_ALL */
switch( SQLUDF_CALLT ) {
/* First call UDF: Open table and fetch first row */
case SQL_TF_OPEN:
/* If you use a weather data text file specify full path */
/* save_area->file_ptr = fopen("/sqllib/samples/c/tblsrv.dat",
"r"); */
save_area->file_pos = 0 ;
break ;
/* Normal call UDF: Fetch next row */
case SQL_TF_FETCH:
/* If you use a weather data text file */
/* memset(line_buf, '\0', 81); */
/* if (fgets(line_buf, 80, save_area->file_ptr) == NULL) { */
if ( weather_data[save_area->file_pos] == ( char * ) 0 ) {
/* SQLUDF_STATE is part of SQLUDF_TRAIL_ARGS_ALL */
strcpy( SQLUDF_STATE, "02000" ) ;
break ;
}
memset( line_buf, '\0', 81 ) ;
strcpy( line_buf, weather_data[save_area->file_pos] ) ;
line_buf[3] = '\0' ;
/* Clean all field data and field null indicator data */
clean_fields( 0 ) ;
/* Fills city field null indicator data */
fields[0].fld_ind = SQL_NOTNULL ;
/* Find a full city name using a short name */
/* Fills city field data */
if ( get_name( line_buf, fields[0].fld_field ) == 0 ) {
save_area->file_pos++ ;
/* If you use a weather data text file */
/* memset(line_buf, '\0', 81); */
/* if (fgets(line_buf, 80, save_area->file_ptr) == NULL) { */
if ( weather_data[save_area->file_pos] == ( char * ) 0 ) {
/* SQLUDF_STATE is part of SQLUDF_TRAIL_ARGS_ALL */
strcpy( SQLUDF_STATE, "02000" ) ;
break ;
}
memset( line_buf, '\0', 81 ) ;
strcpy( line_buf, weather_data[save_area->file_pos] ) ;
line_buf_pos = strlen( line_buf ) ;
while ( line_buf_pos > 0 ) {
if ( line_buf[line_buf_pos] >= ' ' )
line_buf_pos = 0 ;
else {
line_buf[line_buf_pos] = '\0' ;
line_buf_pos-- ;
}
}
}
/* Fills field data and field null indicator data ... */
}
}
The above CREATE FUNCTION statement is for a UNIX version of this UDF.
For other platforms, you may need to modify the value specified in the
EXTERNAL NAME clause.
Referring to this statement, observe that:
v It does not take any input, and returns 7 output columns.
v SCRATCHPAD is specified, so DB2 allocates, properly initializes and passes
the scratchpad argument.
v NO FINAL CALL is specified.
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sql.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqludf.h>
#include "util.h"

/* (The function header comment and the declarations at the start of
 * the carve UDF are elided here.) */
rc = sqludf_create_locator(SQL_TYP_CLOB, &lob_output);
/* Error and exit if unable to create locator */
if (rc) {
memcpy (sqlstate, "38901", 5);
/* special sqlstate for this condition */
goto exit;
}
/* Find out the size of the input LOB value */
rc = sqludf_length(lob_input, &input_len) ;
/* Error and exit if unable to find out length */
if (rc) {
memcpy (sqlstate, "38902", 5);
/* special sqlstate for this condition */
goto exit;
}
/* Loop to read next 100 bytes, and append to result if it meets
* the criteria.
*/
for (input_pos = 0; (input_pos < input_len); input_pos += 100) {
/* Read the next 100 (or less) bytes of the input LOB value */
rc = sqludf_substr(lob_input, input_pos, 100,
(unsigned char *) lob_buf, &input_rec) ;
/* Error and exit if unable to read the segment */
if (rc) {
memcpy (sqlstate, "38903", 5);
/* special sqlstate for this condition */
goto exit;
}
/* apply the criteria for appending this segment to result
* if (...predicate involving buffer and criteria...) {
* The condition for retaining the segment is TRUE...
* Write that buffer segment which was last read in
*/
rc = sqludf_append(lob_output,
(unsigned char *) lob_buf, input_rec, &output_rec) ;
/* Error and exit if unable to read the 100 byte segment */
if (rc) {
memcpy (sqlstate, "38904", 5);
/* special sqlstate for this condition */
goto exit;
}
/* } end if criteria for inclusion met */
  } /* end of for loop, processing 100-byte chunks of input LOB.
     * If we fall out of the for loop, we are successful, and done. */
  *out_nul = 0;
exit: /* used for errors, which will override null-ness of output. */
  return;
}
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
Referring to this statement, observe that:
v NOT NULL CALL is specified, so the UDF will not be called if any of its
input SQL arguments are NULL, and does not have to check for this
condition.
v The function is defined to be NOT FENCED; recall that the APIs only work
in NOT FENCED. NOT FENCED means that the definer will have to have
the CREATE_NOT_FENCED authority on the database (which is also
implied by DBADM authority).
v The function is specified as DETERMINISTIC, meaning that with a given
input CLOB value and a given set of criteria, the result will be the same
every time.
Now you can successfully run the following statement:
UPDATE tablex
SET col_a = 99,
col_b = carve (:hv_clob, '...criteria...')
WHERE tablex_key = :hv_key;
The UDF is used to subset the CLOB value represented by the host variable
:hv_clob and update the row represented by key value in host variable
:hv_key.
In this update example, :hv_clob may well be defined in the
application as a CLOB_LOCATOR. It is not this same locator that is
passed to the carve UDF! When :hv_clob is bound in to the DB2 engine
agent running the statement, it is known only as a CLOB. When it is then
passed to the UDF, DB2 generates a new locator for the value. This conversion
back and forth between CLOB and locator is not expensive; it
does not involve any extra memory copies or I/O.
The COM CCounter class definition in C++ includes the declaration of the
increment method as well as nbrOfInvoke:
class FAR CCounter : public ICounter
{
...
STDMETHODIMP CCounter::increment(long       *out,
                                 short      *outnull,
                                 BSTR       *sqlstate,
                                 BSTR       *fname,
                                 BSTR       *fspecname,
                                 BSTR       *msgtext,
                                 SAFEARRAY **spad,
                                 long       *calltype );
long nbrOfInvoke;
...
};
nbrOfInvoke = nbrOfInvoke + 1;
*out = nbrOfInvoke;
return NOERROR;
};
In the above example, sqlstate and msgtext are [out] parameters of type
BSTR*, that is, DB2 passes a pointer to NULL to the UDF. To return values for
these parameters, the UDF allocates a string and returns it to DB2 (for
example, *sqlstate = SysAllocString (L"01H00")), and DB2 frees the
memory. The parameters fname and fspecname are [in] parameters. DB2
allocates the memory and passes in values which are read by the UDF, and
then DB2 frees the memory.
The class factory of the CCounter class creates counter objects. You can
register the class factory as a single-use or multi-use object (not shown in this
example).
STDMETHODIMP CCounterCF::CreateInstance(IUnknown FAR* punkOuter,
REFIID riid,
void FAR* FAR* ppv)
{
CCounter *pObj;
...
// create a new counter object
pObj = new CCounter;
...
};
While processing the following query, DB2 creates two different instances of
class CCounter. An instance is created for each UDF reference in the query.
The two instances are reused for the entire query as the scratchpad option is
specified in the ccounter UDF registration.
SELECT INT1, CCOUNTER() AS COUNT, INT1/CCOUNTER() AS DIV FROM TEST
On the table function OPEN call, the CreateObject statement creates a mail
session, and the logon method logs on to the mail system (user name and
password issues are neglected). The message collection of the mail inbox is
used to retrieve the first message. On the FETCH calls, the message header
information and the first 30 characters of the current message are assigned to
the table function output parameters. If no messages are left, SQLSTATE 02000
is returned. On the CLOSE call, the example logs off and sets the session
object to nothing, which releases all the system and memory resources
associated with the previously referenced object when no other variable refers
to it.
Following is the CREATE FUNCTION statement for this UDF:
CREATE FUNCTION MAIL()
RETURNS TABLE (TIMERECEIVED DATE,
SUBJECT VARCHAR(15),
SIZE INTEGER,
TEXT VARCHAR(30))
Chapter 15. Writing User-Defined Functions (UDFs) and Methods
SUBJECT         SIZE        TEXT
--------------- ----------- ------------------------------
Welcome!               3277 Welcome to Windows Messaging!
Invoice                1382 Please process this invoice. T
Congratulations        1394 Congratulations to the purchas

  3 record(s) selected.
For security and database integrity reasons, it is important to protect the body
of your UDF, once it is debugged and defined to DB2. This is particularly true
if your UDF is defined as NOT FENCED. If anyone (including yourself), either
by accident or with malicious intent, overwrites an operational UDF with
code that is not debugged, the UDF could conceivably destroy the database if
it is NOT FENCED, or compromise security.
Unfortunately, there is no easy way to run a source-level debugger on a UDF.
There are several reasons for this:
v The timing makes it difficult to start the debugger at a time when the UDF
is in storage and available.
v The UDF runs in a database process with a special user ID, and the user is
not permitted to attach to this process.
Note that valuable debugging tools such as printf() do not normally work as
debugging aids for your UDF, because the UDF normally runs in a
background process where stdout has no meaning. As an alternative to using
printf(), it may be possible for you to instrument your UDF with file output
logic, and for debugging purposes write indicative data and control
information to a file.
Another technique to debug your UDF is to write a driver program for
invoking the UDF outside the database environment. With this technique, you
can invoke the UDF with all kinds of marginal or erroneous input arguments
to attempt to provoke it into misbehaving. In this environment, it is not a
problem to use printf() or a source level debugger.
Benefits of Triggers
Using triggers in your database manager can result in:
v Faster application development.
Because triggers are stored in the relational database, the actions performed
by triggers do not have to be coded in each application.
v Global enforcement of business rules.
A trigger only has to be defined once, and then it can be used for any
application that changes the table.
v Easier maintenance.
If a business policy changes, only the corresponding trigger needs to
change instead of each application program.
Overview of a Trigger
When you create a trigger, you associate it with a table. This table is called the
subject table of the trigger. The term update operation refers to any change in the
state of the subject table. An update operation is initiated by:
v an INSERT statement
v an UPDATE statement, or a referential constraint which performs an
UPDATE
v a DELETE statement, or a referential constraint which performs a DELETE
You must associate each trigger with one of these three types of update
operations. The association is called the trigger event for that particular trigger.
You must also define the action, called the triggered action, that the trigger
performs when its trigger event occurs. The triggered action consists of one or
more SQL statements which can execute either before or after the database
manager performs the trigger event. Once a trigger event occurs, the database
manager determines the set of rows in the subject table that the update
operation affects and executes the trigger.
When you create a trigger, you declare the following attributes and behavior:
v The name of the trigger.
v The name of the subject table.
v The trigger activation time (BEFORE or AFTER the update operation
executes).
v The trigger event (INSERT, DELETE, or UPDATE).
v The old values transition variable, if any.
v The new values transition variable, if any.
v The old values transition table, if any.
v The new values transition table, if any.
v The granularity (FOR EACH STATEMENT or FOR EACH ROW).
v The triggered action of the trigger (including a triggered action condition
and triggered SQL statement(s)).
v If the trigger event is UPDATE, then the trigger column list for the trigger
event of the trigger, as well as an indication of whether the trigger column
list was explicit or implicit.
v The trigger creation timestamp.
v The current function path.
For more information on the CREATE TRIGGER statement, refer to the SQL
Reference.
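The attributes above map directly onto clauses of the CREATE TRIGGER statement. The following sketch labels each clause; the trigger, table, and column names are invented for illustration:

```sql
CREATE TRIGGER BIG_RAISE                 -- trigger name
  AFTER UPDATE OF SALARY ON EMPLOYEE     -- activation time, trigger event and
                                         --   its column list, subject table
  REFERENCING OLD AS O NEW AS N          -- transition variables
  FOR EACH ROW MODE DB2SQL               -- granularity
  WHEN (N.SALARY > 1.5 * O.SALARY)       -- triggered action condition
  BEGIN ATOMIC                           -- triggered SQL statement(s)
    INSERT INTO RAISE_LOG VALUES (N.EMPNO, O.SALARY, N.SALARY);
  END
```

The creation timestamp and the function path are recorded by the database manager at CREATE time rather than written in the statement itself.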
Trigger Event
Every trigger is associated with an event. Triggers are activated when their
corresponding event occurs in the database. This trigger event occurs when
the specified action, either an UPDATE, INSERT, or DELETE (including those
caused by actions of referential constraints), is performed on the subject table.
For example:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMPLOYEE
FOR EACH ROW MODE DB2SQL
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1
The above statement defines the trigger new_hire, which activates when you
perform an insert operation on table employee.
You associate every trigger event, and consequently every trigger, with exactly
one subject table and exactly one update operation. The update operations
are:
Insert operation
An insert operation can only be caused by an INSERT statement.
Therefore, triggers are not activated when data is loaded using
utilities that do not use INSERT, such as the LOAD command.
Update operation
An update operation can be caused by an UPDATE statement or as a
result of a referential constraint rule of ON DELETE SET NULL.
Delete operation
A delete operation can be caused by a DELETE statement or as a
result of a referential constraint rule of ON DELETE CASCADE.
If the trigger event is an update operation, the event can be associated with
specific columns of the subject table. In this case, the trigger is only activated
if the update operation attempts to update any of the specified columns. This
provides a further refinement of the event that activates the trigger. For
example, the following trigger, REORDER, activates only if you perform an
update operation on the columns ON_HAND or MAX_STOCKED of the table PARTS.
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
BEGIN ATOMIC
VALUES(ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND,
                          N_ROW.PARTNO));
END
The set of affected rows for the associated trigger contains all the rows in the
parts table whose part_no is greater than 15 000.
Trigger Granularity
When a trigger is activated, it runs according to its granularity as follows:
FOR EACH ROW
It runs as many times as the number of rows in the set of affected
rows.
FOR EACH STATEMENT
It runs once for the entire trigger event.
If the set of affected rows is empty (that is, in the case of a searched UPDATE
or DELETE in which the WHERE clause did not qualify any rows), a FOR
EACH ROW trigger does not run. But a FOR EACH STATEMENT trigger still
runs once.
For example, you can keep a count of the number of employees by using FOR
EACH ROW.
CREATE TRIGGER NEW_HIRED
AFTER INSERT ON EMPLOYEE
FOR EACH ROW MODE DB2SQL
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1
You can achieve the same effect with one update by using a granularity of
FOR EACH STATEMENT.
CREATE TRIGGER NEW_HIRED
AFTER INSERT ON EMPLOYEE
REFERENCING NEW_TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
UPDATE COMPANY_STATS
SET NBEMP = NBEMP + (SELECT COUNT(*) FROM NEWEMPS)
If the activation time is BEFORE, the triggered actions are activated for each
row in the set of affected rows before the trigger event executes. Note that
BEFORE triggers must have a granularity of FOR EACH ROW.
If the activation time is AFTER, the triggered actions are activated for each
row in the set of affected rows or for the statement, depending on the trigger
granularity. This occurs after the trigger event executes, and after the database
manager checks all constraints that the trigger event may affect, including
actions of referential constraints. Note that AFTER triggers can have a
granularity of either FOR EACH ROW or FOR EACH STATEMENT.
The different activation times of triggers reflect different purposes of triggers.
Basically, BEFORE triggers are an extension to the constraint subsystem of the
database management system. Therefore, you generally use them to:
v Perform validation of input data
v Automatically generate values for newly inserted rows
v Read from other tables for cross-referencing purposes.
BEFORE triggers are not used for further modifying the database, because they
are activated before the trigger event is applied to the database.
Consequently, they are activated before the checking of the integrity
constraints that the trigger event may violate.
Conversely, you can view AFTER triggers as a module of application logic
that runs in the database every time a specific event occurs. As a part of an
application, AFTER triggers always see the database in a consistent state. Note
that they are run after the integrity constraints that may be violated by the
triggering SQL operation have been checked. Consequently, you can use them
mostly to perform operations that an application can also perform. For
example:
v Perform follow-on update operations in the database
v Perform actions outside the database, for example, to support alerts. Note
that actions performed outside the database are not rolled back if the
trigger is rolled back.
Because of the different nature of BEFORE and AFTER triggers, a different set
of SQL operations can be used to define the triggered actions of BEFORE and
AFTER triggers. For example, update operations are not allowed in BEFORE
triggers because there is no guarantee that integrity constraints will not be
violated by the triggered action. The set of SQL operations you can specify in
BEFORE and AFTER triggers are described in Triggered Action on page 492.
Similarly, different trigger granularities are supported in BEFORE and AFTER
triggers. For example, FOR EACH STATEMENT is not allowed in BEFORE
triggers because there is no guarantee that constraints will not be violated by
the triggered action, which would, in turn, result in failure of the operation.
Transition Variables
When you carry out a FOR EACH ROW trigger, it may be necessary to refer
to the value of columns of the row in the set of affected rows, for which the
trigger is currently executing. Note that to refer to columns in tables in the
database (including the subject table), you can use regular SELECT statements.
A FOR EACH ROW trigger may refer to the columns of the row for which it
is currently executing by using two transition variables that you can specify in
the REFERENCING clause of a CREATE TRIGGER statement. There are two
kinds of transition variables, which are specified as OLD and NEW, together
with a correlation-name. They have the following semantics:
OLD correlation-name
Specifies a correlation name which captures the original state of the
row, that is, before the triggered action is applied to the database.
NEW correlation-name
Specifies a correlation name which captures the value that is, or was,
used to update the row in the database when the triggered action is
applied to the database.
Consider the following example:
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED
AND N_ROW.ORDER_PENDING = 'N')
BEGIN ATOMIC
VALUES(ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND,
                          N_ROW.PARTNO));
UPDATE PARTS SET PARTS.ORDER_PENDING = 'Y'
WHERE PARTS.PARTNO = N_ROW.PARTNO;
END
Based on the definition of the OLD and NEW transition variables given
above, it is clear that not every transition variable can be defined for every
trigger. Transition variables can be defined depending on the kind of trigger
event:
UPDATE
An UPDATE trigger can refer to both OLD and NEW transition
variables.
INSERT
An INSERT trigger can only refer to a NEW transition variable
because before the activation of the INSERT operation, the affected
row does not exist in the database. That is, there is no original state of
the row that would define old values before the triggered action is
applied to the database.
DELETE
A DELETE trigger can only refer to an OLD transition variable
because there are no new values specified in the delete operation.
Note: Transition variables can only be specified for FOR EACH ROW triggers.
In a FOR EACH STATEMENT trigger, a reference to a transition
variable is not sufficient to specify to which of the several rows in the
set of affected rows the transition variable is referring.
Transition Tables
In both FOR EACH ROW and FOR EACH STATEMENT triggers, it may be
necessary to refer to the whole set of affected rows. This is necessary, for
example, if the trigger body needs to apply aggregations over the set of
affected rows (for example, MAX, MIN, or AVG of some column values). A
trigger may refer to the set of affected rows by using two transition tables that
can be specified in the REFERENCING clause of a CREATE TRIGGER
statement. Just like the transition variables, there are two kinds of transition
tables, which are specified as OLD_TABLE and NEW_TABLE together with a
table-name, with the following semantics:
OLD_TABLE table-name
Specifies the name of the table which captures the original state of the
set of affected rows (that is, before the triggering SQL operation is
applied to the database).
NEW_TABLE table-name
Specifies the name of the table which captures the value that is used
to update the rows in the database when the triggered action is
applied to the database.
For example:
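For instance, an AFTER trigger with statement granularity can aggregate over the whole set of affected rows through NEW_TABLE. The following is a sketch; the STOCK_SUMMARY table and its column are invented for illustration:

```sql
CREATE TRIGGER STOCK_STATUS
  AFTER UPDATE OF ON_HAND ON PARTS
  REFERENCING NEW_TABLE AS N_TABLE
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    UPDATE STOCK_SUMMARY
      SET MIN_ON_HAND = (SELECT MIN(ON_HAND) FROM N_TABLE);
  END
```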
Note that NEW_TABLE always has the full set of updated rows, even on a
FOR EACH ROW trigger. When a trigger acts on the table on which the
trigger is defined, NEW_TABLE contains the changed rows from the
statement that activated the trigger. However, NEW_TABLE does not contain
the changed rows that were caused by statements within the trigger, as that
would cause a separate activation of the trigger.
The transition tables are read-only. The same rules that define the kinds of
transition variables that can be defined for which trigger event, apply for
transition tables:
UPDATE
An UPDATE trigger can refer to both OLD_TABLE and NEW_TABLE
transition tables.
INSERT
An INSERT trigger can only refer to a NEW_TABLE transition table
because before the activation of the INSERT operation the affected
rows do not exist in the database. That is, there is no original state of
the rows that defines old values before the triggered action is applied
to the database.
DELETE
A DELETE trigger can only refer to an OLD_TABLE transition table because
there are no new values specified in the delete operation.
Note: It is important to observe that transition tables can be specified for both
granularities of AFTER triggers: FOR EACH ROW and FOR EACH
STATEMENT.
The scope of the OLD_TABLE and NEW_TABLE table-name is the trigger body. In
this scope, this name takes precedence over the name of any other table with
the same unqualified table-name that may exist in the schema. Therefore, if the
OLD_TABLE or NEW_TABLE table-name is, for example, X, a reference to X (that is,
an unqualified X) in the FROM clause of a SELECT statement will always
refer to the transition table, even if there is a table named X in the
schema of the trigger creator. In this case, the user has to use the
fully qualified name in order to refer to the table X in the schema.
Triggered Action
The activation of a trigger results in the running of its associated triggered
action. Every trigger has exactly one triggered action which, in turn, has two
components:
v An optional triggered action condition or WHEN clause
v A set of triggered SQL statement(s).
The triggered action condition defines whether or not the set of triggered
statements is performed for the row or for the statement for which the
triggered action is executing. The set of triggered statements defines the
actions performed by the trigger in the database as a consequence of its event
having occurred.
For example, the following trigger action specifies that the set of triggered
SQL statements should only be activated for rows in which the value of the
on_hand column is less than ten per cent of the value of the max_stocked
column. In this case, the set of triggered SQL statements is the invocation of
the issue_ship_request function.
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
BEGIN ATOMIC
VALUES(ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND,
                          N_ROW.PARTNO));
END
UDFs can be written in the SQL, Java, C, or C++ programming languages.
This enables complex control of logic flows, error handling and recovery, and
access to system and library functions. (See Chapter 15. Writing User-Defined
Functions (UDFs) and Methods on page 393 for a description of UDFs.) This
capability allows a triggered action to perform non-SQL types of operations
when a trigger is activated. For example, such a UDF could send an electronic
mail message and thereby act as an alert mechanism. External actions, such as
messages, are not under commit control and will be run regardless of success
or failure of the rest of the triggered actions.
Chapter 16. Using Triggers in an Active DBMS
Also, the function can return an SQLSTATE that indicates an error has
occurred which results in the failure of the triggering SQL statement. This is
one method of implementing user-defined constraints. (Using a SIGNAL
SQLSTATE statement is the other.) In order to use a trigger as a means to
check complex user-defined constraints, you can use the RAISE_ERROR built-in
function in a triggered SQL statement. This function can be used to return a
user-defined SQLSTATE (SQLCODE -438) to applications. For details on
invocation and use of this function, refer to the SQL Reference.
For example, consider some rules related to the HIREDATE column of the
EMPLOYEE table, where HIREDATE is the date that the employee starts
working.
v HIREDATE must be the date of insert or a future date.
v HIREDATE cannot be more than 1 year from date of insert.
v If HIREDATE is between 6 and 12 months from date of insert, notify
personnel manager using a UDF called send_note.
The following trigger handles all of these rules on INSERT:
CREATE TRIGGER CHECK_HIREDATE
NO CASCADE BEFORE INSERT ON EMPLOYEE
REFERENCING NEW AS NEW_EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
VALUES CASE
WHEN NEW_EMP.HIREDATE < CURRENT DATE
THEN RAISE_ERROR('85001', 'HIREDATE has passed')
WHEN NEW_EMP.HIREDATE - CURRENT DATE > 10000.
THEN RAISE_ERROR('85002', 'HIREDATE too far out')
WHEN NEW_EMP.HIREDATE - CURRENT DATE > 600.
THEN SEND_NOTE('persmgr',NEW_EMP.EMPNO,'late.txt')
END;
END
Trigger Cascading
When you run a triggered SQL statement, it may cause the event of another,
or even the same, trigger to occur, which in turn causes the other (or a
second instance of the same) trigger to be activated. Therefore, activating a
trigger can cascade the activation of one or more other triggers.
The run-time depth level of trigger cascading supported is 16. If a trigger at
level 17 is activated, SQLCODE -724 (SQLSTATE 54038) will be returned and
the triggering statement will be rolled back.
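As a sketch of cascading (the triggers, tables, and columns here are invented for illustration), an AFTER trigger on one table can perform an INSERT that activates a trigger on a second table, giving a cascading depth of two:

```sql
CREATE TRIGGER AUDIT_ORDERS
  AFTER INSERT ON ORDERS
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  INSERT INTO AUDIT_LOG VALUES (N.ORDERNO, CURRENT TIMESTAMP);

CREATE TRIGGER COUNT_AUDITS
  AFTER INSERT ON AUDIT_LOG
  FOR EACH ROW MODE DB2SQL
  UPDATE AUDIT_STATS SET NAUDITS = NAUDITS + 1;
```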
The above triggers are activated when you run an INSERT operation on the
employee table. In this case, the timestamp of their creation defines which of
the above two triggers is activated first.
The activation of the triggers is conducted in ascending order of the
timestamp value. Thus, a trigger that is newly added to a database runs after
all the other triggers that were previously defined.
Old triggers are activated before new triggers to ensure that new triggers can
be used as incremental additions to the changes that affect the database. For
example, if a triggered SQL statement of trigger T1 inserts a new row into a
table T, a triggered SQL statement of trigger T2 that is run after T1 can be
used to update the same row in T with specific values. By activating triggers
in ascending order of creation, you can ensure that the actions of new triggers
run on a database that reflects the result of the activation of all old triggers.
Extracting Information
You could write an application that stores complete electronic mail messages
as a LOB value within the column MESSAGE of the ELECTRONIC_MAIL
table. To manipulate the electronic mail, you could use UDFs to extract
information from the message column every time such information was
required within an SQL statement.
Notice that the queries do not extract information once and store it explicitly
as columns of tables. If this were done, it would increase the performance of
the queries, not only because the UDFs would not be invoked repeatedly, but
also because you could then define indexes on the extracted information.
Using triggers, you can extract this information whenever new electronic mail
is stored in the database. To achieve this, add new columns to the
ELECTRONIC_MAIL table and define a BEFORE trigger to extract the
corresponding information as follows:
ALTER TABLE ELECTRONIC_MAIL
  ADD COLUMN SENDER   VARCHAR (200)
  ADD COLUMN RECEIVER VARCHAR (200)
  ADD COLUMN SENT_ON  DATE
  ADD COLUMN SUBJECT  VARCHAR (200)
   SET N.SENDER   = SENDER(N.MESSAGE);
   SET N.RECEIVER = RECEIVER(N.MESSAGE);
   SET N.SENT_ON  = SENDING_DATE(N.MESSAGE);
   SET N.SUBJECT  = SUBJECT(N.MESSAGE);
END
Now, whenever new electronic mail is inserted into the message column, its
sender, its receiver, the date on which it was sent, and its subject are extracted
from the message and stored in separate columns.
This is certainly not the intent of your company's business rule. The intent is
to forward to the marketing manager any e-mail dealing with customer
complaints that were not copied to the marketing manager. Such a business
rule can only be expressed with a trigger because it requires taking actions
that cannot be expressed with declarative constraints. The trigger assumes the
existence of a SEND_NOTE function with parameters of type E_MAIL and
character string.
CREATE TRIGGER INFORM_MANAGER
AFTER INSERT ON ELECTRONIC_MAIL
REFERENCING NEW AS N
FOR EACH ROW MODE DB2SQL
WHEN (N.SUBJECT = 'Customer complaint' AND
CONTAINS (CC_LIST(MESSAGE), 'nelson@vnet.ibm.com') = 0)
BEGIN ATOMIC
VALUES(SEND_NOTE(N.MESSAGE, 'nelson@vnet.ibm.com'));
END
Defining Actions
Now assume that your general manager wants to keep the names of
customers who have sent three or more complaints in the last 72 hours in a
separate table. The general manager also wants to be informed whenever a
customer name is inserted in this table more than once.
To define such actions, you define:
v An UNHAPPY_CUSTOMERS table:
CREATE TABLE UNHAPPY_CUSTOMERS (
  NAME           VARCHAR (30),
  EMAIL_ADDRESS  VARCHAR (200),
  INSERTION_DATE DATE)
The weights in a collating sequence need not be unique. For example, you
could give uppercase letters and their lowercase equivalents the same weight.
Note: For double-byte and Unicode characters in GRAPHIC fields, the sort
sequence is always IDENTITY.
Character Comparisons: Once a collating sequence is established, character
comparison is performed by comparing the weights of two characters, instead
of directly comparing their code point values.
If weights that are not unique are used, characters that are not identical
may compare equally. Because of this, string comparison can become a
two-phase process:
1. Compare the characters in each string based on their weights.
2. If the weights are equal, compare the characters in each string based on
their code point values.
You could also specify the following SELECT statement when creating view
v1, make all comparisons against the view in uppercase, and request table
INSERTs in mixed case:
CREATE VIEW v1 AS SELECT TRANSLATE(c1) FROM T1
At the database level, you can set the collating sequence as part of the
sqlecrea - Create Database API. This allows you to decide whether 'a' is
processed before 'A', whether 'A' is processed after 'a', or whether they are
processed with equal weighting. If they are given equal weighting, they
compare as equal when collating or sorting using the ORDER BY clause. In that
case, 'A' will always come before 'a', because the only remaining basis upon
which to sort is the hexadecimal value.
Thus
SELECT c1 FROM T1 WHERE c1 LIKE 'ab%'
returns
ab
abel
abels
and
SELECT c1 FROM T1 WHERE c1 LIKE 'A%'
returns
Abel
Ab
ABEL
506
Thus, you may want to consider using the scalar function TRANSLATE(), as
well as sqlecrea. Note that you can only specify a collating sequence using
sqlecrea. You cannot specify a collating sequence from the command line
processor (CLP). For information about the TRANSLATE() function, refer to
the SQL Reference. For information about sqlecrea, refer to the Administrative
API Reference.
You can also use the UCASE function as follows, but note that DB2 performs
a table scan instead of using an index for the select:
SELECT * FROM EMP WHERE UCASE(JOB) = 'NURSE'
EBCDIC-Based Sort        ASCII-Based Sort

COL2                     COL2
----                     ----
V1G                      7AB
Y2W                      V1G
7AB                      Y2W

Figure 19. Example of How a Sort Order in an EBCDIC-Based Sequence Differs from a Sort Order
in an ASCII-Based Sequence
SELECT.....
WHERE COL2 > 'TT3'

EBCDIC-Based Results     ASCII-Based Results

COL2                     COL2
----                     ----
TW4                      TW4
X72                      X72
39G
SQL_CS_USER
The collating sequence is specified by the value in the
SQLDBUDC field.
SQL_CS_NONE
The collating sequence is the identity sequence. Strings are
compared byte for byte, starting with the first byte, using a
simple code point comparison.
Note: These constants are defined in the SQLENV include file.
SQLDBUDC
A 256-byte field. The nth byte contains the sort weight of the nth
character in the code page of the database. If SQLDBCSS is not equal
to SQL_CS_USER, this field is ignored.
Sample Collating Sequences: Several sample collating sequences are
provided (as include files) to facilitate database creation using the EBCDIC
collating sequences instead of the default workstation collating sequence.
The collating sequences in these include files can be specified in the
SQLDBUDC field of the SQLEDBDESC structure. They can also be used as
models for the construction of other collating sequences.
For information on the include files that contain collating sequences, see the
following sections:
v For C/C++, Include Files for C and C++ on page 595
v For COBOL, Include Files for COBOL on page 680
v For FORTRAN, Include Files for FORTRAN on page 702.
The server does not convert file names. To code a file name, either use the
ASCII invariant set, or provide the path in the hexadecimal values that are
physically stored in the file system.
In a multi-byte environment, there are four characters which are considered
special that do not belong to the invariant character set. These characters are:
v The double-byte percentage and double-byte underscore characters used in
LIKE processing. For further details concerning LIKE, refer to the SQL
Reference.
v The double-byte space character, used for, among other things, blank
padding in graphic strings.
v The double-byte substitution character, used as a replacement during code
page conversion when no mapping exists between a source code page and
a target code page.
The code points for each of these characters, by code page, are as follows:
Table 19. Code Points for Special Double-byte Characters

Code Page   Double-Byte   Double-Byte   Double-Byte   Double-Byte
            Percentage    Underscore    Space         Substitution Character
932         X'8193'       X'8151'       X'8140'       X'FCFC'
938         X'8193'       X'8151'       X'8140'       X'FCFC'
942         X'8193'       X'8151'       X'8140'       X'FCFC'
943         X'8193'       X'8151'       X'8140'       X'FCFC'
948         X'8193'       X'8151'       X'8140'       X'FCFC'
949         X'A3A5'       X'A3DF'       X'A1A1'       X'AFFE'
950         X'A248'       X'A1C4'       X'A140'       X'C8FE'
954         X'A1F3'       X'A1B2'       X'A1A1'       X'F4FE'
964         X'A2E8'       X'A2A5'       X'A1A1'       X'FDFE'
970         X'A3A5'       X'A3DF'       X'A1A1'       X'AFFE'
1381        X'A3A5'       X'A3DF'       X'A1A1'       X'FEFE'
1383        X'A3A5'       X'A3DF'       X'A1A1'       X'A1A1'
13488       X'FF05'       X'FF3F'       X'3000'       X'FFFD'
1363        X'A3A5'       X'A3DF'       X'A1A1'       X'A1E0'
1386        X'A3A5'       X'A3DF'       X'A1A1'       X'FEFE'
5039        X'8193'       X'8151'       X'8140'       X'FCFC'
For more information about Unicode databases, see the Administration Guide:
Planning.
Coding Remote Stored Procedures and UDFs
When coding stored procedures that will be running remotely, the following
considerations apply:
v Data in a stored procedure must be in the database code page.
v Data passed to or from a stored procedure using an SQLDA with a
character data type must really contain character data. Numeric data and
data structures must never be passed with a character type if the client
application code page is different from the database code page. This is
because the server will convert all character data in an SQLDA. To avoid
character conversion, you can pass data by defining it in binary string
format by using a data type of BLOB or by defining the character data as
FOR BIT DATA.
By default, when you invoke DB2 DARI stored procedures and UDFs, they
run under a default national language environment which may not match the
database's national language environment. Consequently, using
country/region- or code-page-specific operations, such as the C wchar_t
graphic host variables and functions, may not work as you expect. You need
to ensure that, if applicable, the correct environment is initialized when you
invoke the stored procedure or UDF.
conversions (xz and yz) yield different results, the SELECT in statement (2)
will fail to retrieve the data inserted by statement (1).
1. However, a literal inserted into a column defined as FOR BIT DATA could be converted if that literal was part of
an SQL statement which was converted.
The determination of target code page is more involved; where the data is to
be placed, including rules for intermediate operations, is considered:
v If the data is moved directly from an application into a database, with no
intervening operations, the target code page is the database code page.
v If the data is being imported into a database from a PC/IXF file, there are
two character conversion steps:
1. From the PC/IXF file code page (source code page) to the application
code page (target code page)
2. From the application code page (source code page) to the database code
page (target code page).
Exercise caution in situations where two conversion steps might occur. To
avoid a possible loss of character data, ensure you follow the supported
character conversions listed in the Administration Guide. Additionally, within
each group, only characters which exist in both the source and target code
page have meaningful conversions. Other characters are used as
substitutions and are only useful for converting from the target code page
back to the source code page (and may not necessarily provide meaningful
conversions in the two-step conversion process mentioned above). Such
problems are avoided if the application code page is the same as the
database code page.
v If the data is derived from operations performed on character data, where
the source may be any of the application code page, the database code
page, FOR BIT DATA, or for BLOB data, data conversion is based on a set
of rules. Some or all of the data items may have to be converted to an
intermediate result, before the final target code page can be determined. For
a summary of these rules, and for their specific application with individual
operators and predicates, refer to the SQL Reference.
For a list of the code pages supported by DB2 Universal Database, refer to the
Administration Guide. The values under the heading Group can be used to
determine where conversions are supported. Any code page can be converted
to any other code page that is listed in the same IBM-defined language group.
For example, code page 437 can be converted to 37, 819, 850, 1051, 1252, or
1275.
Note: Code page conversions between multi-byte code pages, for example
DBCS and EUC, may result in either an increase or a decrease in the
length of the string.
Code Page Conversion Expansion Factor: When your application
successfully completes an attempt to connect to a DB2 database server, you
should consider the following fields in the returned SQLCA:
v The second token in the SQLERRMC field (tokens are separated by X'FF')
indicates the code page of the database. The ninth token in the SQLERRMC
Chapter 17. Programming in Complex Environments
field indicates the code page of the application. Querying the application's
code page and comparing it to the database's code page informs the
application whether it has established a connection which will undergo
character conversions.
v The first and second entries in the SQLERRD array. SQLERRD(1) contains
an integer value equal to the maximum expected expansion or contraction
factor for the length of mixed character data (CHAR data types) when
converted to the database code page from the application code page.
SQLERRD(2) contains an integer value equal to the maximum expected
expansion or contraction factor for the length of mixed character data
(CHAR data types) when converted to the application code page from the
database code page. A value of 0 or 1 indicates no expansion; a value
greater than 1 indicates a possible expansion in length; a negative value
indicates a possible contraction. Refer to the SQL Reference for details on
using the CONNECT statement.
The considerations for graphic string data should not be a factor in unequal
code page situations. Each string always has the same number of characters,
regardless of whether the data is in the application or the database code page.
See Unequal Code Page Situations on page 526 for information on dealing
with unequal code page situations.
Country/Region  Supported Mixed  Code Points for         Code Points for First
                Code Page        Single-Byte Characters  Byte of Double-Byte
                                                         Characters
Japan           932, 943         X'00'-X'7F',            X'81'-X'9F',
                                 X'A1'-X'DF'             X'E0'-X'FC'
Japan           942              X'00'-X'80',            X'81'-X'9F',
                                 X'A0'-X'DF',            X'E0'-X'FC'
                                 X'FD'-X'FF'
Taiwan          938 (*)          X'00'-X'7E'             X'81'-X'FC'
Taiwan          948 (*)          X'00'-X'80', X'FD',     X'81'-X'FC'
                                 X'FE'
Korea           949              X'00'-X'7F'             X'8F'-X'FE'
Taiwan          950              X'00'-X'7E'             X'81'-X'FE'
China           1381             X'00'-X'7F'             X'8C'-X'FE'
Korea           1363             X'00'-X'7F'             X'81'-X'FE'
China           1386             X'00'                   X'81'-X'FE'
Code points not assigned to either of these categories are not defined, and are
processed as single-byte undefined code points.
Within each implied DBCS code table, there are 256 code points available as
the second byte for each valid first byte. Second byte values can have any
value from X'40' to X'7E', and from X'80' to X'FE'. Note that in DBCS
environments, DB2 does not perform validity checking on individual
double-byte characters.
Japanese EUC (code page 954):

Group   1st Byte      2nd Byte      3rd Byte      4th Byte
G0      X'20'-X'7E'   n/a           n/a           n/a
G1      X'A1'-X'FE'   X'A1'-X'FE'   n/a           n/a
G2      X'8E'         X'A1'-X'FE'   n/a           n/a
G3      X'8F'         X'A1'-X'FE'   X'A1'-X'FE'   n/a

Korean EUC (code page 970):

Group   1st Byte      2nd Byte      3rd Byte      4th Byte
G0      X'20'-X'7E'   n/a           n/a           n/a
G1      X'A1'-X'FE'   X'A1'-X'FE'   n/a           n/a
G2      n/a           n/a           n/a           n/a
G3      n/a           n/a           n/a           n/a

Traditional Chinese EUC (code page 964):

Group   1st Byte      2nd Byte      3rd Byte      4th Byte
G0      X'20'-X'7E'   n/a           n/a           n/a
G1      X'A1'-X'FE'   X'A1'-X'FE'   n/a           n/a
G2      X'8E'         X'A1'-X'FE'   X'A1'-X'FE'   X'A1'-X'FE'
G3      n/a           n/a           n/a           n/a

Simplified Chinese EUC (code page 1383):

Group   1st Byte      2nd Byte      3rd Byte      4th Byte
G0      X'20'-X'7E'   n/a           n/a           n/a
G1      X'A1'-X'FE'   X'A1'-X'FE'   n/a           n/a
G2      n/a           n/a           n/a           n/a
G3      n/a           n/a           n/a           n/a

Code points not assigned to any of these categories are not defined, and are
processed as single-byte undefined code points.
PATCH2 = 7
This forces the driver to map all graphic column data types to char
column data type. This is needed in a double byte environment.
PATCH2 = 10
This setting should only be used in an EUC (Extended Unix Code)
environment. It ensures that the CLI driver provides data for character
variables (CHAR, VARCHAR, and so on) in the proper format for the JDBC
driver. The data in these character types will not be usable in JDBC
without this setting.
Note:
1. Each of these keywords is set in each database specific stanza of the
db2cli.ini file. If you want to set them for multiple databases then
you need to repeat them for each database stanza in db2cli.ini.
2. To set multiple PATCH1 values you add the individual values and
use the sum. To set PATCH1 to both 64 and 65536 you would set
PATCH1=65600 (64+65536). If you already have other PATCH1
values set then replace the existing number with the sum of the
existing number and the new PATCH1 values you want to add.
3. To set multiple PATCH2 values, you specify them in a comma-delimited
   string (unlike the PATCH1 option). To set PATCH2 values 1 and 7, you
   would set PATCH2=1,7.
For more information about setting these keywords refer to the
Installation and Configuration Supplement.
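Putting notes 2 and 3 together, a database-specific stanza in db2cli.ini might look like the following sketch (the database alias SAMPLE is hypothetical):

```
[SAMPLE]
PATCH1=65600
PATCH2=1,7
```

Here 65600 is the sum 64+65536 from note 2, and PATCH2 carries the comma-delimited values 1 and 7 from note 3.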
characters from both the Japanese and Traditional Chinese EUC code pages.
To overcome this situation, support is provided at both the application level
and the database level to represent Japanese and Traditional Chinese EUC
graphic data using another encoding scheme.
DB2 Universal Database supports all the Unicode characters that can be
encoded using UCS-2, but does not perform any composition, decomposition,
or normalization of characters. More information about the Unicode standard
can be found at the Unicode Consortium web site, www.unicode.org, and from
the latest edition of the Unicode Standard book published by Addison Wesley
Longman, Inc.
If you are working with applications or databases using these character sets,
you may need to consider dealing with UCS-2 encoded data. When
converting UCS-2 graphic data to the application's EUC code page, there is
the possibility of an increase in the length of data. For details of data
expansion, see Code Page Conversion Expansion Factor on page 517. When
large amounts of data are being displayed, it may be necessary to allocate
buffers, convert, and display the data in a series of fragments.
The following sections discuss how to handle data in this environment. For
these sections, the term EUC is used to refer only to Japanese and Traditional
Chinese EUC character sets. Note that the discussions do not apply to DB2
Korean or Simplified-Chinese EUC support since graphic data in these
character sets is represented using the EUC encoding.
Mixed EUC and Double-Byte Client and Database Considerations
The administration of database objects in mixed EUC and double-byte code
page environments is complicated by the possible expansion or contraction in
the length of object names as a result of conversions between the client and
database code page. In particular, many administrative commands and
utilities have documented limits to the lengths of character strings which they
may take as input or output parameters. These limits are typically enforced at
the client, unless documented otherwise. For example, the limit for a table
name is 128 bytes. It is possible that a character string which is 128 bytes
under a double-byte code page is larger, say 135 bytes, under an EUC code
page. This hypothetical 135-byte table name would be considered invalid by
such commands as REORGANIZE TABLE if used as an input parameter
despite being valid in the target double-byte database. Similarly, the
maximum permitted length of output parameters may be exceeded, after
conversion, from the database code page to the application code page. This
may cause either a conversion error or output data truncation to occur.
UCS-2     EUC 946     UCS-2
C4A1      A7A1        C4A1
Thus, the original code points A7A1 and C4A1 end up as code point C4A1 after
conversion.
If you require the code page conversion tables for EUC code pages 946
(Traditional Chinese EUC) or 950 (Traditional Chinese Big-5) and UCS-2, see
the online Product and Service Technical Library
(http://www.ibm.com/software/data/db2/library/).
Developing Japanese or Traditional Chinese EUC Applications
When developing EUC applications, you need to consider the following items:
v Graphic Data Handling
v Developing for Mixed Code Set Environments
For additional considerations for stored procedures, see Considerations for
Stored Procedures on page 525. Additional language-specific application
development issues are discussed in:
v Japanese or Traditional Chinese EUC, and UCS-2 Considerations in C and
C++ on page 626 (for C and C++).
v Japanese or Traditional Chinese EUC, and UCS-2 Considerations for
COBOL on page 699 (for COBOL).
v Japanese or Traditional Chinese EUC, and UCS-2 Considerations for
FORTRAN on page 715 (for FORTRAN).
v Japanese or Traditional Chinese EUC Considerations for REXX on
page 734 (for REXX).
Graphic Data Handling: This section discusses EUC application
development considerations in order to handle graphic data. This includes
handling graphic constants, and handling graphic data in UDFs, stored
procedures, DBCLOB files, as well as collation.
Graphic Constants: Graphic constants, or literals, are actually classified as
mixed character data as they are part of an SQL statement. Any graphic
constants in an SQL statement from a Japanese or Traditional Chinese EUC
client are implicitly converted to the graphic encoding by the database server.
You can use graphic literals that are composed of EUC encoded characters in
your SQL applications. An EUC database server will convert these literals to
the graphic database code set which will be UCS-2. Graphic constants from
EUC clients should never contain single-width characters such as CS0 7-bit
ASCII characters or Japanese EUC CS2 (Katakana) characters.
   expansion_factor = ABS[SQLERRD(2)]
   if expansion_factor = 0
      expansion_factor = 1
Data Type       Maximum Length in Bytes
CHAR            254
VARCHAR         32 672
LONG VARCHAR    32 700
CLOB            2 gigabytes
All the above checks are required to allow for overflow which may occur
during the length calculation. The specific checks are:
Application     Database        SQLERRD(1)      SQLERRD(2)
Code Page       Code Page
SBCS            SBCS            +1              +1
DBCS            DBCS            +1              +1
eucJP           eucJP           +1              +1
eucJP           DBCS            -1              +2
DBCS            eucJP           +2              -1
eucTW           eucTW           +1              +1
eucTW           DBCS            -1              +2
DBCS            eucTW           +2              -1
eucKR           eucKR           +1              +1
eucKR           DBCS            +1              +1
DBCS            eucKR           +1              +1
eucCN           eucCN           +1              +1
eucCN           DBCS            +1              +1
DBCS            eucCN           +1              +1
commands and APIs, the tokens within SQL statements are not verified until
they have been converted to the database's code page. This can lead to
situations where it is possible to use an SQL statement in an unequal code
page environment to access a database object, such as a table, but it will not
be possible to access the same object using a particular command or API.
Consider an application that returns data contained in a table provided by an
end-user, and checks that the table name is not greater than 128 bytes long.
Now consider the following scenarios for this application:
1. A DBCS database is created. From a DBCS client, a table (t1) is created
with a table name which is 128 bytes long. The table name includes
several characters which would be greater than two bytes in length if the
string is converted to EUC, resulting in the EUC representation of the table
name being a total of 131 bytes in length. Since there is no expansion for
DBCS to DBCS connections, the table name is 128 bytes in the database
environment, and the CREATE TABLE is successful.
2. An EUC client connects to the DBCS database. It creates a table (t2) with a
table name which is 120 bytes long when encoded as EUC and 100 bytes
long when converted to DBCS. The table name in the DBCS database is
100 bytes. The CREATE TABLE is successful.
3. The EUC client creates a table (t3) with a table name that is 64 EUC
characters in length (131 bytes). When this name is converted to DBCS its
length shrinks to the 128 byte limit. The CREATE TABLE is successful.
4. The EUC client invokes the application against each of the tables (t1,
   t2, and t3) in the DBCS database, which results in:

   Table   Result
   t1      The application considers the table name invalid because it
           is 131 bytes long.
   t2      Displays correct results.
   t3      The application considers the table name invalid because it
           is 131 bytes long.
5. The EUC client is used to query the DBCS database from the CLP.
Although the table name is 131 bytes long on the client, the queries are
successful because the table name is 128 bytes long at the server.
Using the DESCRIBE Statement: A DESCRIBE performed against an EUC
database will return information about mixed character and GRAPHIC
columns based on the definition of these columns in the database. This
information is based on the code page of the server, before it is converted to
the client's code page.
When you perform a DESCRIBE against a select list item which is resolved in
the application context (for example, VALUES SUBSTR(?,1,2)), then for any
character or graphic data involved, you should evaluate the returned SQLLEN
value along with the returned code page. If the returned code page is the
same as the application code page, there is no expansion. If the returned code
page is the same as the database code page, expansion is possible. Select list
items which are FOR BIT DATA (code page 0), or in the application code page
are not converted when returned to the application, therefore there is no
expansion or contraction of the reported length.
EUC Application with DBCS Database: If your application's code page is an
EUC code page, and it issues a DESCRIBE against a database with a DBCS
code page, the information returned for CHAR and GRAPHIC columns is
returned in the database context. For example, a CHAR(5) column returned as
part of a DESCRIBE has a value of five for the SQLLEN field. In the case of
non-EUC data, you allocate five bytes of storage when you fetch the data
from this column. With EUC data, this may not be the case. When the code
page conversion from DBCS to EUC takes place, there may be an increase in
the length of the data due to the different encoding used for characters for
CHAR columns. For example, with the Traditional Chinese character set, the
maximum increase is double. That is, the maximum character length in the
DBCS encoding is two bytes which may increase to a maximum character
length of four bytes in EUC. For the Japanese code set, the maximum increase
is also double. Note, however, that while the maximum character length in
Japanese DBCS is two bytes, it may increase to a maximum character length
in Japanese EUC of three bytes. Although this increase appears to be only by
a factor of 1.5, the single-byte Katakana characters in Japanese DBCS are only
one byte in length, while they are two bytes in length in Japanese EUC. See
Code Page Conversion Expansion Factor on page 517 for more information
on determining the maximum size.
Possible changes in data length as a result of character conversions apply only
to mixed character data. Graphic character data encoding is always the same
length, two bytes, regardless of the encoding scheme. To avoid losing the
data, you need to evaluate whether an unequal code page situation exists, and
whether or not it is between a EUC application and a DBCS database. You can
determine the database code page and the application code page from tokens
in the SQLCA returned from a CONNECT statement. For more information,
see Deriving Code Page Values on page 509, or refer to the SQL Reference. If
such a situation exists, your application needs to allocate additional storage
for mixed character data, based on the maximum expansion factor for that
encoding scheme.
DBCS Application with EUC Database: If your application code page is a DBCS
code page and issues a DESCRIBE against an EUC database, a situation
similar to that in EUC Application with DBCS Database occurs. However, in
this case, your application may require less storage than indicated by the
value of the SQLLEN field. The worst case in this situation is that all of the
data is single-byte or double-byte under EUC, meaning that exactly SQLLEN
bytes are required under the DBCS encoding scheme. In any other situation,
less than SQLLEN bytes are required because a maximum of two bytes are
required to store any EUC character.
Using Fixed or Variable Length Data Types: Due to the possible change in
length of strings when conversions occur between DBCS and EUC code pages,
you should consider not using fixed length data types. Depending on whether
you require blank padding, you should consider changing the SQLTYPE from
a fixed length character string, to a varying length character string after
performing the DESCRIBE. For example, if an EUC to DBCS connection is
informed of a maximum expansion factor of two, the application should
allocate ten bytes (based on the CHAR(5) example in EUC Application with
DBCS Database on page 531).
If the SQLTYPE is fixed-length, the EUC application will receive the column
as an EUC data stream converted from the DBCS data (which itself may have
up to five bytes of trailing blank pads) with further blank padding if the code
page conversion does not cause the data element to grow to its maximum
size. If the SQLTYPE is varying-length, the original meaning of the content of
the CHAR(5) column is preserved, however, the source five bytes may have a
target of between five and ten bytes. Similarly, in the case of possible data
shrinkage (DBCS application and EUC database), you should consider
working with varying-length data types.
An alternative to either allocating extra space or promoting the data type is to
select the data in fragments. For example, to select the same VARCHAR(3000)
which may be up to 6000 bytes in length after the conversion you could
perform two selects, of SUBSTR(VC3000, 1, LENGTH(VC3000)/2) and
SUBSTR(VC3000, (LENGTH(VC3000)/2)+1), separately into two VARCHAR(3000)
application areas. This method is the only possible solution when the data
type is no longer promotable. For example, a CLOB encoded in the Japanese
DBCS code page with the maximum length of 2 gigabytes is possibly up to
twice that size when encoded in the Japanese EUC code page. This means that
the data will have to be broken up into fragments since there is no support
for a data type in excess of 2 gigabytes in length.
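For example, the two-fragment retrieval described above might be coded along the following lines in embedded SQL (the table, column, and host variable names are hypothetical):

```
EXEC SQL SELECT SUBSTR(VC3000, 1, LENGTH(VC3000)/2)
         INTO :frag1 FROM T1 WHERE KEY = :key;
EXEC SQL SELECT SUBSTR(VC3000, (LENGTH(VC3000)/2)+1)
         INTO :frag2 FROM T1 WHERE KEY = :key;
```

Each fragment is at most half the original string, so even after the worst-case doubling during conversion it still fits in a VARCHAR(3000) application area; the two fragments together hold the full, possibly expanded, string.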
Code Page Conversion String Length Overflow: In EUC and DBCS unequal
code page environments, situations may occur after conversion takes place,
when there is not enough space allocated in a column to accommodate the
entire string. In this case, the maximum expansion will be twice the length of
the string in bytes. In cases where expansion does exceed the capacity of the
column, SQLCODE -334 (SQLSTATE 22524) is returned.
This leads to situations that may not be immediately obvious or previously
considered as follows:
code pages are different. In cases where string length expansion occurs as a
result of conversion, you receive an SQLCODE -334 (SQLSTATE 22524) if
there is not enough space allocated to handle the expansion. Thus you must
be sure to provide enough space for potentially expanding strings when
developing stored procedures. You should use varying length data types with
enough space allocated to allow for expansion.
When DB2 converts characters from a code page to UTF-8, the total number
of bytes that represent the characters may expand or shrink, depending on the
code page and the code points of the characters. 7-bit ASCII remains invariant
in UTF-8, and each ASCII character requires one byte. Non-ASCII characters
become more than one byte each. For more information about UTF-8
conversions, refer to the Administration Guide, or refer to the Unicode standard
documents.
For applications that connect to a Unicode database, GRAPHIC data is already
in Unicode. For applications that connect to DBCS databases, GRAPHIC data
is converted between the application DBCS code page and the database DBCS
code page. Unicode applications should perform the necessary conversions to
and from Unicode themselves, or should set the WCHARTYPE CONVERT option
and use wchar_t for graphic data. For more details about this option, please
see Handling Graphic Host Variables in C and C++ on page 621.
Multisite Update
Multisite update, also known as distributed unit of work (DUOW) and
two-phase commit, is a function that enables your applications to update data
in multiple remote database servers with guaranteed integrity. A good
example of a multisite update is a banking transaction that involves the
transfer of money from one account to another in a different database server.
In such a transaction, it is critical that the updates that implement the debit
operation on one account do not get committed unless the updates required
to process the credit to the other account are committed as well. The
multisite update considerations apply when data representing these accounts
is managed by two different database servers.
You can use multisite update to read and update multiple DB2 Universal
Database databases within a unit of work. If you have installed DB2 Connect
or use the DB2 Connect capability provided with DB2 Universal Database
Enterprise Edition you can also use multisite update with host or AS/400
database servers, such as DB2 Universal Database for OS/390 and DB2
Universal Database for AS/400. Certain restrictions apply when you use
multisite update with other database servers, as described in Multisite
Update with DB2 Connect on page 799.
A transaction manager coordinates the commit among multiple databases. If
you use a transaction processing (TP) monitor environment such as TxSeries
CICS, the TP monitor uses its own transaction manager. Otherwise, the
transaction manager supplied with DB2 is used. DB2 Universal Database for
OS/2, UNIX, and Windows 32-bit operating systems is an XA (extended
architecture) compliant resource manager. Host and AS/400 database servers
that you access with DB2 Connect are XA compliant resource managers. Also
note that the DB2 Universal Database transaction manager is not an XA
compliant transaction manager, meaning the transaction manager can only
coordinate DB2 databases.
For detailed information about multisite update, refer to the Administration
Guide.
When to Use Multisite Update
Multisite Update is most useful when you want to work with two or more
databases and maintain data integrity. For example, if each branch of a bank
has its own database, a money transfer application could do the following:
v Connect to the sender's database.
v Read the sender's account balance and verify that enough money is present.
v Reduce the sender's account balance by the transfer amount.
v Connect to the recipient's database.
v Increase the recipient's account balance by the transfer amount.
v Commit the databases.
By doing this within one unit of work, you ensure that either both databases
are updated or neither database is updated.
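The transfer above can be sketched as the following multisite update statement sequence (the database aliases, table, and host variable names are hypothetical):

```
CONNECT TO SAVINGS_DB
SELECT BALANCE INTO :bal FROM ACCOUNTS WHERE ACCT_ID = :sender
UPDATE ACCOUNTS SET BALANCE = BALANCE - :amount WHERE ACCT_ID = :sender
CONNECT TO CHECKING_DB
UPDATE ACCOUNTS SET BALANCE = BALANCE + :amount WHERE ACCT_ID = :recipient
COMMIT
```

Because both updates fall inside one unit of work, the single COMMIT either applies both of them or, on failure, neither.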
Coding SQL for a Multisite Update Application
Table 26 on page 537 illustrates how you code SQL statements for multisite
update. The left column shows SQL statements that do not use multisite
update; the right column shows similar statements with multisite update.
Table 26. RUOW and Multisite Update SQL Statements

RUOW Statements          Multisite Update Statements
CONNECT TO D1            CONNECT TO D1
SELECT                   SELECT
UPDATE                   UPDATE
COMMIT                   CONNECT TO D2
CONNECT TO D2            INSERT
INSERT                   RELEASE CURRENT
COMMIT                   SET CONNECTION D1
CONNECT TO D1            SELECT
SELECT                   RELEASE D1
COMMIT                   COMMIT
CONNECT RESET
The SQL statements in the left column access only one database for each unit
of work. This is a remote unit of work (RUOW) application.
The SQL statements in the right column access more than one database within
a unit of work. This is a multisite update application.
Some SQL statements are coded and interpreted differently in a multisite
update application:
v The current unit of work does not need to be committed or rolled back
before you connect to another database.
v When you connect to another database, the current connection is not
disconnected. Instead, it is put into a dormant state. If the CONNECT
statement fails, the current connection is not affected.
v You cannot connect with the USER/USING clause if a current or dormant
connection to the database already exists.
v You can use the SET CONNECTION statement to change a dormant
connection to the current connection.
You can also accomplish the same thing by issuing a CONNECT statement
to the dormant database. This is not allowed if you set SQLRULES to STD.
You can set the value of SQLRULES using a precompiler option or the SET
CLIENT command or API. The default value of SQLRULES (DB2) allows
you to switch connections using the CONNECT statement.
v In a select, the cursor position is not affected if you switch to another
database and then back to the original database.
v The CONNECT RESET statement does not disconnect the current
connection and does not implicitly commit the current unit of work.
For more information on setting the connection type, refer to the Command
Reference. The following precompiler options are used when you precompile
an application which uses multisite updates:
CONNECT (1 | 2)
Specify CONNECT 2 to indicate that this application uses the SQL
syntax for multisite update applications, as described in Coding SQL
for a Multisite Update Application on page 536. The default,
CONNECT 1, means that the normal (RUOW) rules for SQL syntax
apply to the application.
SYNCPOINT (ONEPHASE | TWOPHASE | NONE)
If you specify SYNCPOINT TWOPHASE and DB2 coordinates the
transaction, DB2 requires a database to maintain the transaction state
information. When you deploy your application, you must define this
database by configuring the database manager configuration
parameter TM_DATABASE. For more information on the
TM_DATABASE database manager configuration parameter, refer to
the Administration Guide. For information on how these SYNCPOINT
options impact the way your program operates, refer to the concepts
section of the SQL Reference.
SQLRULES (DB2 | STD)
Specifies whether DB2 rules or standard (STD) rules based on
ISO/ANSI SQL92 should be used in multisite update applications.
DB2 rules allow you to issue a CONNECT statement to a dormant
database; STD rules do not allow this.
DISCONNECT (EXPLICIT | CONDITIONAL | AUTOMATIC)
Specifies which database connections are disconnected at COMMIT:
only databases that are marked for release with a RELEASE statement
(EXPLICIT), all databases that have no open WITH HOLD cursors
(CONDITIONAL), or all connections (AUTOMATIC).
For a more detailed description of these precompiler options, refer to the
Command Reference.
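For instance, precompiling an application that uses all four options might look like the following command sketch (myapp.sqc is a hypothetical source file, and the option values shown are just one plausible combination):

```
db2 PREP myapp.sqc CONNECT 2 SYNCPOINT TWOPHASE SQLRULES DB2 DISCONNECT EXPLICIT
```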
Multisite update precompiler options become effective when the first database
connection is made. You can use the SET CLIENT API to supersede
connection settings when there are no existing connections (before any
connection is established or after all connections are disconnected). You can
use the QUERY CLIENT API to query the current connection settings of the
application process.
The binder fails if an object referenced in your application program does not
exist. There are three possible ways to deal with multisite update applications:
v You can split the application into several files, each of which accesses only
one database. You then prep and bind each file against the one database
that it accesses.
v You can ensure that each table exists in each database. For example, the
branches of a bank might have databases whose tables are identical (except
for the data).
v You can use only dynamic SQL.
Specifying Configuration Parameters for a Multisite Update Application
For information on performing multisite updates coordinated by an XA
transaction manager with connections to a host or AS/400 database, refer to
the DB2 Connect User's Guide.
The following configuration parameters affect applications which perform
multisite updates. With the exception of LOCKTIMEOUT, the configuration
parameters are database manager configuration parameters. LOCKTIMEOUT
is a database configuration parameter.
TM_DATABASE
Specifies which database will act as a transaction manager for
two-phase commit transactions.
RESYNC_INTERVAL
Specifies the number of seconds that the system waits between
attempts to try to resynchronize an indoubt transaction. (An indoubt
transaction is a transaction that successfully completes the first phase
of a two-phase commit but fails during the second phase.)
LOCKTIMEOUT
Specifies the number of seconds before a lock wait will time-out and
roll back the current transaction for a given database. The application
must issue an explicit ROLLBACK to roll back all databases that
participate in the multisite update. LOCKTIMEOUT is a database
configuration parameter.
TP_MON_NAME
Specifies the name of the TP monitor, if any.
SPM_RESYNC_AGENT_LIMIT
Specifies the number of simultaneous agents that can perform resync
operations with the host or AS/400 server using SNA.
SPM_NAME
v If SPM is being used with a TCP/IP 2PC connection, then the
  SPM_NAME must be a unique identifier within the network.
When you create a DB2 instance, DB2 derives the default value of
SPM_NAME from the TCP/IP hostname. You may modify this
value if it is not acceptable in your environment. For TCP/IP
v sqleAttachToCtx()
v sqleDetachFromCtx()
v sqleGetCurrentCtx()
v sqleInterruptCtx()
|
|
544
|
|
|
|
|
545
Suppose the first context successfully executes the SELECT and the
UPDATE statements while the second context gets the semaphore and
accesses the data structure. The first context now tries to get the semaphore,
but it cannot because the second context is holding the semaphore. The
second context now attempts to read a row from table TAB1, but it stops on
a database lock held by the first context. The application is now in a state
where context 1 cannot finish before context 2 is done and context 2 is
waiting for context 1 to finish. The application is deadlocked, but because
the database manager does not know about the semaphore dependency,
neither context will be rolled back. This leaves the application suspended.
Preventing Deadlocks for Multiple Contexts
Because the database manager cannot detect deadlocks between threads,
design and code your application in a way that will prevent deadlocks (or at
least allow them to be avoided). In the above example, you can avoid the
deadlock in several ways:
v Release all locks held before obtaining the semaphore.
Change the code for context 1 to perform a commit before it gets the
semaphore.
v Do not code SQL statements inside a section protected by semaphores.
Change the code for context 2 to release the semaphore before doing the
SELECT.
v Code all SQL statements within semaphores.
Change the code for context 1 to obtain the semaphore before running the
SELECT statement. While this technique will work, it is not highly
recommended because the semaphores will serialize access to the database
manager, which potentially negates the benefits of using multiple threads.
v Set the LOCKTIMEOUT database configuration parameter to a value other
than -1.
While this will not prevent the deadlock, it will allow execution to resume.
Context 2 is eventually rolled back because it is unable to obtain the
requested lock. When handling the roll back error, context 2 should release
the semaphore. Once the semaphore has been released, context 1 can
continue and context 2 is free to retry its work.
The techniques for avoiding deadlocks are shown in terms of the above
example, but you can apply them to all multithreaded applications. In general,
treat the database manager as you would treat any protected resource and
you should not run into problems with multithreaded applications.
Concurrent Transactions
Sometimes it is useful for an application to have multiple independent
connections called concurrent transactions. Using concurrent transactions, an
application can connect to several databases at the same time, and can
establish several distinct connections to the same database.
The context APIs described in "Multiple Thread Database Access" on page 543
allow an application to use concurrent transactions. Each context created in an
application is independent from the other contexts. This means you create a
context, connect to a database using the context, and run SQL statements
against the database without being affected by the activities of other
contexts, such as their COMMIT or ROLLBACK statements.
For example, suppose you are creating an application that allows a user to
run SQL statements against one database, and keeps a log of the activities
performed in a second database. Since the log must be kept up to date, it is
necessary to issue a COMMIT statement after each update of the log, but you
do not want the user's SQL statements affected by commits for the log. This is
a perfect situation for concurrent transactions. In your application, create two
contexts: one connects to the user's database and is used for all the user's
SQL; the other connects to the log database and is used for updating the log.
With this design, when you commit a change to the log database, you do not
affect the user's current unit of work.
Another benefit of concurrent transactions is that if the work on the cursors in
one connection is rolled back, it has no effect on the cursors in other
connections. After the rollback in one connection, both the work done and
the cursor positions are still maintained in the other connections.
Suppose the first context successfully executes the UPDATE statement. The
update establishes locks on all the rows of TAB1. Now context 2 tries to
select all the rows from TAB1. Since the two contexts are independent,
context 2 waits on the locks held by context 1. Context 1, however, cannot
release its locks until context 2 finishes executing. The application is now
deadlocked, but the database manager does not know that context 1 is
waiting on context 2 so it will not force one of the contexts to be rolled
back. This leaves the application suspended.
Preventing Deadlocks for Concurrent Transactions
Because the database manager cannot detect deadlocks between contexts, you
must design and code your application in a way that will prevent deadlocks
(or at least avoids deadlocks). In the above example, you can avoid the
deadlock in several ways:
v Release all locks held before switching contexts.
Change the code so that context 1 performs its commit before switching to
context 2.
v Do not access a given object from more than one context at a time.
Change the code so that both the update and the select are done from the
same context.
v Set the LOCKTIMEOUT database configuration parameter to a value other
than -1.
While this will not prevent the deadlock, it will allow execution to resume.
Context 2 is eventually rolled back because it is unable to obtain the
requested lock. Once context 2 is rolled back, context 1 can continue
executing (which releases the locks) and context 2 can retry its work.
The techniques for avoiding deadlocks are shown in terms of the above
example, but you can apply them to all applications which use concurrent
transactions.
Application Linkage
To produce an executable application, you need to link in the application
objects with the language libraries, the operating system libraries, the normal
database manager libraries, and the libraries of the TP monitor and
transaction manager products.
v VARGRAPHIC
v LONG VARGRAPHIC
v DBCLOB
See "Data Types" on page 77 for more information about this topic.
Note: Be sure to consider the possibility of character conversion when using
this technique. If you are passing data with one of the character string
data types such as VARCHAR, LONG VARCHAR, or CLOB, or graphic
data types such as VARGRAPHIC, LONG VARGRAPHIC, or
DBCLOB, and the application code page is not the same as the
database code page, any non-character data will be converted as if it
were character data. To avoid character conversion, you should pass
data in a variable with a data type of BLOB.
See "Conversion Between Different Code Pages" on page 515 for more
information about how and when data conversion occurs.
Improving Performance
To take advantage of the performance benefits that partitioned environments
offer, you should consider using special programming techniques. For
example, if your application accesses DB2 data from more than one database
manager partition, you need to consider the information contained herein. For
an overview of partitioned environments, refer to the Administration Guide and
the SQL Reference.
Instead, break the query into multiple SELECT statements (each with a single
host variable) or use a single SELECT statement with a UNION to achieve the
same result. The coordinator partition can take advantage of simpler SELECT
statements to use directed DSS to communicate only to the necessary
partitions. The optimized query looks like:
SELECT ... AS res1 FROM t1
WHERE PARTKEY=:hostvar1
UNION
SELECT ... AS res2 FROM t1
WHERE PARTKEY=:hostvar2
Note that the above technique will only improve performance if the number
of selects in the UNION is significantly smaller than the number of partitions.
Using Local Bypass
A specialized form of the directed DSS query accesses data stored only on the
coordinator partition. This is called a local bypass because the coordinator
partition completes the query without having to communicate with another
partition.
Local bypass is enabled automatically whenever possible, but you can increase
its use by routing transactions to the partition containing the data for that
transaction. One technique for doing this is to have a remote client maintain
connections to each partition. A transaction can then use the correct
the buffers and the rows are inserted. It then executes the other
statement (the one closing the buffered insert), provided all the
rows were successfully inserted. See "Considerations for Using
Buffered Inserts" on page 560 for additional details.
The standard interface in a partitioned environment (without a buffered
insert) loads one row at a time, doing the following steps (assuming that the
application is running locally on one of the partitions):
1. The coordinator node passes the row to the database manager that is on
the same node.
2. The database manager uses indirect hashing to determine the partition
where the row should be placed:
v The target partition receives the row.
v The target partition inserts the row locally.
v The target partition sends a response to the coordinator node.
3. The coordinator node receives the response from the target partition.
4. The coordinator node gives the response to the application.
The insertion is not committed until the application issues a COMMIT.
Any INSERT statement containing the VALUES clause is a candidate for
buffered insert, regardless of the number of rows or the type of elements
in the rows. That is, the elements can be constants, special registers, host
variables, expressions, functions and so on.
For a given INSERT statement with the VALUES clause, the DB2 SQL
compiler may not buffer the insert based on semantic, performance, or
implementation considerations. If you prepare or bind your application with
the INSERT BUF option, ensure that it is not dependent on a buffered insert.
This means:
v Errors may be reported asynchronously for buffered inserts, or
synchronously for regular inserts. If reported asynchronously, an insert
error may be reported on a subsequent insert within the buffer, or on the
other statement that closes the buffer. The statement that reports the error
is not executed. For example, suppose a COMMIT statement closes a
buffered insert loop, and the commit reports an SQLCODE -803 (SQLSTATE
23505) due to a duplicate key from an earlier insert. In this scenario, the
commit is not executed. If you want your application to commit, for
example, updates that were performed before it entered the buffered
insert loop, you must reissue the COMMIT statement.
v Rows inserted may be immediately visible through a SELECT statement
using a cursor without a buffered insert. With a buffered insert, the rows
will not be immediately visible. Do not write your application to depend on
these cursor-selected rows if you precompile or bind it with the INSERT
BUF option.
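The COMMIT-reissue rule above can be sketched in embedded SQL. This is a hedged illustration only: fetch_next_value and handle_insert_error are hypothetical helper routines, and the program is assumed to be precompiled with the INSERT BUF option.

```
EXEC SQL COMMIT;                      /* commit work done before the loop    */
while (fetch_next_value(&hv) == 0) {  /* hypothetical input routine          */
    EXEC SQL INSERT INTO t2 VALUES (:hv);
    if (SQLCODE < 0) break;           /* error may surface on a later row    */
}
EXEC SQL COMMIT;                      /* closes the buffered insert          */
if (SQLCODE < 0) {
    /* The COMMIT reported a buffered-insert error and was not executed. */
    handle_insert_error();            /* hypothetical error handler          */
    EXEC SQL COMMIT;                  /* reissue to commit the work that did
                                         succeed before the loop             */
}
```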
Chapter 18. Programming Considerations in a Partitioned Environment
Suppose the file contains 8 000 values, but value 3 258 is not legal (for
example, a unique key violation). Each 1 000 inserts results in the execution of
another SQL statement, which then closes the INSERT INTO t2 statement.
During the fourth group of 1 000 inserts, the error for value 3 258 will be
detected. It may be detected after the insertion of more values (not necessarily
the next one). In this situation, an error code is returned for the INSERT INTO
t2 statement.
The error may also be detected when an insertion is attempted on table t3,
which closes the INSERT INTO t2 statement. In this situation, the error code is
returned for the INSERT INTO t3 statement, even though the error applies to
table t2.
Suppose, instead, that you have 3 900 rows to insert. Before being told of the
error on row number 3 258, the application may exit the loop and attempt to
issue a COMMIT. The unique-key-violation return code will be issued for the
COMMIT statement, and the COMMIT will not be performed. If the
application wants to COMMIT the 3 000 rows that are in the database thus
far (the last execution of EXEC SQL INSERT INTO t3 ... ends the savepoint for
those 3 000 rows), then the COMMIT must be reissued. Similar
considerations apply to ROLLBACK as well.
Note: When using buffered inserts, you should carefully monitor the
SQLCODEs returned to avoid having the table in an indeterminate
state. For example, if you remove the SQLCODE < 0 clause from the
THEN DO statement in the above example, the table could end up
containing an indeterminate number of rows.
Restrictions on Using Buffered Inserts
The following restrictions apply:
v For an application to take advantage of the buffered inserts, one of the
following must be true:
The application must be precompiled with the PREP command or bound
with the BIND command, with the INSERT BUF option specified.
The application must be bound using the BIND or the PREP API with
the SQL_INSERT_BUF option.
v If the INSERT statement with VALUES clause includes long fields or LOBs
in the explicit or implicit column list, the INSERT BUF option is ignored for
that statement and a normal insert section is done, not a buffered insert.
This is not an error condition, and no error or warning message is issued.
v INSERT with fullselect is not affected by INSERT BUF. A buffered INSERT
does not improve the performance of this type of INSERT.
v Buffered inserts can be used only in applications, and not through
CLP-issued inserts, as these are done through the EXECUTE IMMEDIATE
statement.
The application can then be run from any supported client platform.
But, the following query could be run on each partition in the database (that
is, if there are five partitions, five separate queries are required, one at each
partition). Each query generates the set of all the employee names whose
record is on the particular partition where the query runs. Each local result set
can be redirected to a file. The result sets then need to be merged into a single
result set.
On AIX, you can use a property of Network File System (NFS) files to
automate the merge. If all the partitions direct their answer sets to the same
file on an NFS mount, the results are merged. Note that using NFS without
blocking the answer into large buffers results in very poor performance.
SELECT FIRSTNME, LASTNAME, JOB FROM EMPLOYEE WHERE WORKDEPT IS NOT NULL
AND NODENUMBER(LASTNAME) = CURRENT NODE
The result can either be stored in a local file (meaning that the final result
would be 20 files, each containing a portion of the complete answer set), or in
a single NFS-mounted file.
The following example uses the second method, so that the result is in a
single file that is NFS mounted across the 20 nodes. The NFS locking
mechanism ensures serialization of writes into the result file from the different
partitions. Note that this example, as presented, runs on the AIX platform
with an NFS file system installed.
#define _POSIX_SOURCE
#define INCL_32
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sqlenv.h>
#include <errno.h>
#include <sys/access.h>
#include <sys/flock.h>
#include <unistd.h>
exit(1);
}
else if ( argc == 5 ) {
strcpy( dbname, argv[2] ); /* get database name from the argument */
strcpy (userid, argv[3]);
strcpy (passwd, argv[4]);
EXEC SQL CONNECT TO :dbname IN SHARE MODE USER :userid USING :passwd;
if ( SQLCODE != 0 ) {
printf( "Error: CONNECT TO the database failed. SQLCODE = %ld\n",
SQLCODE );
exit( 1 );
}
}
else {
printf ("\nUSAGE: largevol txt_file database [userid passwd]\n\n");
exit( 1 ) ;
} /* endif */
/* Open the input file with the specified access permissions */
if ( ( iFileHandle = open(argv[1], iOpenOptions, 0666 ) ) == -1 ) {
printf( "Error: Could not open %s.\n", argv[1] ) ;
exit( 2 ) ;
}
/* Set up error and end of table escapes */
EXEC SQL WHENEVER SQLERROR GO TO ext ;
EXEC SQL WHENEVER NOT FOUND GO TO cls ;
/* Declare and open the cursor */
EXEC SQL DECLARE c1 CURSOR FOR
SELECT firstnme, lastname, job FROM employee
WHERE workdept IS NOT NULL
AND NODENUMBER(lastname) = CURRENT NODE;
EXEC SQL OPEN c1 ;
/* Set up the temporary buffer for storing the fetched result */
if ( ( file_buf = ( char * ) malloc( BUF_SIZE ) ) == NULL ) {
printf( "Error: Allocation of buffer failed.\n" ) ;
exit( 3 ) ;
}
memset( file_buf, 0, BUF_SIZE ) ; /* reset the buffer */
buffer_len = 0 ; /* reset the buffer length */
write_ptr = file_buf ; /* reset the write pointer */
/* For each fetched record perform the following:      */
/* - insert it into the buffer following the           */
/*   previously stored record                          */
/* - check if there is still enough space in the       */
/*   buffer for the next record and lock/write/        */
/*   unlock the file and initialize the buffer         */
/*   if not                                            */
do {
EXEC SQL FETCH c1 INTO :first_name, :last_name, :job_code;
buffer_len += sprintf( write_ptr, "%s %s %s\n",
first_name, last_name, job_code );
buffer_len = strlen( file_buf ) ;
/* Write the content of the buffer to the file if */
/* the buffer reaches the limit                   */
if ( buffer_len >= ( BUF_SIZE - MAX_RECORD_SIZE ) ) {
/* get excl. write lock */
lock_rc = fcntl( iFileHandle, lock_command, &lock );
if ( lock_rc != 0 ) goto file_lock_err;
/* position at the end of file */
lock_rc = lseek( iFileHandle, 0, SEEK_END );
if ( lock_rc < 0 ) goto file_seek_err;
/* write the buffer */
lock_rc = write( iFileHandle,
( void * ) file_buf, buffer_len );
if ( lock_rc < 0 ) goto file_write_err;
/* release the lock */
lock_rc = fcntl( iFileHandle, lock_command, &unlock );
if ( lock_rc != 0 ) goto file_unlock_err;
file_buf[0] = '\0' ; /* reset the buffer */
buffer_len = 0 ; /* reset the buffer length */
write_ptr = file_buf ; /* reset the write pointer */
}
else {
write_ptr = file_buf + buffer_len ; /* next write position */
}
} while (1) ;
cls:
/* Write the last piece of data out to the file */
if (buffer_len > 0) {
lock_rc = fcntl(iFileHandle, lock_command, &lock);
if (lock_rc != 0) goto file_lock_err;
lock_rc = lseek(iFileHandle, 0, SEEK_END);
if (lock_rc < 0) goto file_seek_err;
lock_rc = write(iFileHandle, (void *)file_buf, buffer_len);
if (lock_rc < 0) goto file_write_err;
lock_rc = fcntl(iFileHandle, lock_command, &unlock);
if (lock_rc != 0) goto file_unlock_err;
}
free(file_buf);
close(iFileHandle);
EXEC SQL CLOSE c1;
exit (0);
ext:
if ( SQLCODE != 0 )
printf( "Error: SQLCODE = %ld.\n", SQLCODE );
EXEC SQL WHENEVER SQLERROR CONTINUE;
EXEC SQL CONNECT RESET;
if ( SQLCODE != 0 ) {
printf( "CONNECT RESET Error: SQLCODE = %ld\n", SQLCODE );
exit(4);
}
exit (5);
file_lock_err:
printf("Error: file lock error = %ld.\n",lock_rc);
/* unconditional unlock of the file */
fcntl(iFileHandle, lock_command, &unlock);
exit(6);
file_seek_err:
printf("Error: file seek error = %ld.\n",lock_rc);
/* unconditional unlock of the file */
fcntl(iFileHandle, lock_command, &unlock);
exit(7);
file_write_err:
printf("Error: file write error = %ld.\n",lock_rc);
/* unconditional unlock of the file */
fcntl(iFileHandle, lock_command, &unlock);
exit(8);
file_unlock_err:
printf("Error: file unlock error = %ld.\n",lock_rc);
/* unconditional unlock of the file */
fcntl(iFileHandle, lock_command, &unlock);
exit(9);
}
This method is applicable not only to a select from a single table, but also to
more complex queries. If, however, the query requires noncollocated
operations (that is, the Explain shows more than one subsection besides the
Coordinator subsection), this can result in too many processes on some
partitions if the query is run in parallel on all partitions. In this situation, you
can store the query result in a temporary table TEMP on as many partitions as
required, then do the final extract in parallel from TEMP.
If you want to extract all employees, but only for selected job classifications,
you can define the TEMP table with the column names, FIRSTNME,
LASTNAME, and JOB, as follows:
INSERT INTO TEMP
SELECT FIRSTNME, LASTNAME, JOB FROM EMPLOYEE WHERE WORKDEPT IS NOT NULL
AND EMPNO NOT IN (SELECT EMPNO FROM EMP_ACT WHERE
EMPNO<200)
v Create the TEMP table with the NOT LOGGED INITIALLY attribute, then
COMMIT the unit of work that created the table to release any acquired
catalog locks.
v When you use the TEMP table, you should issue the following statements
in a single unit of work:
1. ALTER TABLE TEMP ACTIVATE NOT LOGGED INITIALLY WITH
EMPTY TABLE (to empty the TEMP table and turn logging off)
2. INSERT INTO TEMP SELECT FIRSTNAME...
3. COMMIT
This technique allows you to insert a large answer set into a table without
logging and without catalog contention. Note that any error in the unit of
work that activated the NOT LOGGED state results in an unusable TEMP
table. If this occurs, you will have to drop and recreate the TEMP table. For
this reason, you should not use this technique to add data to a table that
you could not easily recreate.
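The single unit of work described above might look like this (a sketch only; the column list and predicate are abbreviated from the earlier example):

```sql
-- All three statements must run in one unit of work: any error before
-- the COMMIT leaves TEMP unusable, so it would have to be dropped
-- and recreated.
ALTER TABLE TEMP ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;
INSERT INTO TEMP
  SELECT FIRSTNME, LASTNAME, JOB
  FROM EMPLOYEE
  WHERE WORKDEPT IS NOT NULL;
COMMIT;
```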
If you require the final answer set (which is the merged partial answer set
from all nodes) to be sorted, you can:
v Specify the ORDER BY clause on the final SELECT
v Do an extract into a separate file on each partition
v Merge the separate files into one output set using, for example, the sort -m
AIX command.
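The merge step might look like the following (file names are hypothetical; each partition is assumed to have produced an already-sorted extract):

```shell
# Each partition's extract is already sorted; sort -m merges the
# sorted files without re-sorting them.
printf 'ADAMSON\nBROWN\n'  > part1.txt
printf 'ANDERSON\nZHOU\n'  > part2.txt
sort -m part1.txt part2.txt > merged.txt
cat merged.txt   # ADAMSON, ANDERSON, BROWN, ZHOU
```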
Error-Handling Considerations
In a partitioned environment, DB2 breaks up SQL statements into subsections,
each of which is processed on the partition that contains the relevant data.
As a result, an error may occur on a partition that does not have access to the
application. This does not occur in a single-partition environment.
You should consider the following:
v Non-CURSOR (EXECUTE) non-severe errors
v CURSOR non-severe errors
v Severe errors
v Merged multiple SQLCA structures
v How to identify the partition that returned the error
If an application ends abnormally because of a severe error, indoubt
transactions may be left in the database. (An indoubt transaction pertains to
global transactions when one phase completes successfully, but the system
fails before a subsequent phase can complete, leaving the database in an
inconsistent state.) For information on handling them, see the Administration
Guide.
Severe Errors
If a severe error occurs in DB2 Universal Database, one of the following will
occur:
v The database manager on the node where the error occurs shuts down.
Active units of work are not rolled back.
In this situation, you must recover the node and any databases that were
active on the node when the shutdown occurred.
v All agents are forced off the database at the node where the error occurred.
All units of work on that database are rolled back.
In this situation, the database at the node where the error occurred is
marked as inconsistent. Any attempt to access it results in either SQLCODE
-1034 (SQLSTATE 58031) or SQLCODE -1015 (SQLSTATE 55025) being
returned. Before you or any other application on another node can access
the database at this node, you must run the RESTART DATABASE
command against the database. Refer to the Command Reference for
information on this command.
The severe error SQLCODE -1224 (SQLSTATE 55032) can occur for a variety of
reasons. If you receive this message, check the SQLCA, which will indicate
which node failed. Then check the db2diag.log file shared between the nodes
for details. See "Identifying the Partition that Returned the Error" on page 570
for additional information.
Debugging
You can use the tools described in the following sections to debug
your applications. For more information, refer to the Troubleshooting Guide.
When you suspect that your application or query is either stalled or looping,
issue the following command:
db2_all "db2 GET SNAPSHOT FOR AGENTS ON database"
Refer to the System Monitor Guide and Reference for information on how to read
the information collected from the snapshot, and for the details of using the
database system monitor.
distributes the queries to the appropriate data sources, collects the requested
data, and returns this data to the applications.
Applications can use DB2 SQL to request values of any data types that DB2
can recognize, except for LOB data types. To write to a data source (for
example, to update a data source table), an application must use the data
source's own SQL in a special mode called pass-through.
The federated database's system catalog contains information not only about
the objects in the database, but also about the data sources and certain tables,
views, and functions in them. The catalog, then, contains information about
the entire federated system; accordingly, it is called a global catalog.
For a high-level overview of DB2 federated systems, see the Administration
Guide: Planning. For an extended overview, see the SQL Reference. For
examples of DB2 SQL queries that an application can submit, see "Using
Distributed Requests to Query Data Sources" on page 583. For information
about pass-through, see "Using Pass-Through to Query Data Sources
Directly" on page 588.
You set column options in the ALTER NICKNAME statement. For information
about this statement, see the SQL Reference.
Using Nicknames with Views
You can use nicknames with views in two main ways:
v You can create nicknames for data source views. The federated server treats
the nickname of a data source view the same way it treats the nickname of
a data source table.
v You can create federated database views of data source tables and views
that have nicknames. For example, because the federated server can
accommodate a join of base tables at different locations, you can easily
define federated database views of base tables that reside at different data
sources. Such multi-location views offer a high degree of data independence
for a globally integrated database, just as views defined on multiple local
tables do for centralized relational database managers. This global view
mechanism is one way in which the federated server offers a high degree of
data independence.
The action of creating a federated database view of data source data is
sometimes called "creating a view on a nickname". This phrase reflects the
fact that for the view to be created, the CREATE VIEW statement's fullselect
must reference the nickname of each table and view that the view is to
contain.
Views do not have statistics or indexes of their own because they are not
actual tables located in a database. This statement is true even when a view is
identical in structure and content to a single base table. For more information
about statistics and indexes, see Administration Guide: Implementation.
CS
Cursor stability
RR
Repeatable read
RS
Read stability
UR
Uncommitted read
v The Oracle isolation levels that the requested levels map to.
Table 28. Comparable Isolation Levels between the Federated Server and Oracle Data
Sources.
Federated Server (DB2)   Oracle
CS                       Default
RR                       Transaction read-only
RS                       Transaction read-only
UR                       Same as cursor stability
How You Can Override Default Type Mappings and Create New Ones
As the preceding example indicates, the local type and remote type in a
default mapping are similar enough to ensure that when you query remote
columns for which the remote type is defined, all values that conform to both
types will be returned. But sometimes, you might require an alternative
mapping. Consider these scenarios:
Defining a Type Mapping That Applies to One or More Data Sources
Certain columns in three tables in an Oracle data source have a data type
DATE for time stamps. In a default mapping, this type points to the local DB2
type TIMESTAMP. So if you were to create nicknames for the three tables
without changing the default, TIMESTAMP would be defined locally for these
columns, and DB2 queries of the columns would yield time stamps. But
suppose that you want such queries to yield times only. You could then map
Oracle DATE to the DB2 type TIME, overriding the default. That way, when
you create the nicknames, TIME, not TIMESTAMP, would be defined locally
for the columns. As a result, when you query them, only the time portion of
the time stamps would be returned. To override the default type mapping,
you would use the CREATE TYPE MAPPING statement.
In the CREATE TYPE MAPPING statement, you can indicate whether the new
mapping that you want is to apply to a specific data source (for example, a
data source that a department in your organization uses) or to all data sources
of a specific type (for example, all Oracle data sources), or to all data sources
of a specific version of a type (for example, all Oracle 8.0.3 data sources).
Changing a Type Mapping for a Specific Table
You can change the local type in a type mapping for a specific table. For
example, Oracle data type NUMBER(32,3) maps by default to the DB2 data
type DOUBLE, a floating decimal data type. Suppose that in an Oracle table
for employee information, a column BONUS was defined with a data type of
NUMBER(32,3). Because of the mapping, a query of BONUS could return
values that look like this:
5.0000000000000E+002
1.0000000000000E+003
where +002 signifies that the decimal point should be moved two places to
the right, and +003 signifies that the decimal point should be moved three
places to the right.
So that queries of BONUS can return values that look like dollar amounts,
you could, for this particular table, remap NUMBER(32,3) to a DB2 DECIMAL
type with a precision and scale that reflect the format of actual bonuses. For
example, if you knew that the dollar portion of the bonuses would not exceed
To change the type mapping for a column of a specific table, use the ALTER
NICKNAME statement. With this statement, you can change the type defined
locally for a column of a table for which a nickname has been defined.
Large Object (LOB) Support
LOB Streaming
In LOB streaming, LOB data is retrieved in stages. DB2 uses LOB streaming
for data in result sets of queries that are completely pushed down. For
example, consider the following query:
LOB Materialization
In LOB materialization, the remote LOB data is retrieved by DB2 and stored
locally at the federated server. DB2 uses LOB materialization when:
v The LOB column cannot be deferred or streamed.
v A function must be applied to a LOB column locally, before the data is
transferred. This happens when DB2 compensates for functions not
available at a remote data source. For example, Microsoft SQL Server does
not provide a SUBSTR function for LOB columns. To compensate, DB2
materializes the LOB column locally and applies the DB2 SUBSTR function
to the retrieved LOB.
Applications can request LOB locators for LOBs stored in remote data sources.
A LOB locator is a 4-byte value stored in a host variable that a program can
use to refer to a LOB value (or LOB expression) held in the database system.
Using a LOB locator, a program can manipulate the LOB value as if the LOB
value was stored in a regular host variable. The difference in using the LOB
locator is that there is no need to transport the LOB value from the server to
the application (and possibly back again). For additional information about
LOB locators, see "Understanding Large Object Locators" on page 351.
DB2 can retrieve LOBs from remote data sources, store them at DB2, and then
issue a LOB locator against the stored LOB. LOB locators are released when:
Restrictions on LOBs
There are a few cases in which you can map a DB2 LOB data type to a
non-LOB data type at a data source. When you need to create a mapping
between a column with a DB2 LOB type and its counterpart column at a data
source, it is recommended that you use a LOB data type as a counterpart if at
all possible.
To create a mapping, use the CREATE TYPE MAPPING DDL statement. For
example:
CREATE TYPE MAPPING my_oracle_lob FROM sysibm.clob TO SERVER TYPE oracle TYPE long
where:
my_oracle_lob
       Is the name of the type mapping.
sysibm.clob
       Is the DB2 CLOB data type.
oracle
       Is the type of data source to which the mapping applies.
long
       Is the Oracle data type to which the DB2 CLOB type is mapped.
To record the identifier of the instance that serves as the basis of a
data source, the database administrator assigns that identifier as a value
to the server option node.
Several server options address a major area of interaction between DB2 and
data sources: optimization of queries. For example, just as you can use the
column option varchar_no_trailing_blanks to inform the DB2 optimizer of
specific data source VARCHAR columns that have no trailing blanks, so you
can use a server option, also called varchar_no_trailing_blanks, to inform
the optimizer of data sources whose VARCHAR columns are all free of
trailing blanks. For a summary of how such information helps the optimizer
to create an access strategy, see Table 27 on page 577.
In addition, you can set the server option plan_hints to a value that enables
DB2 to provide Oracle data sources with statement fragments, called plan
hints, that help Oracle optimizers do their job. Specifically, plan hints can help
an optimizer to decide matters such as which index to use in accessing a
table, and which table join sequence to use in retrieving data for a result set.
Typically, the database administrator sets server options for a federated
system. However, a programmer can make good use of those options that
help to optimize queries. For example, suppose that for data sources
ORACLE1 and ORACLE2, the plan_hints server option is set to its default, n
(no, do not furnish this data source with plan hints). Also suppose that you
write a distributed request for data from ORACLE1 and ORACLE2, and that
you expect that plan hints would help the optimizers at these data sources
improve their strategies for accessing this data. You could override the default
with a setting of y (yes, furnish the plan hints) while your application is
connected to the federated database. When the connection is completed, the
setting would automatically revert to n.
To enforce a server option setting for the duration of a connection to the
federated database, use the SET SERVER OPTION statement. To ensure that
the setting takes effect, you must specify the statement right after the
CONNECT statement. In addition, it is advisable to prepare the statement
dynamically.
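Following that advice, a session might look like this (a sketch; the database and server names are illustrative):

```sql
CONNECT TO feddb;
SET SERVER OPTION plan_hints TO 'Y' FOR SERVER oracle1;
-- Distributed requests issued during the rest of this connection send
-- plan hints to ORACLE1; the setting reverts when the connection ends.
```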
For documentation of the SET SERVER OPTION statement, see the SQL
Reference. For descriptions of all server options and their settings, see the
Administration Guide: Implementation.
The function mapping options that supply cost information to the
optimizer, with their default settings where shown, are: ios_per_invoc,
insts_per_invoc (default 450), ios_per_argbyte, insts_per_argbyte,
percent_argbytes (default 100), initial_ios, and initial_insts.
For more information about the DROP FUNCTION MAPPING statement, the
SYSCAT.FUNCTIONS and SYSSTAT.FUNCTIONS views, and the
SYSCAT.FUNCMAPOPTIONS view, see the SQL Reference.
v You cannot pass through to more than one data source at a time.
v Pass-through does not support stored procedure calls.
v Pass-through does not support the SELECT INTO statement.
Using Pass-Through with Oracle Data Sources
The following information applies to Oracle data sources:
v The following restriction applies when a remote client issues a SELECT
statement from a command line processor (CLP) in pass-through mode: If
the client code is a DB2 Application Development Client prior to DB2
Universal Database Version 5, the SELECT will elicit an SQLCODE -30090
with reason code 11. To avoid this error, remote clients must use a DB2
Application Development Client that is at Version 5 or greater.
v Any DDL statement issued against an Oracle server is performed at parse
time and is not subject to transaction semantics. The operation, when
complete, is automatically committed by Oracle. If a rollback occurs, the
DDL is not rolled back.
v When you issue a SELECT statement against RAW data types, use the
RAWTOHEX function to receive the hexadecimal values. When you
perform an INSERT into RAW data types, provide the hexadecimal
representation.
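For example, a pass-through session that reads and writes a RAW column might look like this (a sketch; the server, table, and column names are illustrative):

```sql
SET PASSTHRU oracle1;
SELECT RAWTOHEX(raw_col) FROM remote_schema.remote_tab;        -- RAW as hex
INSERT INTO remote_schema.remote_tab (raw_col)
       VALUES (HEXTORAW('CAFE'));                              -- hex input
SET PASSTHRU RESET;
```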
Trigraph
Definition
??(    Left bracket '['
??)    Right bracket ']'
??<    Left brace '{'
??>    Right brace '}'
??=    Number sign '#'
??/    Backslash '\'
??'    Caret '^'
??!    Vertical bar '|'
??-    Tilde '~'
The extern "C" prevents type decoration of the function name by the C++
compiler. Without this declaration, you have to include all the type decoration
for the function name when you call the stored procedure, or issue the
CREATE FUNCTION statement.
By default, the precompiler input and modified-source output extensions
are paired as follows: a .sqc file yields .c; on UNIX platforms, .sqC
yields .C; on other platforms, .sqx yields .cxx.
You can use the OUTPUT precompile option to override the name and path of
the output modified source file. If you use the TARGET C or TARGET
CPLUSPLUS precompile option, the input file does not need a particular
extension.
SQLDA (sqlda.h)
This file defines the SQL Descriptor Area (SQLDA) structure. The
SQLDA is used to pass data between an application and the database
manager.
SQLEAU (sqleau.h)
This file contains constant and structure definitions required for the
DB2 security audit APIs. If you use these APIs, you need to include
this file in your program. This file also contains constant and keyword
value definitions for fields in the audit trail record. These definitions
can be used by external or vendor audit trail extract programs.
SQLENV (sqlenv.h)
This file defines language-specific calls for the database environment
APIs, and the structures, constants, and return codes for those
interfaces.
SQLEXT (sqlext.h)
This file contains the function prototypes and constants of those
ODBC Level 1 and Level 2 APIs that are not part of the X/Open Call
Level Interface specification and is therefore used with the permission
of Microsoft Corporation.
SQLE819A (sqle819a.h)
If the code page of the database is 819 (ISO Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 500 (EBCDIC International) binary collation. This file is
used by the CREATE DATABASE API.
SQLE819B (sqle819b.h)
If the code page of the database is 819 (ISO Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 037 (EBCDIC US English) binary collation. This file is
used by the CREATE DATABASE API.
SQLE850A (sqle850a.h)
If the code page of the database is 850 (ASCII Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 500 (EBCDIC International) binary collation. This file is
used by the CREATE DATABASE API.
SQLE850B (sqle850b.h)
If the code page of the database is 850 (ASCII Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 037 (EBCDIC US English) binary collation. This file is
used by the CREATE DATABASE API.
SQLE932A (sqle932a.h)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 5035 (EBCDIC Japanese) binary collation. This file is used
by the CREATE DATABASE API.
SQLE932B (sqle932b.h)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the
host CCSID 5026 (EBCDIC Japanese) binary collation. This file is used
by the CREATE DATABASE API.
SQLJACB (sqljacb.h)
This file defines constants, structures and control blocks for the DB2
Connect interface.
SQLMON (sqlmon.h)
This file defines language-specific calls for the database system
monitor APIs, and the structures, constants, and return codes for those
interfaces.
SQLSTATE (sqlstate.h)
This file defines constants for the SQLSTATE field of the SQLCA
structure.
SQLSYSTM (sqlsystm.h)
This file contains the platform-specific definitions used by the
database manager APIs and data structures.
SQLUDF (sqludf.h)
This file defines constants and interface structures for writing User
Defined Functions (UDFs). For more information on this file, see The
UDF Include File: sqludf.h on page 419.
SQLUTIL (sqlutil.h)
This file defines the language-specific calls for the utility APIs, and the
structures, constants, and codes required for those interfaces.
SQLUV (sqluv.h)
This file defines structures, constants, and prototypes for the
asynchronous Read Log API, and APIs used by the table load and
unload vendors.
SQLUVEND (sqluvend.h)
This file defines structures, constants and prototypes for the APIs to
be used by the storage management vendors.
SQLXA (sqlxa.h)
This file contains function prototypes and constants used by
applications that use the X/Open XA Interface.
Some debuggers and other tools that relate source code to object code do not
always work well with the #line macro. If the tool you wish to use behaves
unexpectedly, use the NOLINEMACRO option (used with DB2 PREP) when
precompiling. This will prevent the #line macros from being generated.
An embedded SQL statement consists of the following parts:
Statement initializer
       EXEC SQL
Statement string
       Any valid SQL statement
Statement terminator
       semicolon (;)
For example:
EXEC SQL SELECT col INTO :hostvar FROM table;
v The SQL precompiler leaves CR/LFs and TABs in a quoted string as is.
v SQL comments are allowed on any line that is part of an embedded SQL
statement. These comments are not allowed in dynamically executed
statements. The format for an SQL comment is a double dash (--) followed
by a string of zero or more characters and terminated by a line end. Do not
place SQL comments after the SQL statement terminator as they will cause
compilation errors because they would appear to be part of the C/C++
language.
You can use comments in a static statement string wherever blanks are
allowed. Use the C/C++ comment delimiters /* */, or the SQL comment
symbol (--). //-style C++ comments are not permitted within static SQL
statements, but they may be used elsewhere in your program. The
precompiler removes comments before processing the SQL statement.
Any new line characters (such as carriage return and line feed) are not
included in the string or delimited identifier.
v Substitution of white space characters such as end-of-line and TAB
characters occur as follows:
When they occur outside quotation marks (but inside SQL statements),
end-of-lines and TABs are substituted by a single space.
When they occur inside quotation marks, the end-of-line characters
disappear, provided the string is continued properly for a C program.
TABs are not modified.
Note that the actual characters used for end-of-line and TAB vary from
platform to platform. For example, OS/2 uses Carriage Return/Line Feed
for end-of-line, whereas UNIX-based systems use just a Line Feed.
It is also possible to have several local host variables with the same name
as long as they all have the same type and size. To do this, declare the first
occurrence of the host variable to the precompiler between BEGIN
DECLARE SECTION and END DECLARE SECTION statements, and leave
subsequent declarations of the variable out of declare sections. The
following code shows an example of this:
void f3(int i)
{
EXEC SQL BEGIN DECLARE SECTION;
char host_var_3[25];
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT COL2 INTO :host_var_3 FROM TBL2;
}
void f4(int i)
{
char host_var_3[25];
EXEC SQL INSERT INTO TBL2 VALUES (:host_var_3);
}
Since f3 and f4 are in the same module, and since host_var_3 has the same
type and length in both functions, a single declaration to the precompiler is
sufficient to use it in both places.
Syntax for Numeric Host Variable Declaration
[Syntax diagram not reproduced. A declaration consists of an optional
storage class (auto, extern, static, or register), optional const and
volatile qualifiers, a numeric type of float, double, short, int, or
long long (int declares an INTEGER, SQLTYPE 496; long long declares a
BIGINT, SQLTYPE 492), and one or more variable names, each optionally
preceded by * or & and optionally followed by an initializer.]
Syntax for Character Host Variable Declaration (Fixed-Length and
Null-Terminated Forms)
[Syntax diagram not reproduced. A declaration consists of an optional
storage class (auto, extern, static, or register), optional const and
volatile qualifiers, the char type (optionally unsigned), and one or
more variable names, each optionally preceded by * or &; the
null-terminated form (C String) adds a [length] specification and an
optional initializer.]
Syntax for Character Host Variable Declaration (VARCHAR Structured Form)
[Syntax diagram not reproduced. The form is a struct, with an optional
tag, containing a short length field followed by a char (optionally
unsigned) data array of the given length; the usual storage classes and
qualifiers apply, and each variable may carry an initializer of the form
= {value-1, value-2}.]
Notes:
1
In form 2, length can be any valid constant expression. Its value after
evaluation determines if the host variable is VARCHAR (SQLTYPE 448)
or LONG VARCHAR (SQLTYPE 456).
Syntax for Graphic Declaration (Single-Graphic Form and Null-Terminated
Graphic Form)
[Syntax diagram not reproduced. A declaration consists of an optional
storage class (auto, extern, static, or register), optional const and
volatile qualifiers, the sqldbchar or wchar_t type, and one or more
variable names, each optionally preceded by * or &; the null-terminated
form (C String) adds a [length] specification and an optional
initializer.]
Syntax for Graphic Declaration (VARGRAPHIC Structured Form)
[Syntax diagram not reproduced. The form is a struct, with an optional
tag, containing a short length field followed by an sqldbchar or wchar_t
data array of the given length; the usual storage classes and qualifiers
apply, and each variable may carry an initializer of the form
= {value-1, value-2}.]
Notes:
1
length can be any valid constant expression. Its value after evaluation
determines if the host variable is VARGRAPHIC (SQLTYPE 464) or
LONG VARGRAPHIC (SQLTYPE 472). The value of length must be
greater than or equal to 1 and not greater than the maximum length of
LONG VARGRAPHIC which is 16350.
Syntax for Large Object (LOB) Host Variable Declaration
[Syntax diagram not reproduced. The form is SQL TYPE IS BLOB, CLOB, or
DBCLOB with a (length) specification, preceded by the usual storage
classes and qualifiers; each variable may be initialized with
={init-len, init-data} or with SQL_BLOB_INIT(init-data),
SQL_CLOB_INIT(init-data), or SQL_DBCLOB_INIT(init-data).]
CLOB Example:
Declaration:
volatile sql type is clob(125m) *var1, var2 = {10, "data5data5"};
DBCLOB Example:
Declaration:
SQL TYPE IS DBCLOB(30000) my_dbclob1;
Syntax for Large Object (LOB) Locator Host Variable Declaration
[Syntax diagram not reproduced. The form is SQL TYPE IS BLOB_LOCATOR,
CLOB_LOCATOR, or DBCLOB_LOCATOR, preceded by the usual storage classes
and qualifiers, with an optional = init-value initializer.]
Syntax for Large Object (LOB) File Reference Host Variable Declaration
[Syntax diagram not reproduced. The form is SQL TYPE IS BLOB_FILE,
CLOB_FILE, or DBCLOB_FILE, preceded by the usual storage classes and
qualifiers, with an optional = init-value initializer.]
Note:
v SQL TYPE IS, BLOB_FILE, CLOB_FILE, DBCLOB_FILE may be in mixed
case.
CLOB File Reference Example (other LOB file reference type declarations are
similar):
Declaration:
static volatile SQL TYPE IS BLOB_FILE my_file;
C Macro Expansion
The C/C++ precompiler cannot directly process any C macro used in a
declaration within a declare section. Instead, you must first preprocess the
source file with an external C preprocessor. To do this, specify the exact
command for invoking a C preprocessor to the precompiler through the
PREPROCESSOR option.
When you specify the PREPROCESSOR option, the precompiler first processes
all the SQL INCLUDE statements by incorporating the contents of all the files
referred to in the SQL INCLUDE statement into the source file. The
precompiler then invokes the external C preprocessor using the command you
specify with the modified source file as input. The preprocessed file, which
the precompiler always expects to have an extension of .i, is used as the
new source file for the rest of the precompiling process.
Any #line macro generated by the precompiler no longer references the
original source file, but instead references the preprocessed file. In order to
relate any compiler errors back to the original source file, retain comments in
the preprocessed file. This helps you to locate various sections of the original
source files, including the header files. The option to retain comments is
commonly available in C preprocessors, and you can include the option in the
command you specify through the PREPROCESSOR option. You should not
have the C preprocessor output any #line macros itself, as they may be
incorrectly mixed with ones generated by the precompiler.
Notes on Using Macro Expansion:
1. The command you specify through the PREPROCESSOR option should
include all the desired options but not the name of the input file. For
example, for IBM C on AIX you can use the option:
xlC -P -DMYMACRO=1
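For example, macros can appear anywhere within the declarations in a declare section; assuming a macro SIZE defined as 3 (a reconstruction, chosen to match the resolved output shown next), you could code:

```c
#define SIZE 3

EXEC SQL BEGIN DECLARE SECTION;
  char a[SIZE+1];
  char b[(SIZE+1)*3];
  struct
  {
    short length;
    char data[SIZE*6];
  } m;
  SQL TYPE IS BLOB(SIZE+1) x;
  SQL TYPE IS CLOB(SIZE*5) y;
  SQL TYPE IS DBCLOB(SIZE*2K) z;
EXEC SQL END DECLARE SECTION;
```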
The previous declarations resolve to the following after you use the
PREPROCESSOR option:
EXEC SQL BEGIN DECLARE SECTION;
char a[4];
char b[12];
struct
{
short length;
char data[18];
} m;
SQL TYPE IS BLOB(4) x;
SQL TYPE IS CLOB(15) y;
SQL TYPE IS DBCLOB(6144) z;
EXEC SQL END DECLARE SECTION;
The following example shows a host structure with a sub-structure:
struct
{
    short id;
    struct
    {
        short length;
        char data[10];
    } name;
    struct
    {
        short years;
        double salary;
    } info;
} staff_record;
The fields of a host structure can be any of the valid host variable types.
These include all numeric, character, and large object types. Nested host
structures are also supported up to 25 levels. In the example above, the field
info is a sub-structure, whereas the field name is not, as it represents a
VARCHAR field. The same principle applies to LONG VARCHAR,
VARGRAPHIC and LONG VARGRAPHIC. Pointer to host structure is also
supported.
There are two ways to reference the host variables grouped in a host structure
in an SQL statement:
1. The host structure name can be referenced in an SQL statement.
EXEC SQL SELECT id, name, years, salary
INTO :staff_record
FROM staff
WHERE id = 10;
References to field names must be fully qualified even if there are no other
host variables with the same name. Qualified sub-structures can also be
referenced. In the example above, :staff_record.info can be used to
replace :staff_record.info.years, :staff_record.info.salary.
Since a reference to a host structure (first example) is equivalent to a
comma-separated list of its fields, there are instances where this type of
reference may lead to an error. For example:
EXEC SQL DELETE FROM :staff_record;
Other uses of host structures, which may cause an SQL0087N error to occur,
include PREPARE, EXECUTE IMMEDIATE, CALL, indicator variables and
SQLDA references. Host structures with exactly one field are permitted in
such situations, as are references to individual fields (second example).
The following lists each host structure field with its corresponding indicator
variable in the table:
staff_record.id
ind_tab[0]
staff_record.name
ind_tab[1]
staff_record.info.years
ind_tab[2]
staff_record.info.salary
ind_tab[3]
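For example, with a four-element indicator table, the SELECT shown earlier could be coded as follows (a sketch using the staff_record structure from the example above):

```c
short ind_tab[4];  /* one indicator per field of staff_record */

EXEC SQL SELECT id, name, years, salary
    INTO :staff_record :ind_tab
    FROM staff
    WHERE id = 10;

/* After the fetch, ind_tab[3] < 0 means staff_record.info.salary is NULL. */
```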
The array will be expanded into its elements when test_record is referenced
in an SQL statement, making :test_record equivalent to :test_record.i[0],
:test_record.i[1].
[Table fragment: conversion rules for null-terminated strings, where k is
the length of the data and n is the declared length of the host variable.
The output cases are k > n, k = n, and k < n; the input cases are k >= n,
k + 1 = n, and k + 1 < n.]
When specified in any other SQL context, a host variable of SQLTYPE 460
with length n is treated as a VARCHAR data type with length n as defined
above. When specified in any other SQL context, a host variable of SQLTYPE
468 with length n is treated as a VARGRAPHIC data type with length n as
defined above.
v Only the asterisk may be used as an operator over a host variable name.
v The maximum length of a host variable name is not affected by the number
of asterisks specified, because asterisks are not considered part of the name.
v Whenever using a pointer variable in an SQL statement, you should leave
the optimization level precompile option (OPTLEVEL) at the default setting
of 0 (no optimization). This means that no SQLDA optimization will be
done by the database manager.
Data members are only directly accessible in SQL statements through the
implicit this pointer provided by the C++ compiler in class member functions.
You cannot explicitly qualify an object instance (such as SELECT name INTO
:my_obj.staff_name ...) in an SQL statement.
If you directly refer to class data members in SQL statements, the database
manager resolves the reference using the this pointer. For this reason, you
should leave the optimization level precompile option (OPTLEVEL) at the
default setting of 0 (no optimization). This means that no SQLDA
optimization will be done by the database manager. (This is true whenever
pointer host variables are involved in SQL statements.)
The following example shows how you might directly use class data members
which you have declared as host variables in an SQL statement.
class STAFF
{
    ...
public:
    ...
};
v Single-graphic form.
Single-graphic host variables have an SQLTYPE of 468/469 that is
equivalent to GRAPHIC(1) SQL data type. (See Syntax for Graphic
Declaration (Single-Graphic Form and Null-Terminated Graphic Form) on
page 606.)
v Null-terminated graphic form.
Null-terminated refers to the situation where all the bytes of the last
character of the graphic string contain binary zeros ('\0's). They have an
SQLTYPE of 400/401. (See Syntax for Graphic Declaration (Single-Graphic
Form and Null-Terminated Graphic Form) on page 606.)
v VARGRAPHIC structured form.
VARGRAPHIC structured host variables have an SQLTYPE of 464/465 if
their length is between 1 and 16 336 characters. They have an SQLTYPE of
472/473 if their length is between 16 337 and 16 350 characters. (See
Syntax for Graphic Declaration (VARGRAPHIC Structured Form) on page 607.)
Multi-byte Character Encoding in C and C++
Some character encoding schemes, particularly those from east Asian
countries, require multiple bytes to represent a character. This external
representation of
data is called the multi-byte character code representation of a character and
includes double-byte characters (characters represented by two bytes). Graphic
data in DB2 consists of double-byte characters.
To manipulate character strings with double-byte characters, it may be
convenient for an application to use an internal representation of data. This
internal representation is called the wide-character code representation of the
double-byte characters and is the format customarily used in the wchar_t
C/C++ data type. Subroutines that conform to ANSI C and X/OPEN
Portability Guide 4 (XPG4) are available to process wide-character data and to
convert data in wide-character format to and from multi-byte format.
Note that although an application can process character data in either
multi-byte format or wide-character format, interaction with the database
manager is done with DBCS (multi-byte) character codes only. That is, data is
stored in and retrieved from GRAPHIC columns in DBCS format. The
WCHARTYPE precompiler option is provided to allow application data in
wide-character format to be converted to/from multi-byte format when it is
exchanged with the database engine.
Selecting the wchar_t or sqldbchar Data Type in C and C++
While the size and encoding of DB2 graphic data is constant from one
platform to another for a particular code page, the size and internal format of
the ANSI C or C++ wchar_t data type depends on which compiler you use
and which platform you are on. The sqldbchar data type, however, is defined
by DB2 to be two bytes in size, and is intended to be a portable way of
622
manipulating DBCS and UCS-2 data in the same format in which it is stored
in the database. For more information on UCS-2 data, see Japanese and
Traditional Chinese EUC and UCS-2 Code Set Considerations on page 521
and refer to the Administration Guide.
You can define all DB2 C graphic host variable types using either wchar_t or
sqldbchar. You must use wchar_t if you build your application using the
WCHARTYPE CONVERT precompile option (as described in The
WCHARTYPE Precompiler Option in C and C++).
Note: When specifying the WCHARTYPE CONVERT option on a Windows
platform, you should note that wchar_t on Windows platforms is
Unicode. Therefore, if your C/C++ compiler's wchar_t is not Unicode,
the wcstombs() function call may fail with SQLCODE -1421
(SQLSTATE=22504). If this happens, you can specify the WCHARTYPE
NOCONVERT option, and explicitly call the wcstombs() and
mbstowcs() functions from within your program.
If you build your application with the WCHARTYPE NOCONVERT
precompile option, you should use sqldbchar for maximum portability
between different DB2 client and server platforms. You may use wchar_t with
WCHARTYPE NOCONVERT, but only on platforms where wchar_t is defined
as two bytes in length.
If you incorrectly use either wchar_t or sqldbchar in host variable
declarations, you will receive an SQLCODE 15 (no SQLSTATE) at precompile
time.
The WCHARTYPE Precompiler Option in C and C++
Using the WCHARTYPE precompiler option, you can specify which graphic
character format you want to use in your C/C++ application. This option
provides you with the flexibility to choose between having your graphic data
in multi-byte format or in wide-character format. There are two possible
values for the WCHARTYPE option:
CONVERT
If you select the WCHARTYPE CONVERT option, character codes are
converted between the graphic host variable and the database
manager. For graphic input host variables, the character code
conversion from wide-character format to multi-byte DBCS character
format is performed before the data is sent to the database manager,
using the ANSI C function wcstombs(). For graphic output host
variables, the character code conversion from multi-byte DBCS
character format to wide-character format is performed before the
data received from the database manager is stored in the host
variable, using the ANSI C function mbstowcs().
v CONVERT option used.
DB2 assumes that graphic data is encoded using the wide character format.
This data will be converted to UCS-2 and then sent to the database
server. These conversions will impact performance.
v NOCONVERT option used.
The graphic data is assumed by DB2 to be encoded using UCS-2 and is
tagged with the UCS-2 code page, and no conversions are done. DB2
assumes that the graphic host variable is being used simply as a bucket.
When the NOCONVERT option is chosen, graphic data retrieved from the
database server is passed to the application encoded using UCS-2. Any
conversions from the application code page to UCS-2 and from UCS-2 to
the application code page are your responsibility. Data tagged as UCS-2 is
sent to the database server without any conversions or alterations.
To minimize conversions you can either use the NOCONVERT option and
handle the conversions in your application, or not use GRAPHIC columns.
For the client environments where wchar_t encoding is in two-byte Unicode,
for example Windows NT or AIX version 4.3 and higher, you can use the
NOCONVERT option and work directly with UCS-2. In such cases, your
application should handle the difference between big-endian and little-endian
architectures. With the NOCONVERT option, DB2 Universal Database uses
sqldbchar, which is always two-byte big-endian.
Do not assign IBM-eucJP/IBM-eucTW CS0 (7-bit ASCII) and IBM-eucJP CS2
(Katakana) data to graphic host variables either after conversion to UCS-2 (if
NOCONVERT is specified) or by conversion to the wide character format (if
CONVERT is specified). This is because characters in both of these EUC code
sets become single-byte when converted from UCS-2 to PC DBCS.
In general, although eucJP and eucTW store GRAPHIC data as UCS-2, the
GRAPHIC data in these databases is still non-ASCII eucJP or eucTW data.
Specifically, any space padded to such GRAPHIC data is DBCS space (also
known as ideographic space in UCS-2, U+3000). For a UCS-2 database,
however, GRAPHIC data can contain any UCS-2 character, and space padding
is done with UCS-2 space, U+0020. Keep this difference in mind when you
code applications to retrieve UCS-2 data from a UCS-2 database versus UCS-2
data from eucJP and eucTW databases.
For general EUC application development guidelines, see Japanese and
Traditional Chinese EUC and UCS-2 Code Set Considerations on page 521.
Table 30 shows the C/C++ equivalent of each column type. When the
precompiler finds a host variable declaration, it determines the appropriate
SQL type value. The database manager uses this value to convert the data
exchanged between the application and itself.
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Table 30. SQL Data Types Mapped to C/C++ Declarations
SMALLINT (500 or 501)
       short, short int, sqlint16
INTEGER (496 or 497)
       long, long int, sqlint32 (see note 2)
BIGINT (492 or 493)
       long long, long, __int64, sqlint64 (see note 3)
REAL (480 or 481) (see note 4)
       float
DOUBLE (480 or 481) (see note 5)
       double
DECIMAL(p,s) (484 or 485)
       Packed decimal. (Consider using the CHAR and DECIMAL functions
       to manipulate packed decimal fields as character data.)
CHAR(1) (452 or 453)
       char (single character)
CHAR(n) (452 or 453)
       No exact equivalent; use char[n+1] where n is large enough to
       hold the data.
VARCHAR(n) (448 or 449)
       struct tag {
           short int;
           char[n]
       }
       1<=n<=32 672
       Alternately use char[n+1] where n is large enough to hold the
       data: a null-terminated variable-length character string.
       Note: Assigned an SQL type of 460/461. 1<=n<=32 672
LONG VARCHAR (456 or 457)
       struct tag {
           short int;
           char[n]
       }
       32 673<=n<=32 700
CLOB(n) (408 or 409)
       sql type is clob(n), sql type is clob_locator, sql type is
       clob_file
BLOB(n) (404 or 405)
       sql type is blob(n), sql type is blob_locator, sql type is
       blob_file
DATE (384 or 385)
       char[11]
TIME (388 or 389)
       char[9]
TIMESTAMP (392 or 393)
       char[27]
Note: The following data types are only available in the DBCS or EUC environment when precompiled with the
WCHARTYPE NOCONVERT option.
GRAPHIC(1) (468 or 469)
       sqldbchar
GRAPHIC(n) (468 or 469)
       sqldbchar[n+1] where n is large enough to hold the data
VARGRAPHIC(n) (464 or 465)
       struct tag {
           short int;
           sqldbchar[n]
       }
       1<=n<=16 336
LONG VARGRAPHIC (472 or 473)
       struct tag {
           short int;
           sqldbchar[n]
       }
       16 337<=n<=16 350
Note: The following data types are only available in the DBCS or EUC environment when precompiled with the
WCHARTYPE CONVERT option.
GRAPHIC(1) (468 or 469)
       wchar_t
GRAPHIC(n) (468 or 469)
       wchar_t[n+1] where n is large enough to hold the data
VARGRAPHIC(n) (464 or 465)
       struct tag {
           short int;
           wchar_t [n]
       }
       1<=n<=16 336
       Alternately use wchar_t[n+1] where n is large enough to hold the
       data: a null-terminated variable-length double-byte character
       string. Note: Assigned an SQL type of 400/401. 1<=n<=16 336
LONG VARGRAPHIC (472 or 473)
       struct tag {
           short int;
           wchar_t [n]
       }
       16 337<=n<=16 350
Note: The following data types are only available in the DBCS or EUC environment.
DBCLOB(n) (412 or 413)
       sql type is dbclob(n), sql type is dbclob_locator, sql type is
       dbclob_file
Notes:
1. The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that would appear in the SQLTYPE field of the
SQLDA for these data types.
2. For platform compatibility, use sqlint32. On 64-bit UNIX platforms, long is a 64 bit integer. On 64-bit Windows
operating systems and 32-bit UNIX platforms long is a 32 bit integer.
3. For platform compatibility, use sqlint64. The DB2 Universal Database sqlsystm.h header file will type define
sqlint64 as __int64 on the Windows NT platform when using the Microsoft compiler, long long on 32-bit UNIX
platforms, and long on 64 bit UNIX platforms.
4. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
5. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
6. This is not a column type but a host variable type.
The following is a sample SQL declare section with host variables declared for
supported SQL data types.
EXEC SQL BEGIN DECLARE SECTION;
    ...
    short age = 26;                   /* SQL type 500 */
    short year;                       /* SQL type 500 */
    sqlint32 salary;                  /* SQL type 496 */
    sqlint32 deptno;                  /* SQL type 496 */
    float bonus;                      /* SQL type 480 */
    double wage;                      /* SQL type 480 */
    char mi;                          /* SQL type 452 */
    char name[6];                     /* SQL type 460 */
    struct
    {
        short len;
        char data[24];
    } address;                        /* SQL type 448 */
    struct
    {
        short len;
        char data[32695];
    } voice;                          /* SQL type 456 */
    sql type is clob(1m)
        chapter;                      /* SQL type 408 */
    sql type is clob_locator
        chapter_locator;              /* SQL type 964 */
    sql type is clob_file
        chapter_file_ref;             /* SQL type 920 */
    sql type is blob(1m)
        video;                        /* SQL type 404 */
    sql type is blob_locator
        video_locator;                /* SQL type 960 */
    sql type is blob_file
        video_file_ref;               /* SQL type 916 */
    sql type is dbclob(1m)
        tokyo_phone_dir;              /* SQL type 412 */
    sql type is dbclob_locator
        tokyo_phone_dir_lctr;         /* SQL type 968 */
    sql type is dbclob_file
        tokyo_phone_dir_flref;        /* SQL type 924 */
    struct
    {
        short len;
        sqldbchar data[100];
    } vargraphic1;                    /* SQL type 464 */
                                      /* Precompiled with
                                         WCHARTYPE NOCONVERT option */
    struct
    {
        short len;
        wchar_t data[100];
    } vargraphic2;                    /* SQL type 464 */
                                      /* Precompiled with
                                         WCHARTYPE CONVERT option */
    struct
    {
        short len;
        sqldbchar data[10000];
    } long_vargraphic1;               /* SQL type 472 */
                                      /* Precompiled with
                                         WCHARTYPE NOCONVERT option */
    struct
    {
        short len;
        wchar_t data[10000];
    } long_vargraphic2;               /* SQL type 472 */
                                      /* Precompiled with
                                         WCHARTYPE CONVERT option */
    sqldbchar graphic1[100];          /* SQL type 468 */
                                      /* Precompiled with
                                         WCHARTYPE NOCONVERT option */
    wchar_t graphic2[100];            /* SQL type 468 */
                                      /* Precompiled with
                                         WCHARTYPE CONVERT option */
    char date[11];                    /* SQL type 384 */
    char time[9];                     /* SQL type 388 */
    char timestamp[27];               /* SQL type 392 */
    short wage_ind;                   /* Null indicator */
    ...
EXEC SQL END DECLARE SECTION;
The following are additional rules for supported C/C++ data types:
SQL Column Type                          C/C++ Data Type
SMALLINT (500 or 501)                    sqlint16
INTEGER (496 or 497)                     sqlint32
BIGINT (492 or 493)                      sqlint64
REAL (480 or 481)                        float
DOUBLE (480 or 481)                      double
DECIMAL(p,s) (484 or 485)                Not supported.
CHAR(n) (452 or 453)                     char
VARCHAR(n) (448 or 449) (460 or 461)     struct { sqluint16 length; char[n] }
                                         Not null-terminated varying length character
                                         string, 1<=n<=32 672
LONG VARCHAR (456 or 457)                struct { sqluint16 length; char[n] }
                                         32 673<=n<=32 700
CLOB(n) (408 or 409)                     struct { sqluint32 length; char data[n]; }
BLOB(n) (404 or 405)                     struct { sqluint32 length; char data[n]; }
DATE (384 or 385)                        char[11]
TIME (388 or 389)                        char[9]
TIMESTAMP (392 or 393)                   char[27]
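The varying-length forms above carry an explicit length prefix rather than a null terminator. As a hedged sketch of what that layout means (the little-endian byte order and ASCII encoding here are assumptions for illustration, not part of the manual):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class VarcharStructDemo {
    // Decode a struct { sqluint16 length; char data[n]; } style buffer.
    public static String decode(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        int len = buf.getShort() & 0xFFFF; // sqluint16 length prefix
        byte[] data = new byte[len];
        buf.get(data);                     // data is NOT null-terminated
        return new String(data, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        // 2-byte little-endian length (5), then 5 data bytes, then slack space
        byte[] raw = { 5, 0, 'H', 'e', 'l', 'l', 'o', 0, 0 };
        System.out.println(decode(raw)); // Hello
    }
}
```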
Note: The following data types are only available in the DBCS or EUC environment when precompiled with the
WCHARTYPE NOCONVERT option.
GRAPHIC(n) (468 or 469)                  sqldbchar
VARGRAPHIC(n) (464 or 465) (400 or 401)  struct { sqluint16 length; sqldbchar[n] }
LONG VARGRAPHIC (472 or 473)             struct { sqluint16 length; sqldbchar[n] }
                                         16 337<=n<=16 350
DBCLOB(n) (412 or 413)                   struct { sqluint32 length; sqldbchar data[n]; }
.ser
SQLJ serialized profile files. You create packages in the database for
each profile file with the db2profc utility.
For an example of how to compile and run an SQLJ program, see Compiling
and Running SQLJ Programs on page 660.
Java Packages
To use the class libraries included with DB2 in your own applications, you
must include the appropriate import package statements at the top of your
source files. You can use the following packages in your Java applications:
java.sql.*
The JDBC API included in your JDK. You must import this package in
every JDBC and SQLJ program.
sqlj.runtime.*
SQLJ support included with every DB2 client. You must import this
package in every SQLJ program.
sqlj.runtime.ref.*
SQLJ support included with every DB2 client. You must import this
package in every SQLJ program.
Table 32 on page 640 shows the Java equivalent of each SQL data type, based
on the JDBC specification for data type mappings. Note that some mappings
depend on whether you use the JDBC version 1.2 or 2.0 driver. The JDBC
driver converts the data exchanged between the application and the database
using the following mapping schema. Use these mappings in your Java
applications and your PARAMETER STYLE JAVA stored procedures and
UDFs. For information on data type mappings for PARAMETER STYLE
DB2GENERAL stored procedures and UDFs, see Supported SQL Data Types
on page 770.
Note: There is no host variable support for the DATALINK data type in any
of the programming languages supported by DB2.
Table 32. SQL Data Types Mapped to Java Declarations
SQL Column Type                  Java Data Type
SMALLINT (500 or 501)            short
INTEGER (496 or 497)             int
BIGINT (492 or 493)              long
REAL (480 or 481)                float
DOUBLE (480 or 481)              double
DECIMAL(p,s) (484 or 485)        java.math.BigDecimal (packed decimal)
CHAR(n) (452 or 453)             String
VARCHAR(n) (448 or 449)          String
LONG VARCHAR (456 or 457)        String
CHAR(n) FOR BIT DATA             byte[]
VARCHAR(n) FOR BIT DATA          byte[]
LONG VARCHAR FOR BIT DATA        byte[]
BLOB(n) (404 or 405)             java.sql.Blob
CLOB(n) (408 or 409)             java.sql.Clob
DBCLOB(n) (412 or 413)           java.sql.Clob
DATE (384 or 385)                java.sql.Date
TIME (388 or 389)                java.sql.Time
TIMESTAMP (392 or 393)           java.sql.Timestamp
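The DECIMAL-to-java.math.BigDecimal row matters in practice: BigDecimal preserves the packed-decimal scale exactly, where double would not. A small illustration (the values are invented; a real application would obtain the object from ResultSet.getBigDecimal()):

```java
import java.math.BigDecimal;

public class DecimalMappingDemo {
    public static void main(String[] args) {
        // A DECIMAL(9,2) column value, as a JDBC driver would surface it.
        BigDecimal salary = new BigDecimal("52750.50");
        System.out.println(salary.scale());                     // 2
        System.out.println(salary.add(new BigDecimal("0.10"))); // 52750.60
    }
}
```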
Field        SQLException method
SQLCODE      SQLException.getErrorCode()
SQLMSG       SQLException.getMessage()
SQLSTATE     SQLException.getSQLState()
For example:
int sqlCode = 0;           // Variable to hold SQLCODE
String sqlState = "00000"; // Variable to hold SQLSTATE
try
{
  // JDBC statements may throw SQLExceptions
  stmt.executeQuery("Your JDBC statement here");
}
catch (SQLException e)
{
  // Retrieve the DB2 error information from the exception
  sqlCode = e.getErrorCode();  // SQLCODE
  sqlState = e.getSQLState();  // SQLSTATE
}
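Outside a live connection, these accessors can be exercised directly; a minimal sketch (the message text, SQLSTATE, and SQLCODE values below are made up for illustration):

```java
import java.sql.SQLException;

public class SqlErrorDemo {
    public static void main(String[] args) {
        // Construct an SQLException by hand; a real one would come from the driver.
        SQLException e =
            new SQLException("Hypothetical DB2 error message", "42704", -204);
        System.out.println(e.getErrorCode()); // SQLCODE:  -204
        System.out.println(e.getSQLState());  // SQLSTATE: 42704
        System.out.println(e.getMessage());
    }
}
```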
You can also install run-time call tracing capability into SQLJ programs. The
utility operates on the profiles associated with a program. Suppose a program
uses a profile called App_SJProfile0. To install call tracing into the program,
use the command:
profdb App_SJProfile0.ser
The profdb utility uses the Java Virtual Machine to run the main() method of
class sqlj.runtime.profile.util.AuditorInstaller. For more details on
usage and options for the AuditorInstaller class, visit the DB2 Java Web site
at http://www.ibm.com/software/data/db2/java.
SQLJ applications use this JDBC support, and in addition require the SQLJ
run-time classes to authenticate and execute any SQL packages that were
bound to the database at the precompiling and binding stage.
[Figure: an SQLJ application calls the SQLJ run-time classes, which use JDBC through the DB2 client to reach a remote database; a plain Java application calls JDBC directly.]
Applet Support in Java: Figure 22 on page 644 illustrates how the JDBC
applet driver, also known as the net driver, works. The driver consists of a
JDBC client and a JDBC server, db2jd. The JDBC client driver is loaded on the
Web browser along with the applet. When the applet requests a connection to
a DB2 database, the client opens a TCP/IP socket to the JDBC server on the
machine where the Web server is running. After a connection is set up, the
client sends each of the subsequent database access requests from the applet
to the JDBC server though the TCP/IP connection. The JDBC server then
makes corresponding CLI (ODBC) calls to perform the task. Upon completion,
the JDBC server sends the results back to the client through the connection.
SQLJ applets add the SQLJ client driver on top of the JDBC client driver, but
otherwise work the same as JDBC applets.
For information on starting the DB2 JDBC server, refer to the db2jstrt
command in the Command Reference.
[Figure: a Web browser running an SQLJ applet (SQLJ run-time classes and JDBC client) communicates over HTTP and a TCP/IP socket with the JDBC server, which uses CLI to reach local and remote DB2 databases.]
Figure 22. DB2 Java Applet Implementation
JDBC Programming
Both applications and applets typically perform the following tasks:
1. Import the appropriate Java packages and classes (java.sql.*)
2. Load the appropriate JDBC driver (COM.ibm.db2.jdbc.app.DB2Driver for
applications; COM.ibm.db2.jdbc.net.DB2Driver for applets)
3. Connect to the database, specifying the location with a URL as defined in
the JDBC specification and using the db2 subprotocol. Applets require you
to provide the user ID, password, host name, and the port number for the
applet server. Applications implicitly use the default value for user ID and
password from the DB2 client catalog, unless you explicitly specify
alternate values.
4. Pass SQL statements to the database
5. Receive the results
6. Close the connection
After coding your program, compile it as you would any other Java program.
You don't need to perform any special precompile or bind steps.
1. Import the JDBC package. Every JDBC and SQLJ program must import
the JDBC package.
2. Declare a Connection object. The Connection object establishes and
manages the database connection.
3. Set the database URL variable. The DB2 application driver accepts URLs that
take the form jdbc:db2:<database-name>.
4. Connect to database. The DriverManager.getConnection() method is most
often used with the following parameters:
getConnection(String url)
Establish a connection to the database specified by url with the
default user ID and password.
getConnection(String url, String userid, String password)
Establish a connection to the database specified by url with the
values for user ID and password specified by userid and password
respectively.
5. Create a Statement object. Statement objects send SQL statements to the
database.
6. Execute an SQL SELECT statement. Use the executeQuery() method for
SQL statements, like SELECT statements, that return a single result set.
Assign the result to a ResultSet object.
7. Retrieve rows from the ResultSet. The ResultSet object allows you to treat
a result set like a cursor in host language embedded SQL. The
ResultSet.next() method moves the cursor to the next row and returns
false when no rows remain in the result set.
Restrictions on result set processing depend on the level of the JDBC API
that is enabled through the database manager configuration parameters.
v The JDBC 2.0 API allows you to scroll backwards and forwards through
a result set.
v The JDBC 1.2 API restricts you to scrolling forward through a result set
with the ResultSet.next() method.
8. Return the value of a column. The ResultSet.getString(n) method returns
the value of the nth column as a String object.
9. Execute an SQL UPDATE statement. Use the executeUpdate() method for
SQL UPDATE statements. The method returns the number of rows
updated as an int value.
rs.close();
stmt.close();
// update the database
System.out.println("Update the database... ");
stmt = con.createStatement();
int rowsUpdated = stmt.executeUpdate("UPDATE employee
SET firstnme = 'SHILI' where empno = '000010'");   // 9
System.out.print("Changed "+rowsUpdated);
if (1 == rowsUpdated)
System.out.println(" row.");
else
System.out.println(" rows.");
stmt.close();
con.close();
} catch( Exception e ) {
System.out.println(e);
}
To run your applet, you need only a Java-enabled Web browser on the client
machine. When you load your HTML page, the applet tag instructs your
browser to download the Java applet and the db2java.zip class library, which
includes the DB2 JDBC driver implemented by the COM.ibm.db2.jdbc.net
class. When your applet calls the JDBC API to connect to DB2, the JDBC
driver establishes separate communications with the DB2 database through
the JDBC applet server running on the Web server.
Note: To ensure that the Web browser downloads db2java.zip from the server,
ensure that the CLASSPATH environment variable on the client does
not include db2java.zip. Your applet may not function correctly if the
client uses a local version of db2java.zip.
For information on building and distributing Java applets, refer to the
Application Building Guide.
It is essential that the db2java.zip file used by the Java applet be at the same
FixPak level as the JDBC applet server. Under normal circumstances,
db2java.zip is loaded from the Web Server where the JDBC applet server is
running, as shown in Figure 22 on page 644. This ensures a match. If,
however, your configuration has the Java applet loading db2java.zip from a
different location, a mismatch can occur. Prior to DB2 Version 7.1 FixPak 2,
this could lead to unexpected failures. As of DB2 Version 7.1 FixPak 2,
matching FixPak levels between the two files is strictly enforced at connection
time. If a mismatch is detected, the connection is rejected, and the client
receives one of the following exceptions:
v If db2java.zip is at DB2 Version 7.1 FixPak 2 or later:
COM.ibm.db2.jdbc.DB2Exception: [IBM][JDBC Driver]
CLI0621E Unsupported JDBC server configuration.
If a mismatch occurs, the JDBC applet server logs one of the following
messages in the jdbcerr.log file:
v If the JDBC applet server is at DB2 Version 7.1 FixPak 2 or later:
JDBC 2.0
JDBC 2.0 is the latest version of JDBC from Sun. This version of JDBC has two
defined parts: the core API, and the Optional Package API. For information
on the JDBC specification, see the DB2 Universal Database Java Web site at
http://www.ibm.com/software/data/db2/java/.
For information on installing the JDBC 2.0 drivers for your operating system,
refer to the Application Building Guide.
JDBC 2.0 Core API Support
The DB2 JDBC 2.0 driver supports the JDBC 2.0 core API; however, it does not
support all of the features defined in the specification. The DB2 JDBC 2.0
driver supports the following features of the JDBC 2.0 core API:
The DB2 JDBC 2.0 driver does not support the following features:
v Updatable Scrollable ResultSet
v New SQL types (Array, Ref, Distinct, Java Object, Struct)
v Customized SQL type mapping
v java.sql.Blob or java.sql.Clob in Java stored procedures, UDFs or methods.
v Scrollable sensitive ResultSets (scroll type of
ResultSet.TYPE_SCROLL_SENSITIVE)
v ResultSet.setFetchDirection(int) (ignored, does not throw Exception)
v ResultSet.setFetchSize(int) (ignored, does not throw Exception)
v Statement.setFetchSize(int) (ignored, does not throw Exception)
v ResultSet.getTime(int, Calendar)
v ResultSet.getTimestamp(int, Calendar)
v CallableStatement.getClob()
v CallableStatement.getBlob()
javax.naming.Context
This interface is implemented by COM.ibm.db2.jndi.DB2Context, which
handles the storage and retrieval of DataSource objects. In order to
support persistent associations of logical data source names to
physical database information, such as database names, these
associations are saved in a file named .db2.jndi. For an application,
the file resides (or is created if none exists) in the directory specified
by the USER.HOME environment variable. For an applet, you must
create this file in the root directory of the web server to facilitate the
lookup() operation. Applets do not support the bind(), rebind(),
unbind() and rename() methods of this class. Only applications can
bind DataSource objects to JNDI.
javax.sql.DataSource
This interface is implemented by COM.ibm.db2.jdbc.DB2DataSource.
You can save an object of this class in any implementation of
javax.naming.Context. This class also makes use of connection
pooling support.
DB2DataSource supports the following methods:
v public void setDatabaseName( String databaseName )
v public void setServerName( String serverName )
v public void setPortNumber( int portNumber )
javax.naming.InitialContextFactory
This interface is implemented by
COM.ibm.db2.jndi.DB2InitialContextFactory, which creates an
instance of DB2Context. Applications automatically set the value of the
JAVA.NAMING.FACTORY.INITIAL environment variable to
COM.ibm.db2.jndi.DB2InitialContextFactory. To use this class in an
applet, call InitialContext() using the following syntax:
Hashtable env = new Hashtable( 5 );
env.put( "java.naming.factory.initial",
"COM.ibm.db2.jndi.DB2InitialContextFactory" );
Context ctx = new InitialContext( env );
Java Transaction APIs (JTA): DB2 supports the Java Transaction APIs (JTA)
through the DB2 JDBC application driver. DB2 does not provide JTA support
with the DB2 JDBC net driver.
javax.sql.XAConnection
This interface is implemented by COM.ibm.db2.jdbc.DB2XAConnection.
javax.sql.XADataSource
This interface is implemented by COM.ibm.db2.jdbc.DB2XADataSource,
and is a factory of COM.ibm.db2.jdbc.DB2PooledConnection objects.
javax.transaction.xa.XAResource
This interface is implemented by COM.ibm.db2.jdbc.app.DB2XAResource.
javax.transaction.xa.Xid
This interface is implemented by COM.ibm.db2.jdbc.DB2Xid.
Note: You cannot use the DB2 JDBC 2.0 driver support for LOB and graphic
types in stored procedures or UDFs. To use LOB or graphic types in
stored procedures or UDFs, you must use the JDBC 1.2 driver support.
SQLJ Programming
DB2 SQLJ support is based on the SQLJ ANSI standard. Refer to the DB2 Java
Web site at http://www.ibm.com/software/data/db2/java for a pointer to the
ANSI Web site and other SQLJ resources. This chapter contains an overview
of SQLJ programming and information that is specific to DB2 SQLJ support.
The following kinds of SQL constructs may appear in SQLJ programs:
v Queries; for example, SELECT statements and expressions.
v SQL Data Change Statements (DML); for example, INSERT, UPDATE,
DELETE.
v Data Statements; for example, FETCH, SELECT..INTO.
v Transaction Control; for example, COMMIT, ROLLBACK, etc.
For more information on the db2profc and db2profp commands, refer to the
Command Reference. For more information on the SQLJ run-time classes, refer
to the DB2 Java Web site at http://www.ibm.com/software/data/db2/java.
DB2 SQLJ Restrictions
When you create DB2 applications with SQLJ, you should be aware of the
following restrictions:
In an SQLJ executable clause, the tokens that appear inside the braces are SQL
tokens, except for the host variables. All host variables are distinguished by
the colon character so the translator can identify them. SQL tokens never
occur outside the braces of an SQLJ executable clause. For example, the
following Java method inserts its arguments into an SQL table. The method
body consists of an SQLJ executable clause containing the host variables x, y,
and z:
void m (int x, String y, float z) throws SQLException
{
#sql { INSERT INTO TAB1 VALUES (:x, :y, :z) };
}
In general, SQL tokens are case insensitive (except for identifiers delimited by
double quotation marks), and can be written in upper, lower, or mixed case.
Java tokens, however, are case sensitive. For clarity in examples, case
insensitive SQL tokens are uppercase, and Java tokens are lowercase or mixed
case. Throughout this chapter, the lowercase null is used to represent the Java
null value, and the uppercase NULL to represent the SQL null value.
You can then use the translated and compiled iterator in a different source
file. To use the iterator:
1. Declare an instance of the generated iterator class
2. Assign the SELECT statement for the positioned UPDATE or DELETE to
the iterator instance
3. Execute positioned UPDATE or DELETE statements using the iterator
To use DelByName for a positioned DELETE in file2.sqlj, execute
statements like those in Deleting Rows Using a Positioned Iterator.
{
  DelByName deliter;
  // The table name and predicate below are illustrative only
  #sql deliter = { SELECT name FROM emp WHERE years > 30 };   /* 1 */
  while (deliter.next())                                      /* 2 */
  {
    #sql { DELETE WHERE CURRENT OF :deliter };                /* 3 */
  }
}
Notes:
1. This SQLJ clause executes the SELECT statement, constructs an iterator
object that contains the result table for the SELECT statement, and assigns
the iterator object to variable deliter.
2. This statement positions the iterator to the next row to be deleted.
3. This SQLJ clause performs the positioned DELETE.
class App
{
/**********************
** Register Driver **
**********************/
static
{
try
{
Class.forName("COM.ibm.db2.jdbc.app.DB2Driver").newInstance();
}
catch (Exception e)
{
e.printStackTrace();
}
}
/********************
**      Main       **
********************/
public static void main(String argv[])
{
try
{
App_Cursor1 cursor1;
App_Cursor2 cursor2;
String str1 = null;
String str2 = null;
long   count1;
// URL is jdbc:db2:dbname
String url = "jdbc:db2:sample";
DefaultContext ctx = DefaultContext.getDefaultContext();
if (ctx == null)
{
try
{
// connect with default id/password
Connection con = DriverManager.getConnection(url);
con.setAutoCommit(false);
ctx = new DefaultContext(con);
}
catch (SQLException e)
{
System.out.println("Error: could not get a default context");
System.err.println(e) ;
System.exit(1);
}
DefaultContext.setDefaultContext(ctx);
}
Chapter 21. Programming in Java
}
cursor1.close();   // 9
}
cursor2.close();   // 9
catch( Exception e )
{
e.printStackTrace();
}
SELECT COL1, COL2 FROM TABLE1 WHERE :x > COL3
All host variables specified in compound SQL are input host variables by
default. You have to specify the parameter mode identifier OUT or INOUT
before the host variable in order to mark it as an output host variable. For
example:
#sql {begin compound atomic static
select count(*) into :OUT count1 from employee;
end compound}
Stored procedures may have IN, OUT, or INOUT parameters. In the above
case, the value of host variable myarg is changed by the execution of that
clause. An SQLJ executable clause may call a function by means of the SQL
VALUES construct. For example, assume a function F that returns an integer.
The following example illustrates a call to that function that then assigns its
result to Java local variable x:
{
  int x;
  #sql x = { VALUES( F(34) ) };
}
1. Translate the Java source code with Embedded SQL to generate the Java
source code MyClass.java and profiles MyClass_SJProfile0.ser,
MyClass_SJProfile1.ser, ... (one profile for each connection context):
sqlj MyClass.sqlj
where dbname is the name of the database. You can also specify these
options on the command line. For example, to specify the database mydata
when translating MyClass, you can issue the following command:
sqlj -url=jdbc:db2:mydata MyClass.sqlj
Note that the SQLJ translator automatically compiles the translated source
code into class files, unless you explicitly turn off the compile option with
the -compile=false clause.
2. Install DB2 SQLJ Customizers on generated profiles and create the DB2
packages in the DB2 database dbname:
db2profc -user=user-name -password=user-password -url=jdbc:db2:dbname
   -prepoptions="bindfile using MyClass0.bnd package using MyClass0"
   MyClass_SJProfile0.ser
db2profc -user=user-name -password=user-password -url=jdbc:db2:dbname
   -prepoptions="bindfile using MyClass1.bnd package using MyClass1"
   MyClass_SJProfile1.ser
...
The translator generates the SQL syntax for the database for which the SQLJ
profile is customized. For example,
i = { VALUES ( F(:x) ) };
but when connecting to a DB2 Universal Database for OS/390 database, DB2
customizes the VALUE statement into:
SELECT F(?) INTO ? FROM SYSIBM.SYSDUMMY1
If you run the DB2 SQLJ profile customizer, db2profc, against a DB2 Universal
Database database and generate a bind file, you cannot use that bind file to
bind up to a DB2 for OS/390 database when there is a VALUES clause in the
bind file. This also applies to generating a bind file against a DB2 for OS/390
database and trying to bind with it to a DB2 Universal Database database.
For detailed information on building and running DB2 SQLJ programs, refer
to the Application Building Guide.
To print the content of the profiles generated by the SQLJ translator in plain
text, use the profp utility as follows:
profp MyClass_SJProfile0.ser
profp MyClass_SJProfile1.ser
...
To print the content of the DB2 customized version of the profile in plain text,
use the db2profp utility as follows, where dbname is the name of the database:
db2profp -user=user-name -password=user-password -url=jdbc:db2:dbname
MyClass_SJProfile0.ser
db2profp -user=user-name -password=user-password -url=jdbc:db2:dbname
MyClass_SJProfile1.ser
...
for SBCS databases. For mixed databases, support is intended for the
BLOB and the DBCLOB types. As a workaround, applications running
on a mixed database system should convert CLOB arguments to
DBCLOB, LONG VARGRAPHIC, or LONG VARCHAR types. For
UDFs, this can be done with the CAST operator.
If you choose to use individual class files, you must store the class files in the
appropriate directory for your operating system. If you declare a class to be
part of a Java package, create the corresponding subdirectories in the function
directory and place the files in the corresponding subdirectory. For example, if
you create a class ibm.tests.test1 for a Linux system, store the
corresponding Java bytecode file (named test1.class) in
sqllib/function/ibm/tests.
The JVM that DB2 invokes uses the CLASSPATH environment variable to
locate Java files. DB2 adds the function directory and sqllib/java/db2java.zip
to the front of your CLASSPATH setting.
To set your environment so that the JVM can find the Java class files, you may
need to set the jdk11_path configuration parameter, or else use the default
value. Also, you may need to set the java_heap_sz configuration parameter to
increase the heap size for your application. For more information on
configuration parameters, refer to the Administration Guide.
If you do not use the Stored Procedure Builder to invoke the debugger,
create the debug table with the following command:
db2 -tf sqllib/misc/db2debug.ddl
where:
authid The user name used for debugging the stored procedure, that is, the
user name used to connect to the database.
schema
The schema name for the stored procedure.
proc_name
The specific name of the stored procedure. This is the specific name
that was provided on the CREATE PROCEDURE command or a
system generated identifier, if no specific name has been provided.
IP_num
The IP address in the form nnn.nnn.nnn.nnn of the client used to
debug the stored procedure.
For example, to enable debugging for the stored procedure MySchema.myProc
by the user USER1 with the debugging client located at the IP address
123.234.111.222, type the following command:
DB2 INSERT INTO db2dbg.routine_debug_user (AUTHID, TYPE,
ROUTINE_SCHEMA, SPECIFICNAME, DEBUG_ON, CLIENT_IPADDR)
VALUES ('USER1', 'S', 'MySchema', 'myProc', 'Y', '123.234.111.222')
AUTHID
VARCHAR(128), NOT NULL, DEFAULT USER. The application authid under
which the debugging for this stored procedure is to be performed. This
is the user ID that was provided on connect to the database.
TYPE
CHAR(1), NOT NULL.
ROUTINE_SCHEMA
VARCHAR(128), NOT NULL.
SPECIFICNAME
VARCHAR(18), NOT NULL.
DEBUG_ON
CHAR(1), NOT NULL, DEFAULT N. Valid values:
v Y - enables debugging for the stored procedure named in
ROUTINE_SCHEMA.SPECIFICNAME
v N - disables debugging for the stored procedure named in
ROUTINE_SCHEMA.SPECIFICNAME. This is the default.
CLIENT_IPADDR
VARCHAR(15), NOT NULL.
CLIENT_PORT
INTEGER, NOT NULL, DEFAULT 8000.
DEBUG_STARTN
INTEGER, NOT NULL. Not used.
DEBUG_STOPN
INTEGER, NOT NULL. Not used.
Installing, Replacing, and Removing JAR Files. You can also CALL the
sqlj.install_jar procedure in an application or from the CLP.
4. Issue the appropriate CREATE PROCEDURE or CREATE FUNCTION SQL
statement for the Java routine.
v For a description and examples of using the CREATE PROCEDURE
statement, see Registering Stored Procedures on page 199.
v For a description and examples of using the CREATE FUNCTION
statement, refer to the SQL Reference.
When you install a JAR file, DB2 extracts the Java class files from the JAR file
and registers each class in the system catalog. DB2 copies the JAR file to a
jar/schema subdirectory of the function directory. DB2 gives the new copy of
the JAR file the name given in the jar-id clause. Do not directly modify a JAR
file which has been installed in the DB2 instance. Instead, you can use the
CALL SQLJ.REMOVE_JAR and CALL SQLJ.REPLACE_JAR commands to
remove or replace an installed JAR file.
Installing, Replacing, and Removing JAR Files
To install or replace a JAR file in the DB2 instance, you can use the following
command syntax at the Command Line Processor:
CALL SQLJ.INSTALL_JAR( jar-url , jar-id )

CALL SQLJ.REPLACE_JAR( jar-url , jar-id )

where jar-id specifies the JAR identifier in the database to be associated
with the file specified by the jar-url.
Note: On OS/2 and Windows 32-bit operating systems, DB2 stores JAR files
in the path specified by the DB2INSTPROF instance-specific registry
setting. To make JAR files unique for an instance, you must specify a
unique value for DB2INSTPROF for that instance.
For example, to install the Proc.jar file located in the
file:/home/db2inst/classes/ directory in the DB2 instance, issue the
following command from the Command Line Processor:
CALL SQLJ.INSTALL_JAR('file:/home/db2inst/classes/Proc.jar', 'myproc_jar')
Subsequent SQL commands that use the Proc.jar file refer to it with
the name myproc_jar. To remove a JAR file from the database, use the CALL
SQLJ.REMOVE_JAR command with the following syntax:
CALL SQLJ.REMOVE_JAR( jar-id )

where jar-id specifies the JAR identifier of the JAR file that is to be
removed from the database.
To remove the JAR file myProc_jar from the database, enter the following
command at the Command Line Processor:
CALL SQLJ.REMOVE_JAR('myProc_jar')
Functions That Return A Single Value in Java: Declare Java methods that
return a single value with the Java return type that corresponds to the
respective SQL data type (see Supported SQL Data Types in Java on
page 639). You can write a scalar UDF that returns an SQL INTEGER value as
follows:
public class JavaExamples {
    public static int getDivision(String division) throws SQLException {
        if (division.equals("Corporate")) return 1;
        else if (division.equals("Eastern")) return 2;
        else if (division.equals("Midwest")) return 3;
        else if (division.equals("Western")) return 4;
        else return 5;
    }
}
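Because the UDF body above is plain Java, the branching logic can be exercised outside the database before you register it with CREATE FUNCTION. A self-contained sketch (the class is repeated here so the snippet compiles on its own; the throws SQLException clause is dropped because nothing in the body can throw it):

```java
public class JavaExamplesCheck {
    // Same branching logic as the getDivision UDF above.
    public static int getDivision(String division) {
        if (division.equals("Corporate")) return 1;
        else if (division.equals("Eastern")) return 2;
        else if (division.equals("Midwest")) return 3;
        else if (division.equals("Western")) return 4;
        else return 5;
    }

    public static void main(String[] args) {
        System.out.println(getDivision("Eastern")); // 2
        System.out.println(getDivision("Pacific")); // 5 (unknown division)
    }
}
```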
Functions That Return Multiple Values in Java: Java methods which are
cataloged as stored procedures may return one or more values. You can also
write Java stored procedures that return multiple result sets; see Returning
Result Sets from Stored Procedures on page 233. To code a method which
will return a predetermined number of values, declare the return type void
and include the types of the expected output as arrays in the method
signature. You can write a stored procedure which returns the names, years of
service, and salaries of the two most senior employees with a salary under a
given threshold as follows:
public class JavaExamples {
public static void lowSenioritySalary
(String[] name1, int[] years1, BigDecimal[] salary1,
String[] name2, int[] years2, BigDecimal[] salary2,
Integer threshhold) throws SQLException {
#sql iterator ByNames (String name, int years, BigDecimal salary);
ByNames result;
#sql result = {"SELECT name, years, salary
FROM staff
WHERE salary < :threshhold
ORDER BY years DESC"};
if (result.next()) {
name1[0] = result.name();
years1[0] = result.years();
salary1[0] = result.salary();
}
else {
name1[0] = "****";
return;
}
if (result.next()) {
name2[0] = result.name();
years2[0] = result.years();
salary2[0] = result.salary();
}
else {
name2[0] = "****";
return;
}
}
}
Note: You cannot use the DB2 JDBC 2.0 driver support for LOB and graphic
types in stored procedures or UDFs. To use LOB or graphic types in
stored procedures or UDFs, you must use the JDBC 1.2 LOB support.
For more information on using DB2 JDBC 1.2 LOB support with
the DB2 JDBC 2.0 driver, see JDBC 2.0 Compatibility on page 651.
However, the JDBC 1.2 specification does not explicitly mention large objects
(LOBs) or graphic types. DB2 provides the following support for LOBs and
graphic types if you use the JDBC 1.2 driver.
If you use LOBs or graphic types in your applications, treat LOBs as the
corresponding LONGVAR type. Because LOB types are declared in SQL with
a maximum length, ensure that you do not return arrays or strings longer
than the declared limit. This consideration applies to SQL string types as well.
Treat GRAPHIC and DBCLOB data types as the corresponding CHAR types.
DB2 clients convert data directly from the server code page to Unicode. The
following JDBC APIs convert data to or from Unicode:
getString
Converts from server code page to Unicode.
setString
Converts from Unicode to server code page.
getUnicodeStream
Converts from server code page to Unicode.
setUnicodeStream
Converts from Unicode to server code page.
The following JDBC APIs involve conversion between the client code page
and the server code page:
setAsciiStream
Converts from client code page to server code page.
getAsciiStream
Converts from server code page to client code page.
Session Sharing
The interoperability methods described above provide a conversion between
the connection abstractions used in SQLJ and those used in JDBC. Both
abstractions share the same database session, that is, the underlying database
connection. Accordingly, calls to methods that affect session state on one
object will also be reflected in the other object, as it is actually the underlying
shared session that is being affected.
JDBC defines the default values for session state of newly created connections.
In most cases, SQLJ adopts these default values. However, whereas a newly
created JDBC connection has auto commit mode on by default, an SQLJ
connection context requires the auto commit mode to be specified explicitly
upon construction.
Perl Restrictions
The Perl DBI module supports only dynamic SQL. When you need to execute
a statement multiple times, you can improve the performance of your Perl
DB2 applications by preparing the statement once with a prepare call and
then executing it repeatedly.
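The prepare-once, execute-many pattern that the DBI prepare call enables can be sketched in Python DB-API terms (using the stdlib sqlite3 module purely for illustration; with the Perl DBI you would call $dbh->prepare once and $sth->execute per row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (firstnme TEXT, lastname TEXT)")

rows = [("SANDRA", "HAAS"), ("DELORES", "QUINTANA"), ("HEATHER", "NICHOLLS")]

# The INSERT is prepared once and executed once per parameter tuple,
# instead of being re-parsed and re-compiled for every row.
conn.executemany(
    "INSERT INTO employee (firstnme, lastname) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
```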
For current information on the restrictions of the version of the DBD::DB2
driver that you install on your workstation, refer to the CAVEATS file in the
DBD::DB2 driver package.
The DBI module automatically loads the DBD::DB2 driver when you create a
database handle using the DBI->connect statement with the following syntax:
my $dbhandle = DBI->connect("dbi:DB2:dbalias", $userID, $password);
where:
$dbhandle
represents the database handle returned by the connect statement
dbalias
represents a DB2 alias cataloged in your DB2 database directory
$userID
represents the user ID used to connect to the database
$password
represents the password for the user ID used to connect to the
database
Step 4. Fetch a row from the result set associated with the statement handle
with a call to fetchrow(). The Perl DBI returns a row as an array
with one value per column. For example, you can return all of the
rows from the statement handle in the previous example using the
following Perl statement:
while (($firstnme, $lastname) = $sth->fetchrow()) {
print "$firstnme: $lastname\n";
}
my $database='dbi:DB2:sample';
my $user='';
my $password='';
my $dbh = DBI->connect($database, $user, $password)
or die "Can't connect to $database: $DBI::errstr";
my $sth = $dbh->prepare(
q{ SELECT firstnme, lastname
FROM employee }
)
or die "Can't prepare statement: $DBI::errstr";
my $rc = $sth->execute
or die "Can't execute statement: $DBI::errstr";
print "Query will return $sth->{NUM_OF_FIELDS} fields.\n\n";
print "$sth->{NAME}->[0]: $sth->{NAME}->[1]\n";
while (($firstnme, $lastname) = $sth->fetchrow()) {
print "$firstnme: $lastname\n";
}
# check for problems which may have terminated the fetch early
warn $DBI::errstr if $DBI::err;
$sth->finish;
$dbh->disconnect;
If you build the DB2 sample programs with the supplied script files, you must
change the include file path specified in the script files to the cobol_i
directory and not the cobol_a directory.
If you do not use the System/390 host data type support feature of the IBM
COBOL compiler, or you use an earlier version of this compiler, then the DB2
include files for your applications are in the following directory:
$HOME/sqllib/include/cobol_a
The include files that are intended to be used in your applications are
described below.
SQL (sql.cbl)
SQLAPREP (sqlaprep.cbl)
This file contains definitions required to write your own
precompiler.
SQLCA (sqlca.cbl)
This file defines the SQL Communication Area (SQLCA)
structure. The SQLCA contains variables that are used by the
database manager to provide an application with error
information about the execution of SQL statements and API
calls.
SQLCA_92 (sqlca_92.cbl)
This file contains a FIPS SQL92 Entry Level compliant version
of the SQL Communications Area (SQLCA) structure. This file
should be included in place of the sqlca.cbl file when
writing DB2 applications that conform to the FIPS SQL92
Entry Level standard. The sqlca_92.cbl file is automatically
included by the DB2 precompiler when the LANGLEVEL
precompiler option is set to SQL92E.
SQLCODES (sqlcodes.cbl)
This file defines constants for the SQLCODE field of the
SQLCA structure.
SQLDA (sqlda.cbl)
This file defines the SQL Descriptor Area (SQLDA) structure.
The SQLDA is used to pass data between an application and
the database manager.
SQLEAU (sqleau.cbl)
This file contains constant and structure definitions required
for the DB2 security audit APIs. If you use these APIs, you
need to include this file in your program. This file also
contains constant and keyword value definitions for fields in
the audit trail record. These definitions can be used by
external or vendor audit trail extract programs.
SQLENV (sqlenv.cbl)
This file defines language-specific calls for the database
environment APIs, and the structures, constants, and return
codes for those interfaces.
SQLETSD (sqletsd.cbl)
This file defines the Table Space Descriptor structure,
SQLETSDESC, which is passed to the Create Database API,
sqlgcrea.
SQLE819A (sqle819a.cbl)
If the code page of the database is 819 (ISO Latin-1), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 500 (EBCDIC International)
binary collation. This file is used by the CREATE DATABASE
API.
SQLE819B (sqle819b.cbl)
If the code page of the database is 819 (ISO Latin-1), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 037 (EBCDIC US English) binary
collation. This file is used by the CREATE DATABASE API.
SQLE850A (sqle850a.cbl)
If the code page of the database is 850 (ASCII Latin-1), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 500 (EBCDIC International)
binary collation. This file is used by the CREATE DATABASE
API.
SQLE850B (sqle850b.cbl)
If the code page of the database is 850 (ASCII Latin-1), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 037 (EBCDIC US English) binary
collation. This file is used by the CREATE DATABASE API.
SQLE932A (sqle932a.cbl)
If the code page of the database is 932 (ASCII Japanese), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 5035 (EBCDIC Japanese) binary
collation. This file is used by the CREATE DATABASE API.
SQLE932B (sqle932b.cbl)
If the code page of the database is 932 (ASCII Japanese), this
sequence sorts character strings that are not FOR BIT DATA
according to the host CCSID 5026 (EBCDIC Japanese) binary
collation. This file is used by the CREATE DATABASE API.
SQL1252A (sql1252a.cbl)
If the code page of the database is 1252 (Windows Latin-1),
this sequence sorts character strings that are not FOR BIT
DATA according to the host CCSID 500 (EBCDIC
International) binary collation. This file is used by the
CREATE DATABASE API.
SQL1252B (sql1252b.cbl)
If the code page of the database is 1252 (Windows Latin-1),
this sequence sorts character strings that are not FOR BIT
DATA according to the host CCSID 037 (EBCDIC US English)
binary collation. This file is used by the CREATE DATABASE
API.
SQLMON (sqlmon.cbl)
This file defines language-specific calls for the database
system monitor APIs, and the structures, constants, and return
codes for those interfaces.
SQLMONCT (sqlmonct.cbl)
This file contains constant definitions and local data structure
definitions required to call the Database System Monitor APIs.
SQLSTATE (sqlstate.cbl)
This file defines constants for the SQLSTATE field of the
SQLCA structure.
SQLUTBCQ (sqlutbcq.cbl)
This file defines the Table Space Container Query data
structure, SQLB-TBSCONTQRY-DATA, which is used with the
table space container query APIs, sqlgstsc, sqlgftcq and
sqlgtcq.
SQLUTBSQ (sqlutbsq.cbl)
This file defines the Table Space Query data structure,
SQLB-TBSQRY-DATA, which is used with the table space
query APIs, sqlgstsq, sqlgftsq and sqlgtsq.
SQLUTIL (sqlutil.cbl)
This file defines the language-specific calls for the utility APIs,
and the structures, constants, and codes required for those
interfaces.
Keyword pair
EXEC SQL
Statement string
Statement terminator
END-EXEC.
For example:
EXEC SQL SELECT col INTO :hostvar FROM table END-EXEC.
v Do not use the COBOL COPY statement to include files containing SQL
statements. SQL statements are precompiled before the module is compiled.
The precompiler will ignore the COBOL COPY statement. Instead, use the
SQL INCLUDE statement to include these files.
To locate the INCLUDE file, the DB2 COBOL precompiler searches the
current directory first, then the directories specified by the DB2INCLUDE
environment variable. Consider the following examples:
EXEC SQL INCLUDE payroll END-EXEC.
If the file specified in the INCLUDE statement is not enclosed in
quotation marks, as above, the precompiler searches for payroll.sqb,
then payroll.cpy, then payroll.cbl, in each directory in which it looks.
EXEC SQL INCLUDE 'pay/payroll.cbl' END-EXEC.
If the file name is enclosed in quotation marks, as above, no extension is
added to the name.
If the file name in quotation marks does not contain an absolute path,
then the contents of DB2INCLUDE are used to search for the file,
prepended to whatever path is specified in the INCLUDE file name. For
example, with DB2 for AIX, if DB2INCLUDE is set to
/disk2:myfiles/cobol, the precompiler searches for
./pay/payroll.cbl, then /disk2/pay/payroll.cbl, and finally
./myfiles/cobol/pay/payroll.cbl. The path where the file is actually
found is displayed in the precompiler messages. On OS/2 and Windows
platforms, substitute back slashes (\) for the forward slashes in the
above example.
Note: The setting of DB2INCLUDE is cached by the DB2 Command Line
Processor. To change the setting of DB2INCLUDE after any CLP
commands have been issued, enter the TERMINATE command, then
reconnect to the database and precompile as usual.
v To continue a string constant to the next line, column 7 of the continuing
line must contain a '-' and column 12 or beyond must contain a string
delimiter.
v SQL arithmetic operators must be delimited by blanks.
v Full-line COBOL comments can occur anywhere in the program, including
within SQL statements.
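The quoted-INCLUDE search order described above can be modeled as follows (an illustrative Python sketch, not part of DB2; the function name is invented, and a UNIX-style ':' path separator is assumed):

```python
def include_search_paths(name: str, db2include: str) -> list:
    """Search order for a quoted INCLUDE file name with a relative path:
    the current directory first, then each DB2INCLUDE entry prepended
    to the INCLUDE file name."""
    paths = ["./" + name]
    for entry in db2include.split(":"):
        prefix = entry if entry.startswith("/") else "./" + entry
        paths.append(prefix + "/" + name)
    return paths

# Reproduces the DB2 for AIX example in the text:
order = include_search_paths("pay/payroll.cbl", "/disk2:myfiles/cobol")
# ['./pay/payroll.cbl', '/disk2/pay/payroll.cbl',
#  './myfiles/cobol/pay/payroll.cbl']
```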
Syntax for Numeric Host Variables in COBOL

{01 | 77} variable-name {PICTURE | PIC} [IS] picture-string
[USAGE [IS]] {COMP-3 | COMPUTATIONAL-3 | COMP-5 | COMPUTATIONAL-5}
[VALUE [IS] value].

Floating Point

{01 | 77} variable-name [USAGE [IS]]
{COMP-1 | COMPUTATIONAL-1 | COMP-2 | COMPUTATIONAL-2}
[VALUE [IS] value].

Syntax for Character Host Variables in COBOL: Fixed Length

{01 | 77} variable-name {PICTURE | PIC} [IS] picture-string
[VALUE [IS] value].

Variable Length

01 variable-name.
49 identifier-1 {PICTURE | PIC} [IS] S9(4)
[USAGE [IS]] COMP-5 [VALUE [IS] value].
49 identifier-2 {PICTURE | PIC} [IS] picture-string
[VALUE [IS] value].
6. Variable-length strings consist of a length item and a value item. You can
use acceptable COBOL names for the length item and the string item.
However, refer to the variable-length string by the collective name in SQL
statements.
7. In a CONNECT statement, such as shown below, COBOL character string
host variables dbname and userid will have any trailing blanks removed
before processing:
EXEC SQL CONNECT TO :dbname USER :userid USING :p-word
END-EXEC.
Syntax for Graphic Host Variables in COBOL: Fixed Length shows the syntax
for graphic host variables.
Syntax for Graphic Host Variables in COBOL: Fixed Length
{01 | 77} variable-name {PICTURE | PIC} [IS] picture-string
[USAGE [IS]] DISPLAY-1 [VALUE [IS] value].

Variable Length

01 variable-name.
49 identifier-1 {PICTURE | PIC} [IS] S9(4)
[USAGE [IS]] COMP-5 [VALUE [IS] value].
49 identifier-2 {PICTURE | PIC} [IS] picture-string
[USAGE [IS]] DISPLAY-1 [VALUE [IS] value].
Syntax for LOB Host Variables in COBOL

01 variable-name [USAGE [IS]] SQL TYPE IS
{BLOB | CLOB | DBCLOB} ( length [K | M | G] ).
CLOB Example:
Declaring:
01 MY-CLOB USAGE IS SQL TYPE IS CLOB(125M).
DBCLOB Example:
Declaring:
01 MY-DBCLOB USAGE IS SQL TYPE IS DBCLOB(30000).
Syntax for LOB Locator Host Variables in COBOL

01 variable-name [USAGE [IS]] SQL TYPE IS
{BLOB-LOCATOR | CLOB-LOCATOR | DBCLOB-LOCATOR}.

Syntax for File Reference Host Variables in COBOL

01 variable-name [USAGE [IS]] SQL TYPE IS
{BLOB-FILE | CLOB-FILE | DBCLOB-FILE}.
Group data items in the declare section can have any of the valid host
variable types described above as subordinate data items. This includes all
numeric and character types, as well as all large object types. You can nest
group data items up to 10 levels. Note that you must declare VARCHAR
character types with the subordinate items at level 49, as in the above
example. If they are not at level 49, the VARCHAR is treated as a group data
item with two subordinates, and is subject to the rules of declaring and using
group data items. In the example above, staff-info is a group data item,
whereas staff-name is a VARCHAR. The same principle applies to LONG
VARCHAR, VARGRAPHIC and LONG VARGRAPHIC. You may declare
group data items at any level between 02 and 49.
You can use group data items and their subordinates in four ways:
Method 1.
The entire group may be referenced as a single host variable in an SQL
statement:
EXEC SQL SELECT id, name, dept, job
INTO :staff-record
FROM staff WHERE id = 10 END-EXEC.
Method 2.
Subordinate items may be referenced individually, each fully qualified
with its group names:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-record.staff-id,
:staff-record.staff-name,
:staff-record.staff-info.staff-dept,
:staff-record.staff-info.staff-job
FROM staff WHERE id = 10 END-EXEC.
Note: The reference to staff-id is qualified with its group name using the
prefix staff-record., and not staff-id of staff-record as in pure
COBOL.
Assuming there are no other host variables with the same names as the
subordinates of staff-record, the above statement can also be coded as in
method 3, eliminating the explicit group qualification.
Method 3.
Here, subordinate items are referenced in a typical COBOL fashion, without
being qualified to their particular group item:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-id,
:staff-name,
:staff-dept,
:staff-job
FROM staff WHERE id = 10 END-EXEC.
Method 4.
To resolve the ambiguous reference, you can use partial qualification of the
subordinate item, for example:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-id,
:staff-name,
:staff-info.staff-dept,
:staff-info.staff-job
FROM staff WHERE id = 10 END-EXEC.
For example:
01 staff-indicator-table.
05 staff-indicator pic s9(4) comp-5
occurs 7 times.
This indicator table can be used effectively with the first format of group item
reference above:
EXEC SQL SELECT id, name, dept, job
INTO :staff-record :staff-indicator
FROM staff WHERE id = 10 END-EXEC.
That is, the subordinate item a1, declared with the REDEFINES clause is not
automatically expanded out in such situations. If a1 is unambiguous, you can
explicitly refer to a subordinate with a REDEFINES clause in an SQL
statement, as follows:
... INTO :foo.a1 ...
or
... INTO :a1 ...
Not every possible data description for host variables is recognized. COBOL
data items must be consistent with the ones described in the following table.
If you use other data items, an error can result.
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Table 34. SQL Data Types Mapped to COBOL Declarations

SMALLINT (500 or 501)
01 name PIC S9(4) COMP-5.
16-bit signed integer
INTEGER (496 or 497)
01 name PIC S9(9) COMP-5.
32-bit signed integer
BIGINT (492 or 493)
01 name PIC S9(18) COMP-5.
64-bit signed integer
DECIMAL(p,s) (484 or 485)
01 name PIC S9(m)V9(n) COMP-3.
Packed decimal
REAL2 (480 or 481)
01 name USAGE IS COMP-1.
Single-precision floating point
DOUBLE3 (480 or 481)
01 name USAGE IS COMP-2.
Double-precision floating point
CHAR(n) (452 or 453)
01 name PIC X(n).
Fixed-length character string
VARCHAR(n) (448 or 449)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC X(n).
Variable-length character string, 1<=n<=32 672
LONG VARCHAR (456 or 457)
01 name.
49 length PIC S9(4) COMP-5.
49 data PIC X(n).
Long variable-length character string, 32 673<=n<=32 700
CLOB(n) (408 or 409)
01 name USAGE IS SQL TYPE IS CLOB(n).
Large object variable-length character string
BLOB(n) (404 or 405)
01 name USAGE IS SQL TYPE IS BLOB(n).
Large object variable-length binary string
DATE (384 or 385)
01 name PIC X(10).
10-byte character string
TIME (388 or 389)
01 name PIC X(8).
8-byte character string
TIMESTAMP (392 or 393)
01 name PIC X(26).
26-byte character string
Note: The following data types are only available in the DBCS environment.
GRAPHIC(n) (468 or 469)
01 name PIC G(n) DISPLAY-1.
Fixed-length double-byte character string
VARGRAPHIC(n) (464 or 465)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC G(n) DISPLAY-1.
Variable length double-byte character string with 2-byte string length
indicator, 1<=n<=16 336
LONG VARGRAPHIC (472 or 473)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC G(n) DISPLAY-1.
Variable length double-byte character string with 2-byte string length
indicator, 16 337<=n<=16 350
DBCLOB(n) (412 or 413)
01 name USAGE IS SQL TYPE IS DBCLOB(n).
Large object variable-length double-byte character string
Note:
1. The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that would appear in the SQLTYPE field of the
SQLDA for these data types.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
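The pairing in note 1 follows a simple rule: the even SQLTYPE value of each pair means no indicator variable, and the odd value (even + 1) means an indicator variable is provided. A small Python sketch (illustrative only; the helper names are invented):

```python
def has_indicator(sqltype: int) -> bool:
    """Odd SQLTYPE values (e.g. 453 for CHAR, 449 for VARCHAR) mean an
    indicator variable is provided for NULLs or truncated lengths."""
    return sqltype % 2 == 1

def base_sqltype(sqltype: int) -> int:
    """Strip the indicator bit to get the even base value of the pair."""
    return sqltype & ~1

assert has_indicator(453) and not has_indicator(452)   # CHAR pair
assert base_sqltype(449) == 448                        # VARCHAR pair
```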
The following is a sample SQL declare section with a host variable declared
for each supported SQL data type.
Chapter 23. Programming in COBOL
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
*
01 age            PIC S9(4) COMP-5.
01 divis          PIC S9(9) COMP-5.
01 ident          PIC S9(18) COMP-5.
01 salary         PIC S9(6)V9(3) COMP-3.
01 bonus          USAGE IS COMP-1.
01 wage           USAGE IS COMP-2.
01 nm             PIC X(5).
01 varchar.
   49 leng        PIC S9(4) COMP-5.
   49 strg        PIC X(14).
01 longvchar.
   49 len         PIC S9(4) COMP-5.
   49 str         PIC X(32700).
01 my-clob          USAGE IS SQL TYPE IS CLOB(1M).
01 my-clob-locator  USAGE IS SQL TYPE IS CLOB-LOCATOR.
01 my-clob-file     USAGE IS SQL TYPE IS CLOB-FILE.
01 my-blob          USAGE IS SQL TYPE IS BLOB(1M).
01 my-blob-locator  USAGE IS SQL TYPE IS BLOB-LOCATOR.
01 my-blob-file     USAGE IS SQL TYPE IS BLOB-FILE.
01 my-date        PIC X(10).
01 my-time        PIC X(8).
01 my-timestamp   PIC X(26).
01 wage-ind       PIC S9(4) COMP-5.
*
EXEC SQL END DECLARE SECTION END-EXEC.
The following are additional rules for supported COBOL data types:
v PIC S9 and COMP-3/COMP-5 are required where shown.
v You can use level number 77 instead of 01 for all column types except
VARCHAR, LONG VARCHAR, VARGRAPHIC, LONG VARGRAPHIC and
all LOB variable types.
v Use the following rules when declaring host variables for DECIMAL(p,s)
column types. Refer to the following sample:
01 identifier PIC S9(m)V9(n) COMP-3
identifier PIC S9(3)V COMP-3
identifier PIC SV9(3) COMP-3
identifier PIC S9V COMP-3
identifier PIC SV9 COMP-3
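The rule relating DECIMAL(p,s) to the PICTURE string can be sketched as follows (illustrative Python; the function name is invented): m = p - s digits before the implied decimal point V, and n = s digits after it.

```python
def decimal_picture(p: int, s: int) -> str:
    """Build the COBOL PICTURE string for DECIMAL(p,s):
    S, then 9(m) for the m = p - s digits before the implied decimal
    point V, then 9(n) for the n = s digits after it."""
    m, n = p - s, s
    before = "9({0})".format(m) if m > 0 else ""
    after = "9({0})".format(n) if n > 0 else ""
    return "S" + before + "V" + after

assert decimal_picture(3, 0) == "S9(3)V"   # PIC S9(3)V COMP-3
assert decimal_picture(3, 3) == "SV9(3)"   # PIC SV9(3) COMP-3
```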
use the system calls available from your operating system. In the case of a
UCS-2 database, you may also consider using the VARCHAR and
VARGRAPHIC scalar functions.
For further information on these functions, refer to the SQL Reference. For
general EUC application development guidelines, see Japanese and
Traditional Chinese EUC and UCS-2 Code Set Considerations on page 521.
SQLAPREP (sqlaprep.f)
This file contains definitions required to write your own
precompiler.
SQLCA (sqlca_cn.f, sqlca_cs.f)
This file defines the SQL Communication Area (SQLCA)
structure. The SQLCA contains variables that are used by the
database manager to provide an application with error
information about the execution of SQL statements and API
calls.
SQLUTIL (sqlutil.f)
This file defines the language-specific calls for the utility APIs,
and the structures, constants, and codes required for those
interfaces.
Keyword
EXEC SQL
Statement string
Statement terminator
The end of the source line serves as the statement terminator. If the line is
continued, the statement terminator is the end of the last continued line.
For example:
EXEC SQL SELECT COL INTO :hostvar FROM TABLE
When they occur outside quotation marks (but inside SQL statements),
end-of-lines and TABs are substituted by a single space.
When they occur inside quotation marks, the end-of-line characters
disappear, provided the string is continued properly for a FORTRAN
program. TABs are not modified.
Note that the actual characters used for end-of-line and TAB vary from
platform to platform. For example, OS/2 uses Carriage Return/Line Feed
for end-of-line, whereas UNIX-based systems use just a Line Feed.
Syntax for Numeric Host Variables in FORTRAN

{INTEGER*2 | INTEGER*4 | REAL*4 | REAL*8 | DOUBLE PRECISION}
varname [/initial-value/] [, varname [/initial-value/]] ...

Syntax for Character Host Variables in FORTRAN: Fixed Length

CHARACTER[*n] varname [/initial-value/] [, varname [/initial-value/]] ...

Variable Length

SQL TYPE IS VARCHAR (length) varname [, varname] ...
      character    my_varchar(1000+2)
      integer*2    my_varchar_length
      character    my_varchar_data(1000)
      equivalence( my_varchar(1),
     +             my_varchar_length )
      equivalence( my_varchar(3),
     +             my_varchar_data )

      character    my_lvarchar(10000+2)
      integer*2    my_lvarchar_length
      character    my_lvarchar_data(10000)
      equivalence( my_lvarchar(1),
     +             my_lvarchar_length )
      equivalence( my_lvarchar(3),
     +             my_lvarchar_data )
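The buffer layout these equivalences describe — a 2-byte integer length followed immediately by the character data — can be illustrated in Python (a sketch; little-endian byte order is assumed here, whereas the real layout follows the platform):

```python
import struct

def make_varchar(text: bytes, maxlen: int = 1000) -> bytes:
    """Pack a 2-byte length followed by the data, mirroring the
    my_varchar_length / my_varchar_data overlay above."""
    if len(text) > maxlen:
        raise ValueError("string longer than declared VARCHAR length")
    return struct.pack("<H", len(text)) + text

buf = make_varchar(b"HELLO")
(length,) = struct.unpack("<H", buf[:2])
assert length == 5
assert buf[2:2 + length] == b"HELLO"
```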
passwd_length= 8
passwd_string = 'password'
EXEC SQL CONNECT TO :dbname USER :userid USING :passwd
Syntax for LOB Host Variables in FORTRAN

SQL TYPE IS {BLOB | CLOB} ( length [K | M | G] )
variable-name [, variable-name] ...
      character    my_blob(2097152+4)
      integer*4    my_blob_length
      character    my_blob_data(2097152)
      equivalence( my_blob(1),
     +             my_blob_length )
      equivalence( my_blob(5),
     +             my_blob_data )
CLOB Example:
Declaring:
sql type is clob(125m) my_clob
      character    my_clob(131072000+4)
      integer*4    my_clob_length
      character    my_clob_data(131072000)
      equivalence( my_clob(1),
     +             my_clob_length )
      equivalence( my_clob(5),
     +             my_clob_data )
Syntax for LOB Locator Host Variables in FORTRAN

SQL TYPE IS {BLOB_LOCATOR | CLOB_LOCATOR} variable-name [, variable-name] ...

Syntax for File Reference Host Variables in FORTRAN

SQL TYPE IS {BLOB_FILE | CLOB_FILE} variable-name [, variable-name] ...
      character    my_file(267)
      integer*4    my_file_name_length
      integer*4    my_file_data_length
      integer*4    my_file_file_options
      character    my_file_name(255)
      equivalence( my_file(1),
     +             my_file_name_length )
      equivalence( my_file(5),
     +             my_file_data_length )
      equivalence( my_file(9),
     +             my_file_file_options )
      equivalence( my_file(13),
     +             my_file_name )
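The 267-byte file reference layout shown by these equivalences — three 4-byte integers followed by a 255-byte file name field — can be sketched in Python (illustrative only; the field order follows the equivalences above, and the file name is a hypothetical one):

```python
import struct

def make_file_ref(name: bytes, file_options: int,
                  data_length: int = 0) -> bytes:
    """Pack name length, data length, and file options as 4-byte
    integers at offsets 1, 5, and 9, followed by a 255-byte name
    field at offset 13 (1-based, as in the FORTRAN overlay)."""
    if len(name) > 255:
        raise ValueError("file name longer than 255 bytes")
    header = struct.pack("<iii", len(name), data_length, file_options)
    return header + name.ljust(255, b" ")

ref = make_file_ref(b"/u/userid/video.mpg", file_options=2)
assert len(ref) == 267
assert struct.unpack("<i", ref[:4])[0] == len(b"/u/userid/video.mpg")
```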
SMALLINT (500 or 501)
INTEGER*2
16-bit signed integer
INTEGER (496 or 497)
INTEGER*4
32-bit signed integer
REAL2 (480 or 481)
REAL*4
Single-precision floating point
DOUBLE3 (480 or 481)
REAL*8
Double-precision floating point
DECIMAL(p,s) (484 or 485)
No exact equivalent; use REAL*8
Packed decimal
CHAR(n) (452 or 453)
CHARACTER*n
Fixed-length character string
VARCHAR(n) (448 or 449)
SQL TYPE IS VARCHAR (n), where n is from 1 to 32 672
Variable-length character string
LONG VARCHAR (456 or 457)
SQL TYPE IS VARCHAR (n), where n is from 32 673 to 32 700
Long variable-length character string
CLOB(n) (408 or 409)
SQL TYPE IS CLOB (n), where n is from 1 to 2 147 483 647
Large object variable-length character string
BLOB(n) (404 or 405)
SQL TYPE IS BLOB (n), where n is from 1 to 2 147 483 647
Large object variable-length binary string
DATE (384 or 385)
CHARACTER*10
10-byte character string
TIME (388 or 389)
CHARACTER*8
8-byte character string
TIMESTAMP (392 or 393)
CHARACTER*26
26-byte character string
Note:
1. The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that would appear in the SQLTYPE field of the
SQLDA for these data types.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
The following is a sample SQL declare section with a host variable declared
for each supported data type:
EXEC SQL BEGIN DECLARE SECTION
        INTEGER*2     AGE /26/
        INTEGER*4     DEPT
        REAL*4        BONUS
        REAL*8        SALARY
        CHARACTER     MI
        CHARACTER*112 ADDRESS
        SQL TYPE IS VARCHAR (512)   DESCRIPTION
        SQL TYPE IS VARCHAR (32000) COMMENTS
        SQL TYPE IS CLOB (1M)       CHAPTER
        SQL TYPE IS CLOB_LOCATOR    CHAPLOC
        SQL TYPE IS CLOB_FILE       CHAPFL
        SQL TYPE IS BLOB (1M)       VIDEO
        SQL TYPE IS BLOB_LOCATOR    VIDLOC
        SQL TYPE IS BLOB_FILE       VIDFL
        CHARACTER*10  DATE
        CHARACTER*8   TIME
        CHARACTER*26  TIMESTAMP
        INTEGER*2     WAGE_IND
EXEC SQL END DECLARE SECTION
Chapter 24. Programming in FORTRAN
The following are additional rules for supported FORTRAN data types:
v You may define dynamic SQL statements longer than 254 characters by
using VARCHAR, LONG VARCHAR, or CLOB host variables.
On OS/2, the RxFuncAdd commands need to be executed only once for all
sessions.
On AIX, the SysAddFuncPkg should be executed in every REXX/SQL
application.
Details on the RxFuncAdd and SysAddFuncPkg APIs are available in the
REXX documentation for OS/2 and AIX, respectively.
SQL statements can be continued onto more than one line. Each part of the
statement should be enclosed in single quotation marks, and a comma must
delimit additional statement text as follows:
CALL SQLEXEC 'SQL text',
'additional text',
.
.
.
'final text'
The pre-declared identifiers must be used for cursor and statement names.
Other names are not allowed.
v When declaring cursors, the cursor name and the statement name should
correspond in the DECLARE statement. For example, if c1 is used as a
cursor name, s1 must be used for the statement name.
v Do not use comments within an SQL statement.
REXX sets the variable VAR to the 3-byte character string 100. If single
quotation marks are to be included as part of the string, follow this example:
VAR = "'100'"
When inserting numeric data into a CHARACTER field, the REXX interpreter
treats numeric data as integer data, thus you must concatenate numeric
strings explicitly and surround them with single quotation marks.
Syntax for LOB Locator Declarations

DECLARE :variable-name [, :variable-name ...] LANGUAGE TYPE
{BLOB | CLOB | DBCLOB} LOCATOR
You must declare LOB locator host variables in your application. When
REXX/SQL encounters these declarations, it treats the declared host variables
as locators for the remainder of the program. Locator values are stored in
REXX variables in an internal format.
Example:
CALL SQLEXEC 'DECLARE :hv1, :hv2 LANGUAGE TYPE CLOB LOCATOR'
Data represented by LOB locators returned from the engine can be freed in
REXX/SQL using the FREE LOCATOR statement which has the following
format:
Syntax for FREE LOCATOR Statement

FREE LOCATOR :variable-name [, :variable-name ...]
Example:
CALL SQLEXEC 'FREE LOCATOR :hv1, :hv2'
Syntax for LOB File Reference Declarations

DECLARE :variable-name [, :variable-name ...] LANGUAGE TYPE
{BLOB | CLOB | DBCLOB} FILE
Example:
CALL SQLEXEC 'DECLARE :hv3, :hv4 LANGUAGE TYPE CLOB FILE'
File reference variables in REXX contain three fields. For the above example
they are:
hv3.FILE_OPTIONS.
Set by the application to indicate how the file will be used.
hv3.DATA_LENGTH.
Set by DB2 to indicate the size of the file.
hv3.NAME.
Set by the application to the name of the LOB file.
For FILE_OPTIONS, the application sets the following keywords:
Keyword (Integer Value)
Meaning
READ (2)
File is to be used for input. This is a regular file that can be opened,
read and closed. The length of the data in the file (in bytes) is
computed (by the application requestor code) upon opening the file.
CREATE (8)
On output, create a new file. If the file already exists, it is an error.
The length (in bytes) of the file is returned in the DATA_LENGTH field of
the file reference variable structure.
OVERWRITE (16)
On output, the existing file is overwritten if it exists, otherwise a new
file is created. The length (in bytes) of the file is returned in the
DATA_LENGTH field of the file reference variable structure.
APPEND (32)
The output is appended to the file if it exists, otherwise a new file is
created. The length (in bytes) of the data that was added to the file
(not the total file length) is returned in the DATA_LENGTH field of the
file reference variable structure.
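In Python terms, the FILE_OPTIONS keywords and a REXX file reference compound variable can be sketched like this (illustrative only; REXX actually stores these as compound-variable stems, and the file name shown is hypothetical):

```python
# Integer values of the FILE_OPTIONS keywords listed above.
FILE_OPTIONS = {"READ": 2, "CREATE": 8, "OVERWRITE": 16, "APPEND": 32}

# A dict standing in for the REXX compound variable hv3.
hv3 = {
    "NAME": "/u/userid/resume.txt",        # hypothetical file name
    "FILE_OPTIONS": FILE_OPTIONS["OVERWRITE"],
    "DATA_LENGTH": 0,                      # set by DB2 on output
}
# As the note below says, NAME_LENGTH must be set by the application too.
hv3["NAME_LENGTH"] = len(hv3["NAME"])

assert hv3["FILE_OPTIONS"] == 16
```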
Note: A file reference host variable is a compound variable in REXX, thus you
must set values for the NAME, NAME_LENGTH and FILE_OPTIONS fields in
addition to declaring them.
You should code this statement at the end of LOB applications. Note that you
can code it anywhere as a precautionary measure to clear declarations which
might have been left by previous applications (for example, at the beginning
of a REXX SQL application).
SMALLINT
(500 or 501)
INTEGER
(496 or 497)
REAL2
(480 or 481)
DOUBLE3
(480 or 481)
DECIMAL(p,s)
(484 or 485)
Packed decimal
CHAR(n)
(452 or 453)
Equivalent to CHAR(n)
LONG VARCHAR
(456 or 457)
Equivalent to CHAR(n)
CLOB(n)
(408 or 409)
Equivalent to CHAR(n)
BLOB(n)
(404 or 405)
DATE
(384 or 385)
Equivalent to CHAR(10)
TIME
(388 or 389)
Equivalent to CHAR(8)
TIMESTAMP
(392 or 393)
Equivalent to CHAR(26)
Note: The following data types are only available in the DBCS environment.
GRAPHIC(n)
(468 or 469)
VARGRAPHIC(n)
(464 or 465)
Equivalent to GRAPHIC(n)
LONG VARGRAPHIC
(472 or 473)
Equivalent to GRAPHIC(n)
DBCLOB(n)
(412 or 413)
Equivalent to GRAPHIC(n)
Notes:
1. The first number under Column Type indicates that an indicator variable is not provided, and the second number
indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values, or to
hold the length of a truncated string.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
prep_string = "SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = ?"
CALL SQLEXEC 'PREPARE S1 FROM :prep_string'
CALL SQLEXEC 'DECLARE C1 CURSOR FOR S1'
CALL SQLEXEC 'OPEN C1 USING :schema_name'
On AIX, your application file can have any extension. You can run your
application using either of the following two methods:
1. At the shell command prompt, type rexx name where name is the name of
your REXX program.
2. If the first line of your REXX program contains a magic number (#!) and
identifies the directory where the REXX/6000 interpreter resides, you can
run your REXX program by typing its name at the shell command prompt.
For example, if the REXX/6000 interpreter file is in the /usr/bin directory,
include the following as the very first line of your REXX program:
#! /usr/bin/rexx
Run your REXX program by typing its file name at the shell command
prompt.
Note: On AIX, you should set the LIBPATH environment variable to include
the directory where the REXX SQL library, db2rexx, is located. For
example:
export LIBPATH=/lib:/usr/lib:/usr/lpp/db2_07_01/lib
For information on how the DB2 APIs work, see the complete descriptions in
the DB2 API chapter of the Administrative API Reference.
If a DB2 API you want to use cannot be called using the SQLDBS routine (and
consequently is not listed in the Administrative API Reference), you may still call
the API by calling the DB2 command line processor (CLP) from within the
REXX application. However, since the DB2 CLP directs output either to the
standard output device or to a specified file, your REXX application cannot
directly access the output from the called DB2 API nor can it easily make a
determination as to whether the called API is successful or not. The SQLDB2
API provides an interface to the DB2 CLP that provides direct feedback to
your REXX application on the success or failure of each called API by setting
the compound REXX variable, SQLCA, after each call.
You can use the SQLDB2 routine to call DB2 APIs using the following syntax:
CALL SQLDB2 'command string'
:value.SQLD
:value.n.SQLTYPE
:value.n.SQLLEN
:value.n.SQLDATA
:value.n.SQLDIND
Notes:
1. Before invoking the stored procedure, the client application must initialize
the REXX variable with appropriate data.
When the SQL CALL statement is executed, the database manager
allocates storage and retrieves the value of the REXX variable from the
REXX variable pool. For an SQLDA used in a CALL statement, the
database manager allocates storage for the SQLDATA and SQLIND fields
based on the SQLTYPE and SQLLEN values.
In the case of a REXX stored procedure (that is, the procedure being called
is itself written in OS/2 REXX), the data passed by the client from either
type of CALL statement or the DARI API is placed in the REXX variable
pool at the database server using the following predefined names:
SQLRIDA
Predefined name for the REXX input SQLDA variable
SQLRODA
Predefined name for the REXX output SQLDA variable
2. When the stored procedure terminates, the database manager also retrieves
the value of the variables from the stored procedure. The values are
returned to the client application and placed in the client's REXX variable
pool.
Considerations on the Client for REXX
When using host variables in the CALL statement, initialize each host variable
to a value that is type compatible with any data that is returned to the host
variable from the server procedure. You should perform this initialization
even if the corresponding indicator is negative.
When using descriptors, SQLDATA must be initialized and contain data that
is type compatible with any data that is returned from the server procedure.
You should perform this initialization even if the SQLIND field contains a
negative value.
Considerations on the Server for REXX
Ensure that all the SQLDATA fields and SQLIND (if it is a nullable type) of
the predefined output SQLDA, SQLRODA, are initialized. For example, if
SQLRODA.SQLD is 2, the following fields must contain some data (even if
the corresponding indicators are negative and the data is not passed back to
the client):
v SQLRODA.1.SQLDATA
v SQLRODA.2.SQLDATA
Part 7. Appendixes
Dynamic1
SQL Procedure
ALLOCATE CURSOR
assignment statement
ASSOCIATE LOCATORS
ALTER { BUFFERPOOL,
NICKNAME,10 NODEGROUP,
SERVER,10 TABLE,
TABLESPACE, USER
MAPPING,10 TYPE, VIEW }
X9
X4
CLOSE
SQLCloseCursor(),
SQLFreeStmt()
X
X
COMMENT ON
COMMIT
SQLEndTran, SQLTransact()
737
Dynamic1
SQL
Procedure
CONNECT (Type 1)
SQLBrowseConnect(),
SQLConnect(),
SQLDriverConnect()
CONNECT (Type 2)
SQLBrowseConnect(),
SQLConnect(),
SQLDriverConnect()
X11
SQLAllocStmt()
CREATE { ALIAS,
BUFFERPOOL,
DISTINCT TYPE,
EVENT MONITOR,
FUNCTION, FUNCTION
MAPPING,10 INDEX, INDEX
EXTENSION, METHOD,
NICKNAME,10 NODEGROUP,
PROCEDURE, SCHEMA,
TABLE, TABLESPACE,
TRANSFORM TYPE
MAPPING,1 TRIGGER, USER
MAPPING,10 TYPE, VIEW,
WRAPPER10 }
DECLARE CURSOR2
DECLARE GLOBAL
TEMPORARY TABLE
DELETE
SQLColAttributes(),
SQLDescribeCol(),
SQLDescribParam()6
SQLDisconnect()
X11
EXECUTE
SQLExecute()
EXECUTE IMMEDIATE
SQLExecDirect()
DESCRIBE
DISCONNECT
DROP
EXPLAIN
FETCH
FLUSH EVENT MONITOR
FOR statement
738
SQLExtendedFetch() ,
SQLFetch(), SQLFetchScroll()7
Dynamic1
FREE LOCATOR
SQL
Procedure
GET DIAGNOSTICS
GOTO statement
GRANT
IF statement
INCLUDE
INSERT
ITERATE
LEAVE statement
LOCK TABLE
LOOP statement
OPEN
PREPARE
REFRESH TABLE
RELEASE
SQLExecute(), SQLExecDirect()
SQLPrepare()
RELEASE SAVEPOINT
RENAME TABLE
RENAME TABLESPACE
REPEAT statement
RESIGNAL statement
RETURN statement
REVOKE
ROLLBACK
SQLEndTran(), SQLTransact()
SAVEPOINT
select-statement
SELECT INTO
SET CONNECTION
SQLSetConnection()
739
Dynamic1
X, SQLSetConnectAttr()
X, SQLSetConnectAttr()
SET INTEGRITY
SQL
Procedure
SET PASSTHRU
10
SET PATH
SET SCHEMA
SET SERVER OPTION
10
SET transition-variable
SIGNAL statement
SIGNAL SQLSTATE
UPDATE
X
5
VALUES INTO
WHENEVER
WHILE statement
740
Dynamic1
SQL
Procedure
Note:
1. You can code all statements in this list as static SQL, but only those marked with X as dynamic
SQL.
2. You cannot execute this statement.
3. An X indicates that you can execute this statement using either SQLExecDirect() or SQLPrepare()
and SQLExecute(). If there is an equivalent DB2 CLI function, the function name is listed.
4. Although this statement is not dynamic, with DB2 CLI you can specify this statement when calling
either SQLExecDirect(), or SQLPrepare() and SQLExecute().
5. You can only use this within CREATE TRIGGER statements.
6. You can only use the SQL DESCRIBE statement to describe output, whereas with DB2 CLI you can
also describe input (using the SQLDescribeParam() function).
7. You can only use the SQL FETCH statement to fetch one row at a time in one direction, whereas
with the DB2 CLI SQLExtendedFetch() and SQLFetchScroll() functions, you can fetch into arrays.
Furthermore, you can fetch in any direction, and at any position in the result set.
8. The DESCRIBE SQL statement has a different syntax than that of the CLP DESCRIBE command. For
information on the DESCRIBE SQL statement, refer to the SQL Reference. For information on the
DESCRIBE CLP command, refer to the Command Reference.
9. When CALL is issued through the command line processor, only certain procedures and their
respective parameters are supported (see Installing, Replacing, and Removing JAR Files on page
669).
10. Statement is supported only for federated database servers.
11. SQL procedures can only issue CREATE and DROP statements for indexes, tables, and views.
Sample program directories and file extensions, by language:
v C
  Directory: samples/c; embedded SQL extension: .sqc
  Directory: samples/cli (CLI programs); non-embedded SQL extension: .c
v C++
  Directory: samples/cpp
  Embedded SQL extensions: .sqC (UNIX), .sqx (Windows & OS/2)
  Non-embedded SQL extensions: .C (UNIX), .cxx (Windows & OS/2)
v COBOL
  Directories: samples/cobol, samples/cobol_mf
  Embedded SQL extension: .sqb; non-embedded SQL extension: .cbl
v JAVA
  Directory: samples/java
  Embedded SQL (SQLJ) extension: .sqlj; non-embedded (JDBC) extension: .java
v REXX
  Directory: samples/rexx
  Extension: .cmd (embedded and non-embedded)
Other sample program directories and file extensions:
v CLP: directory samples/clp; file extension .db2
v OLE
v OLE DB: directory samples\oledb; file extension .db2
v SQL Procedures: directory samples/sqlproc; file extensions .db2, and
  .c and .sqc (client applications)
v User Exit: directory samples/c; file extensions .cad (OS/2),
  .cadsm (UNIX & Windows), .cdisk (UNIX & Windows), .ctape (UNIX)
Note:
Directory Delimiters
On UNIX platforms, directory delimiters are /. On OS/2 and Windows
platforms, they are \. In the tables, the UNIX delimiters are used
unless the directory is only available on Windows and/or OS/2.
File Extensions
Extensions are provided for the samples in the tables where only one
extension exists.
You can find the C source code for embedded SQL and DB2 API programs
in sqllib/samples/c under your database instance directory; the C source
code for DB2 CLI programs is in sqllib/samples/cli. For additional
information about the programs in the samples tables, refer to the README
file in the appropriate samples subdirectory under your DB2 instance. The
README file lists any additional samples that are not included in this
book.
v On OS/2 and Windows 32-bit operating systems.
You can find the C source code for embedded SQL and DB2 API programs
in %DB2PATH%\samples\c under the DB2 install directory; the C source code
for DB2 CLI programs is in %DB2PATH%\samples\cli. The variable %DB2PATH%
determines where DB2 is installed. Depending on the drive where DB2 is
installed, %DB2PATH% will point to drive:\sqllib. For additional information
about the sample programs in the samples tables, refer to the README file in
the appropriate %DB2PATH%\samples subdirectory. The README file lists any
additional samples that are not included in this book.
If your platform is not addressed in Table 39 on page 745, please refer to the
Application Building Guide for information specific to your environment.
The sample programs directory is read-only on most platforms.
Before you alter or build the sample programs, copy them to your working
directory.
Included APIs
backrest
checkerr
cli_info
client
Table 41. DB2 API Sample Programs with No Embedded SQL (continued)
Sample Program
Included APIs
d_dbconf
v sqleatin - Attach
v sqledtin - Detach
v sqlfddb - Get Database Configuration Defaults
d_dbmcon
v sqleatin - Attach
v sqledtin - Detach
v sqlfdsys - Get Database Manager Configuration Defaults
db_udcs
v sqleatin - Attach
v sqlecrea - Create Database
v sqledrpd - Drop Database
db2mon
v sqleatin - Attach
v sqlmon - Get/Update Monitor Switches
v sqlmonss - Get Snapshot
v sqlmonsz - Estimate Size Required for sqlmonss() Output Buffer
v sqlmrset - Reset Monitor
dbcat
dbcmt
dbconf
v sqleatin - Attach
v sqlecrea - Create Database
v sqledrpd - Drop Database
v sqlfrdb - Reset Database Configuration
v sqlfudb - Update Database Configuration
v sqlfxdb - Get Database Configuration
dbinst
dbmconf
v sqleatin - Attach
v sqledtin - Detach
v sqlfrsys - Reset Database Manager Configuration
v sqlfusys - Update Database Manager Configuration
v sqlfxsys - Get Database Manager Configuration
dbsnap
v sqleatin - Attach
v sqlmonss - Get Snapshot
dbstart
dbstop
dcscat
dmscont
v sqleatin - Attach
v sqlecrea - Create Database
v sqledrpd - Drop Database
ebcdicdb
v sqleatin - Attach
v sqlecrea - Create Database
v sqledrpd - Drop Database
migrate
monreset
v sqleatin - Attach
v sqlmrset - Reset Monitor
monsz
v sqleatin - Attach
v sqlmonss - Get Snapshot
v sqlmonsz - Estimate Size Required for sqlmonss() Output Buffer
nodecat
restart
setact
setrundg
sws
v sqleatin - Attach
v sqlmon - Get/Update Monitor Switches
utilapi
Included APIs
asynrlog
autocfg
v db2AutoConfig - Autoconfig
v db2AutoConfigMemory - Autoconfig Free Memory
v sqlfudb - Update Database Configuration
v sqlfusys - Update Database Manager Configuration
v sqlesetc - Set Client
v sqlaintp - SQLCA Message
dbauth
dbstat
expsamp
v sqluexpr - Export
v sqluimpr - Import
impexp
v sqluexpr - Export
v sqluimpr - Import
loadqry
makeapi
v sqlabndx - Bind
v sqlaprep - Precompile Program
v sqlepstp - Stop Database Manager
v sqlepstr - Start Database Manager
rebind
v sqlarbnd - Rebind
rechist
tabscont
tabspace
tload
v sqluexpr - Export
v sqluload - Load
v sqluvqdp - Quiesce Tablespaces for Table
tspace
utilemb
Program Description
adhoc
Demonstrates dynamic SQL and the SQLDA structure to process SQL commands
interactively. SQL commands are input by the user, and output corresponding to
the SQL command is returned. See Example: ADHOC Program on page 154 for
details.
advsql
Demonstrates the use of advanced SQL expressions like CASE, CAST, and scalar
fullselects.
blobfile
columns
Demonstrates the use of a cursor that is processed using dynamic SQL. This
program lists a result set from SYSCAT.COLUMNS under a desired schema name.
cursor
Demonstrates the use of a cursor using static SQL. See Example: Cursor Program
on page 84 for details.
delet
dynamic
joinsql
Table 43. Embedded SQL Sample programs with No DB2 APIs (continued)
Sample Program
Name
Program Description
largevol
lobeval
Demonstrates the use of LOB locators and defers the evaluation of the actual LOB
data. See How the Sample LOBEVAL Program Works on page 360 for details.
lobfile
Demonstrates the use of LOB file handles. See How the Sample LOBFILE
Program Works on page 368 for details.
lobloc
Demonstrates the use of LOB locators. See How the Sample LOBLOC Program
Works on page 353 for details.
lobval
openftch
Demonstrates fetching, updating, and deleting of rows using static SQL. See How
the OPENFTCH Program Works on page 93 for details.
recursql
sampudf
spclient
A client application that calls stored procedures in the spserver shared library.
spcreate.db2
A CLP script that contains the CREATE PROCEDURE statements to register the
stored procedures created by the spserver program.
spdrop.db2
A CLP script that contains the DROP PROCEDURE statements necessary for
deregistering the stored procedures created by the spserver program.
spserver
static
tabsql
tbdefine
thdsrver
Demonstrates the use of POSIX threads APIs for thread creation and management.
The program maintains a pool of contexts. A generate_work function is executed
from main, and creates dynamic SQL statements that are executed by worker
threads. When a context becomes available, a thread is created and dispatched to
do the specified work. The work generated consists of statements to delete entries
from either the STAFF or EMPLOYEE tables of the sample database. This program
is only available on UNIX platforms.
trigsql
udfcli
updat
varinp
Program Description
DB2Udf.java
udfsrv.c
Creates a library with the User-Defined Function ScalarUDF, to access the sample
database tables.
UDFsrv.java
Program Description
utilapi.c
Application Level - Samples that deal with the application level of DB2 and CLI.
apinfo.c
aphndls.c
apsqlca.c
Installation Image Level - Samples that deal with the installation image level of DB2 and CLI.
ilinfo.c
How to get and set installation level information (such as the version of the CLI
driver).
Instance Level - Samples that deal with the instance level of DB2 and CLI.
ininfo.c
dbinfo.c
dbmconn.c
How to connect and disconnect from multiple databases (uses DB2 APIs to
create and drop second database).
dbmuse.c
How to perform transactions with multiple databases (uses DB2 APIs to create
and drop second database).
dbnative.c
dbuse.c
dbusemx.sqc
tbconstr.c
tbinfo.c
tbmod.c
tbread.c
dtlob.c
dtudt.c
udfsrv.c
spdrop.db2
spclient.c
spserver.c
spcall.c
Java Samples
Table 46. Java Database Connectivity (JDBC) Sample Programs
Sample Program
Name
Program Description
DB2Appl.java
A JDBC application that queries the sample database using the invoking
user's privileges.
DB2Applt.java
A JDBC applet that queries the database using the JDBC applet driver. It uses the
user name, password, server, and port number parameters specified in
DB2Applt.html.
DB2Applt.html
An HTML file that embeds the applet sample program, DB2Applt. It needs to be
customized with server and user information.
DB2UdCli.java
A Java client application that calls the Java user-defined function, DB2Udf.
Dynamic.java
MRSPcli.java
This is the client program that calls the server program MRSPsrv. The program
demonstrates multiple result sets being returned from a Java stored procedure.
MRSPsrv.java
This is the server program that is called by the client program, MRSPcli. The
program demonstrates multiple result sets being returned from a Java stored
procedure.
Outcli.java
A Java client application that calls the SQLJ stored procedure, Outsrv.
PluginEx.java
A Java program that adds new menu items and toolbar buttons to the DB2 Web
Control Center.
Spclient.java
A JDBC client application that calls PARAMETER STYLE JAVA stored procedures
in the Spserver stored procedure class.
Spcreate.db2
A CLP script that contains the CREATE PROCEDURE statements to register the
methods contained in the Spserver class as stored procedures.
Spdrop.db2
A CLP script that contains the DROP PROCEDURE statements necessary for
deregistering the stored procedures contained in the Spserver class.
Spserver.java
UDFcli.java
A JDBC client application that calls functions in the Java user-defined function
library, UDFsrv.
UseThrds.java
Shows how to use threads to run an SQL statement asynchronously (JDBC version
of CLI sample async.c).
V5SpCli.java
A Java client application that calls the DB2GENERAL stored procedure, V5Stp.java.
V5Stp.java
Varinp.java
Program Description
App.sqlj
Uses static SQL to retrieve and update data from the EMPLOYEE table of the
sample database.
Applt.sqlj
An applet that queries the database using the JDBC applet driver. It uses the user
name, password, server, and port number parameters specified in Applt.html.
Applt.html
An HTML file that embeds the applet sample program, Applt. It needs to be
customized with server and user information.
Cursor.sqlj
OpF_Curs.sqlj
Openftch.sqlj
Outsrv.sqlj
Demonstrates a stored procedure using the SQLDA structure. It fills the SQLDA
with the median salary of the employees in the STAFF table of the sample database.
After the database processing (finding the median), the stored procedure returns
the filled SQLDA and the SQLCA status to the JDBC client application, Outcli.
Stclient.sqlj
An SQLJ client application that calls PARAMETER STYLE JAVA stored procedures
created by the SQLJ stored procedure program, Stserver.
Stcreate.db2
A CLP script that contains the CREATE PROCEDURE statements to register the
methods contained in the Stserver class as stored procedures.
Stdrop.db2
A CLP script that contains the DROP PROCEDURE statements necessary for
deregistering the stored procedures contained in the Stserver class.
Stserver.sqlj
Static.sqlj
Stp.sqlj
A stored procedure that updates the EMPLOYEE table on the server, and returns
new salary and payroll information to the JDBC client program, StpCli.
Table 47. Embedded SQL for Java (SQLJ) Sample Programs (continued)
Sample Program
Name
Program Description
UDFclie.sqlj
A client application that calls functions from the Java user-defined function library,
UDFsrv.
Updat.sqlj
Program Description
basecase.db2
basecase.sqc
baseif.db2
baseif.sqc
dynamic.db2
dynamic.sqc
iterate.db2
The ITERATOR procedure uses a FETCH loop to retrieve data from the
department table. If the value of the deptno column is not D11, modified data
is inserted into the department table. If the value of the deptno column is D11,
an ITERATE statement passes the flow of control back to the beginning of the
LOOP statement.
iterate.sqc
leave.db2
leave.sqc
loop.db2
loop.sqc
nestcase.db2
The BUMP_SALARY procedure uses nested CASE statements to raise the salaries
of employees in a department identified by the dept IN parameter from the staff
table of the sample database.
nestcase.sqc
nestif.db2
nestif.sqc
repeat.db2
repeat.sqc
rsultset.c
rsultset.db2
spserver.db2
The SQL procedures in this CLP script demonstrate basic error-handling, nested
stored procedure calls, and returning result sets to the client application or the
calling application. You can call the procedures using the spcall application, in the
CLI samples directory. You can also use the spclient application, in the C and
CPP samples directories, to call the procedures that do not return result sets.
whiles.db2
whiles.sqc
Program Description
Bank.vbp
An RDO program to create and maintain data for bank branches, with the ability to
perform transactions on customer accounts. The program can use any database
specified by the user as it contains the DDL to create the necessary tables for the
application to store data.
Blob.vbp
This ADO program demonstrates retrieving BLOB data. It retrieves and displays
pictures from the emp_photo table of the sample database. The program can also
replace an image in the emp_photo table with one from a local file.
BLOBAccess.dsw
Connect.vbp
This ADO program creates a connection object and establishes a connection to
the sample database. Once completed, the program disconnects and exits.
Commit.vbp
db2com.vbp
This Visual Basic project demonstrates updating a database using the Microsoft
Transaction Server. It creates a server DLL used by the client program, db2mts.vbp,
and has four class modules:
v UpdateNumberColumn.cls
v UpdateRow.cls
v UpdateStringColumn.cls
v VerifyUpdate.cls
For this program a temporary table, DB2MTS, is created in the sample database.
db2mts.vbp
This is a Visual Basic project for a client program that uses the Microsoft
Transaction Server to call the server DLL created from db2com.vbp.
Select-Update.vbp
This ADO program performs the same functions as Connect.vbp, but also provides
a GUI interface. With this interface, the user can view, update, and delete data
stored in the ORG table of the sample database.
Sample.vbp
This Visual Basic project uses Keyset cursors via ADO to provide a graphical user
interface to all data in the sample database.
VarCHAR.dsp
A Visual C++ program that uses ADO to access VarChar data as textfields. It
provides a graphical user interface to allow users to view and update data in the
ORG table of the sample database.
Program Description
sales
names
inbox
invoice
bcounter
ccounter
salarysrv
An OLE automation stored procedure that calculates the median salary of the
STAFF table of the sample database (implemented in Visual Basic).
salarycltvc
A Visual C++ embedded SQL sample that calls the Visual Basic stored procedure,
salarysrv.
salarycltvb
A Visual Basic DB2 CLI sample that calls the Visual Basic stored procedure,
salarysrv.
testcli
An OLE automation embedded SQL client application that calls the stored
procedure, tstsrv (implemented in Visual C++).
tstsrv
Table 51. Object Linking and Embedding Database (OLE DB) Table Functions
Sample Program
Name
Program Description
jet.db2
Microsoft.Jet.OLEDB.3.51 Provider
mapi.db2
msdaora.db2
msdasql.db2
msidxs.db2
notes.db2
sampprov.db2
sqloledb.db2
File Description
const.db2
cte.db2
flt.db2
join.db2
stock.db2
Demonstrates the use of triggers. The equivalent sample program demonstrating this
advanced SQL statement is trigsql.
testdata.db2
Uses DB2 built-in functions such as RAND() and TRANSLATE() to populate a table
with randomly generated test data.
thaisort.db2
This script is specifically for Thai users. Thai sorting is phonetic, and
requires pre-sorting (swapping the leading vowel and its consonant) as well as
post-sorting in order to view the data in the correct sort order. The script
implements Thai sorting by creating the UDFs presort and postsort, and creating
a table; it then calls the functions against the table to sort the table data.
To run this program, you must first build the user-defined function program,
udf, from the C source file, udf.c.
File Description
db2uext2.cadsm
This is a sample User Exit utilizing ADSTAR Distributed Storage Manager (ADSM) APIs to archive and
retrieve database log files. The sample provides an audit trail of calls (stored in a
separate file for each option) including a timestamp and parameters received. It also
provides an error trail of calls in error including a timestamp and an error isolation
string for problem determination. These options can be disabled. The file must be
renamed db2uext2.c and compiled as a C program. Available on UNIX and Windows
32-bit operating systems. The OS/2 version is db2uexit.cad.
db2uexit.cad
This is the OS/2 version of db2uext2.cadsm. The file must be renamed db2uexit.c
and compiled as a C program.
db2uext2.cdisk
This is a sample User Exit utilizing the system copy command for the particular
platform on which it ships. The program archives and retrieves database log files,
and provides an audit trail of calls (stored in a separate file for each option)
including a timestamp and parameters received. It also provides an error trail of calls
in error including a timestamp and an error isolation string for problem
determination. These options can be disabled. The file must be renamed db2uext2.c
and compiled as a C program. Available on UNIX and Windows 32-bit operating
systems.
db2uext2.ctape
This is a sample User Exit utilizing system tape commands for the particular UNIX
platform on which it ships. The program archives and retrieves database log files. All
limitations of the system tape commands are limitations of this user exit. The sample
provides an audit trail of calls (stored in a separate file for each option) including a
timestamp and parameters received. It also provides an error trail of calls in error
including a timestamp and an error isolation string for problem determination. These
options can be disabled. The file must be renamed db2uext2.c and compiled as a C
program. Available on UNIX platforms only.
COM.ibm.db2.app.StoredProc . . . . . 772
COM.ibm.db2.app.UDF . . . . . . . . 773
COM.ibm.db2.app.Lob . . . . . . . . 775
COM.ibm.db2.app.Blob . . . . . . . 776
COM.ibm.db2.app.Clob . . . . . . . 776
NOT FENCED Stored Procedures . . . 777
Example Input-SQLDA Programs . . . 778
How the Example Input-SQLDA Client Application Works . . . 779
C Example: V5SPCLI.SQC . . . . . . 781
How the Example Input-SQLDA Stored Procedure Works . . . 784
C Example: V5SPSRV.SQC . . . . . . 785
This chapter describes how you can write DB2DARI and DB2GENERAL
parameter style stored procedures and DB2GENERAL UDFs.
v Allocate storage for the SQLDATA and SQLIND fields based upon the
values in SQLTYPE and SQLLEN.
If your application will be working with character strings defined as FOR BIT
DATA, you need to initialize the SQLDAID field to indicate that the SQLDA
includes FOR BIT DATA definitions and the SQLNAME field of each SQLVAR
that defines a FOR BIT DATA element.
If your application will be working with large objects, that is, data with types
of CLOB, BLOB, or DBCLOB, you will also need to initialize the secondary
SQLVAR elements. For information on the SQLDA structure, refer to the SQL
Reference.
SQL_API_FN is a macro that specifies the calling convention for a function;
the convention can vary across the supported operating systems. This macro is
required when you write stored procedures or UDFs.
Following is an example of how a CALL statement maps to a server's
parameter list:
CALL OUTSRV (:empno:empind,:salary:salind)
The parameters to this call are converted into an SQLDA structure with two
SQLVARs. The first SQLVAR points to the empno host variable and the empind
indicator variable. The second SQLVAR points to the salary host variable and
the salind indicator variable.
Note: The SQLDA structure is not passed to the stored procedure if the
number of elements, SQLD, is set to 0. In this case, if the SQLDA is not
passed, the stored procedure receives a NULL pointer.
Data Structure Manipulation
The database manager automatically allocates a duplicate SQLDA structure at
the database server. To reduce network traffic, it is important to indicate
which host variables are input-only, and which ones are output-only. The
client procedure should set the indicator of output-only SQLVARs to -1. The
server procedure should set the indicator for input-only SQLVARs to -128.
This allows the database manager to choose which SQLVARs are passed.
Note that an indicator variable is not reset if the client or the server sets it to a
negative value (indicating that the SQLVAR should not be passed). If the host
variable to which the SQLVAR refers is given a value in the stored procedure
or the client code, its indicator variable should be set to zero or a positive
value so that the value is passed. For example, consider a stored procedure
which takes one output-only parameter, called as follows:
empind = -1;
EXEC SQL CALL storproc(:empno:empind);
When the stored procedure sets the value for the first SQLVAR, it should also
set the value of the indicator to a non-negative value so that the result is
passed back to empno.
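The passing convention above can be modeled in a few lines. This is an illustrative sketch only, not DB2 code; the method names are invented. The rule it captures: an SQLVAR's value travels in a given direction only if the sender left its indicator non-negative.

```java
// Illustrative model of the SQLVAR-passing convention: the client marks
// output-only parameters with indicator -1 before the CALL, the server
// marks input-only parameters with -128, and a value is shipped in a
// direction only when the sender's indicator is non-negative.
public class SqlvarFlow {
    static boolean sentToServer(int clientIndicator) {
        return clientIndicator >= 0;
    }
    static boolean returnedToClient(int serverIndicator) {
        return serverIndicator >= 0;
    }
    public static void main(String[] args) {
        System.out.println(sentToServer(-1));     // false: output-only, not shipped
        System.out.println(returnedToClient(0));  // true: server set a value, so it returns
    }
}
```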
Input/Output SQLVAR
v sqlda.SQLDAID
v sqlda.SQLDABC
v sqlda.SQLN
v sqlda.SQLD
v sqlda.n.SQLTYPE
v sqlda.n.SQLLEN
v sqlda.n.SQLDATA
v sqlda.n.SQLNAME.length
v sqlda.n.SQLNAME.data
v sqlda.n.SQLDATATYPE_NAME
v sqlda.n.SQLLONGLEN
v sqlda.n.SQLDATALEN
Note:
Before invoking the stored procedure, the client application must:
1. Allocate storage for the pointer element based on SQLTYPE and SQLLEN.
2. Initialize the element with the appropriate data.
When called by the application, the database manager:
3. Sends data in the original element to a duplicate element allocated at the stored procedure. The SQLN element is
initialized with the data in the SQLD element.
When invoked, the stored procedure can:
4. Alter data in the duplicate element. The data can be altered as needed since it is not checked for validity or
returned to the client application.
When the stored procedure terminates, the database manager:
5. Checks data in the duplicate elements. If the values in these fields do not match the data in the original elements,
an error is returned.
6. Returns data in the duplicate elements to the original element.
7. The data can be altered as needed since it is not checked for validity.
8. The data pointed to by the elements can be altered as needed since they are not checked for validity but are
returned to the client application.
9. The SQLIND field is not passed in or out if SQLTYPE indicates the column type is not nullable.
In addition, do not change the pointer for the SQLDATA and the SQLIND
fields, although you can change the value that is pointed to by these fields.
Note: It is possible to use the same variable for both input and output.
Before the stored procedure returns, SQLCA information should be explicitly
copied to the SQLCA parameter of the stored procedure.
SQL Column Type (SQLTYPE)   Java Type (Stored Procedures)   Java Type (UDFs)
SMALLINT (500/501)          short                           short
INTEGER (496/497)           int                             int
BIGINT (492/493)            long                            long
FLOAT (480/481)             double                          double
REAL (480/481)              float                           float
DECIMAL(p,s) (484/485)      java.math.BigDecimal            java.math.BigDecimal
NUMERIC(p,s) (504/505)      java.math.BigDecimal            java.math.BigDecimal
CHAR(n) (452/453)           String                          String
CHAR(n) FOR BIT DATA        Blob                            Blob
VARCHAR(n) (448/449)        String                          String
VARCHAR(n) FOR BIT DATA     Blob                            Blob
GRAPHIC(n) (468/469)2       String                          String
VARGRAPHIC(n) (464/465)     String                          String
BLOB(n) (404/405)           Blob                            Blob
CLOB(n) (408/409)           Clob                            Clob
DBCLOB(n) (412/413)         Clob                            Clob
DATE (384/385)4             String                          String
TIME (388/389)4             String                          String
TIMESTAMP (392/393)4        String                          String
Notes:
1. The difference between REAL and DOUBLE in the SQLDA is the length value (4 or 8).
2. Parenthesized types, such as the C null-terminated graphic string, occur in stored procedures when
the calling application uses embedded SQL with some host variable types.
3. The Blob and Clob classes are provided in the COM.ibm.db2.app package. Their interfaces include
routines to generate an InputStream and OutputStream for reading from and writing to a Blob, and a
Reader and Writer for a Clob. See Classes for Java Stored Procedures and UDFs for descriptions of
the classes.
4. SQL DATE, TIME, and TIMESTAMP values use the ISO string encoding in Java, as they do for UDFs
coded in C.
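The DECIMAL and NUMERIC rows deserve a note: java.math.BigDecimal carries the declared scale exactly, which is why it, rather than double, represents DECIMAL(p,s) values. A small self-contained illustration (the values are invented):

```java
import java.math.BigDecimal;

// DECIMAL(p,s)-style values keep their declared scale in BigDecimal;
// a double would introduce binary rounding error instead.
public class DecimalDemo {
    public static void main(String[] args) {
        BigDecimal salary = new BigDecimal("52750.00");          // a DECIMAL(9,2) value
        BigDecimal raise = salary.multiply(new BigDecimal("0.05"));
        System.out.println(raise);               // 2637.5000 (scales add under multiply)
        System.out.println(raise.setScale(2));   // 2637.50 (exact, no rounding needed)
    }
}
```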
771
You can use this handle to run SQL statements. Other methods of the
StoredProc interface are listed in the file
sqllib/samples/java/StoredProc.java.
There are five classes/interfaces that you can use with Java Stored Procedures
or UDFs:
v COM.ibm.db2.app.StoredProc
v COM.ibm.db2.app.UDF
v COM.ibm.db2.app.Lob
v COM.ibm.db2.app.Blob
v COM.ibm.db2.app.Clob
The following sections describe the public aspects of these classes' behavior:
COM.ibm.db2.app.StoredProc
A Java class that contains methods intended to be called as PARAMETER
STYLE DB2GENERAL stored procedures must be public and must implement
this Java interface. You must declare such a class as follows:
public class <user-STP-class> extends COM.ibm.db2.app.StoredProc{ ... }
public StoredProc() [default constructor]
This constructor is called by the database before the stored procedure call.
public boolean isNull(int) throws Exception
This function tests whether an input argument with the given index is an SQL
NULL.
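To make the calling pattern concrete, here is a hedged sketch of such a class. Because COM.ibm.db2.app.StoredProc is only available inside a DB2 server, a local stub stands in for the base class so the sketch is self-contained; the class name, method, and parameters (DivisionProc, getDiv, deptNo, division) are invented for illustration.

```java
// Local stand-in for the DB2-supplied COM.ibm.db2.app.StoredProc base
// class, so this sketch compiles outside a DB2 server. It records the
// last output argument instead of handing it back to the database.
class StoredProcStub {
    int lastIndex;
    String lastValue;
    public boolean isNull(int i) throws Exception { return false; }
    public void set(int i, String value) throws Exception {
        lastIndex = i;
        lastValue = value;
    }
}

// Hypothetical DB2GENERAL-style stored procedure: argument 1 is an
// input, argument 2 an output set through set(); arguments are indexed
// from 1, counting inputs and outputs together.
public class DivisionProc extends StoredProcStub {
    public void getDiv(String deptNo, String division) throws Exception {
        if (isNull(1)) {
            set(2, "UNKNOWN");
        } else {
            set(2, deptNo.startsWith("D") ? "DEVELOPMENT" : "OTHER");
        }
    }
    public static void main(String[] args) throws Exception {
        DivisionProc p = new DivisionProc();
        p.getDiv("D11", null);
        System.out.println("output argument " + p.lastIndex + " = " + p.lastValue);
    }
}
```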
public void set(int, <type>) throws Exception [overloaded for each Java
type in the data type mapping table: short, int, long, float, double,
java.math.BigDecimal, String, Blob, and Clob]
This function sets the output argument with the given index to the given
value. The index has to refer to a valid output argument, the data type must
match, and the value must have an acceptable length and contents. Strings
with Unicode characters must be representable in the database code page.
Errors result in an exception being thrown.
public java.sql.Connection getConnection() throws Exception
This function returns a JDBC object that represents the calling application's
connection to the database. It is analogous to the result of a null SQLConnect()
call in a C stored procedure.
COM.ibm.db2.app.UDF
A Java class that contains methods intended to be called as PARAMETER
STYLE DB2GENERAL UDFs must be public and must implement this Java
interface. You must declare such a class as follows:
public class <user-UDF-class> extends COM.ibm.db2.app.UDF{ ... }
You can only call methods of the COM.ibm.db2.app.UDF interface in the context
of the currently executing UDF. For example, you cannot use operations on
LOB arguments, result- or status-setting calls, etc., after a UDF returns. A Java
exception will be thrown if this rule is violated.
Argument-related calls use a column index to identify the column being set.
These start at 1 for the first argument. Output arguments are numbered
higher than the input arguments. For example, a scalar UDF with three inputs
uses index 4 for the output.
Any exception returned from the UDF is caught by the database and returned
to the caller with SQLCODE -4302, SQLSTATE 38501.
The following methods are associated with the COM.ibm.db2.app.UDF class:
public UDF() [default constructor]
public void close() throws Exception
This function is called by the database at the end of a UDF evaluation, if the
UDF was created with the FINAL CALL option. It is analogous to the final
call for a C UDF. For table functions, close() is called after the CLOSE call to
the UDF method (if NO FINAL CALL is coded or defaulted), or after the
FINAL call (if FINAL CALL is coded). If a Java UDF class does not implement
this function, a no-op stub will handle and ignore this event.
public int getCallType() throws Exception
Table function UDF methods use getCallType() to find out the call type for a
particular call. It returns a value as follows (symbolic defines are provided for
these values in the COM.ibm.db2.app.UDF class definition):
v -2 FIRST call
v -1 OPEN call
v 0 FETCH call
v 1 CLOSE call
v 2 FINAL call
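A table UDF's control flow over these call types can be sketched as follows. The constant names here are local stand-ins defined for the sketch, not the actual symbolic defines in COM.ibm.db2.app.UDF, and the stage descriptions paraphrase the surrounding text.

```java
public class CallTypeDemo {
    // Local stand-ins for the symbolic defines mentioned above.
    static final int FIRST = -2, OPEN = -1, FETCH = 0, CLOSE = 1, FINAL = 2;

    // Typical table-UDF dispatch: the same method is invoked repeatedly,
    // with getCallType() distinguishing the stage of the scan.
    static String stage(int callType) {
        switch (callType) {
            case FIRST: return "one-time initialization (FINAL CALL option only)";
            case OPEN:  return "start of a scan: allocate per-scan state";
            case FETCH: return "produce one row, or SQLSTATE 02000 at end of table";
            case CLOSE: return "end of a scan: release per-scan state";
            case FINAL: return "one-time cleanup (FINAL CALL option only)";
            default:    return "unexpected call type";
        }
    }
    public static void main(String[] args) {
        for (int t : new int[] { OPEN, FETCH, CLOSE }) {
            System.out.println(t + ": " + stage(t));
        }
    }
}
```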
public boolean isNull(int) throws Exception
This function tests whether an input argument with the given index is an SQL
NULL.
public boolean needToSet(int) throws Exception
This function tests whether an output argument with the given index needs to
be set. This may be false for a table UDF declared with DBINFO, if that
column is not used by the UDF caller.
public void set(int, <type>) throws Exception [overloaded for each Java
type in the data type mapping table: short, int, long, float, double,
java.math.BigDecimal, String, Blob, and Clob]
This function sets the output argument with the given index to the given
value. The index has to refer to a valid output argument, the data type must
match, and the value must have an acceptable length and contents. Strings
with Unicode characters must be representable in the database code page.
Errors result in an exception being thrown.
public void setSQLstate(String) throws Exception
This function may be called from a UDF to set the SQLSTATE to be returned
from this call. A table UDF should call this function with 02000 to signal the
end-of-table condition. If the string is not acceptable as an SQLSTATE, an
exception will be thrown.
public void setSQLmessage(String) throws Exception
This function is similar to the setSQLstate function. It sets the SQL message
result. If the string is not acceptable (for example, longer than 70 characters),
an exception will be thrown.
public String getFunctionName() throws Exception
This function returns the name of the executing UDF.
public byte[] getDBinfo() throws Exception
This function returns a raw, unprocessed DBINFO structure for the executing
UDF, as a byte array. You must first declare it with the DBINFO option.
public String getDBname() throws Exception
public String getDBauthid() throws Exception
public String getDBtbschema() throws Exception
public String getDBtbname() throws Exception
public String getDBcolname() throws Exception
public String getDBver_rel() throws Exception
public String getDBplatform() throws Exception
public String getDBapplid() throws Exception
These functions return the value of the appropriate field from the DBINFO
structure of the executing UDF.
public int[] getDBcodepg() throws Exception
This function returns the SBCS, DBCS, and composite code page numbers for
the database, from the DBINFO structure. The returned integer array has the
respective numbers as its first three elements.
public byte[] getScratchpad() throws Exception
This function returns a copy of the scratchpad of the currently executing UDF.
You must first declare the UDF with the SCRATCHPAD option.
public void setScratchpad(byte[]) throws Exception
This function overwrites the scratchpad of the currently executing UDF with
the contents of the given byte array. You must first declare the UDF with the
SCRATCHPAD option. The byte array must have the same size as
getScratchpad() returns.
COM.ibm.db2.app.Lob
This class provides utility routines that create temporary Blob or Clob objects
for computation inside user-defined functions or stored procedures.
public java.io.InputStream getInputStream() throws Exception
This function returns a new InputStream to read the contents of the BLOB.
Efficient seek/mark operations are available on that object.
public java.io.OutputStream getOutputStream() throws Exception
public java.io.Reader getReader() throws Exception
This function returns a new Reader to read the contents of the CLOB or
DBCLOB. Efficient seek/mark operations are available on that object.
public java.io.Writer getWriter() throws Exception
Instead, the sample program makes use of the stored procedures technique to
transmit all of the data across the network in one request, allowing the server
procedure to execute the SQL statements as a group. This technique is shown
in Figure 24.
Figure 24. Inserting rows with a stored procedure. The client workstation
CREATEs the presidents table and calls the server procedure; the server
procedure INSERTs Washington, Jefferson, and Lincoln into the table and
returns its status; the client then receives a message on completion of the
server procedure.
FORTRAN
REXX
See Using GET ERROR MESSAGE in Example Programs on page 119 for the
source code for this error checking utility.
C Example: V5SPCLI.SQC
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqlutil.h>
#include "util.h"
#define CHECKERR(CE_STR)
if (SQLCODE == 0)
{ EXEC SQL COMMIT;
printf("Server Procedure Complete.\n\n");
}
else
{ /* print the error message, roll back the transaction and return */
sqlaintp (eBuffer, 1024, 80, &sqlca);
printf("\n%s\n", eBuffer);
inout_sqlda->sqlvar[0].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[0].sqldata = table_name;
inout_sqlda->sqlvar[0].sqllen  = strlen( table_name ) + 1;
inout_sqlda->sqlvar[0].sqlind  = &tableind;
inout_sqlda->sqlvar[1].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[1].sqldata = data_item0;
inout_sqlda->sqlvar[1].sqllen  = strlen( data_item0 ) + 1;
inout_sqlda->sqlvar[1].sqlind  = &dataind0;
inout_sqlda->sqlvar[2].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[2].sqldata = data_item1;
inout_sqlda->sqlvar[2].sqllen  = strlen( data_item1 ) + 1;
inout_sqlda->sqlvar[2].sqlind  = &dataind1;
inout_sqlda->sqlvar[3].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[3].sqldata = data_item2;
inout_sqlda->sqlvar[3].sqllen  = strlen( data_item2 ) + 1;
inout_sqlda->sqlvar[3].sqlind  = &dataind2;
/***********************************************\
* Call the Remote Procedure via CALL with SQLDA *
\***********************************************/
printf("Use CALL with SQLDA to invoke the Server Procedure named "
"inpsrv.\n");
tableind = dataind0 = dataind1 = dataind2 = 0;
inout_sqlda->sqlvar[0].sqlind = &tableind;
inout_sqlda->sqlvar[1].sqlind = &dataind0;
inout_sqlda->sqlvar[2].sqlind = &dataind1;
inout_sqlda->sqlvar[3].sqlind = &dataind2;
}
/* end of program : inpcli.sqc */
C Example: V5SPSRV.SQC
#include <memory.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlutil.h>
#ifdef __cplusplus
extern "C"
#endif
SQL_API_RC SQL_API_FN inpsrv(void *reserved1,
                             void *reserved2,
                             struct sqlda *inout_sqlda,
                             struct sqlca *ca)
{
/* Declare a local SQLCA */
EXEC SQL INCLUDE SQLCA;
return(SQLZ_DISCONNECT_PROC);
v Locking
v Differences in SQLCODEs and SQLSTATEs
v Using system catalogs
v Isolation levels
v Stored procedures
v NOT ATOMIC compound SQL
v Distributed unit of work
v SQL statements supported or rejected by DB2 Connect.
Mixed-Byte Data
Mixed-byte data can consist of characters from an extended UNIX code (EUC)
character set, a double-byte character set (DBCS) and a single-byte character
set (SBCS) in the same column. On systems that store data in EBCDIC
(OS/390, OS/400, VSE, and VM), shift-out and shift-in characters mark the
start and end of double-byte data. On systems that store data in ASCII (such
as OS/2 and UNIX), shift-in and shift-out characters are not required.
If your application transfers mixed-byte data from an ASCII system to an
EBCDIC system, be sure to allow enough room for the shift characters. For
each switch from SBCS to DBCS data, add 2 bytes to your data length. For
better portability, use variable-length strings in applications that use
mixed-byte data.
Long Fields
Long fields (strings longer than 254 characters) are handled differently on
different systems. A host or AS/400 server may support only a subset of
scalar functions for long fields; for example, DB2 Universal Database for
OS/390 allows only the LENGTH and SUBSTR functions for long fields.
Also, a host or AS/400 server may require different handling for certain SQL
statements; for example, DB2 for VSE & VM requires that with the INSERT
statement, only a host variable, the SQLDA, or a NULL value be used.
ARI
DB2 for VSE & VM
QSQ
DB2 Universal Database for AS/400
SQL
DB2 Universal Database
Precompiling
There are some differences in the precompilers for different IBM relational
database systems. The precompiler for DB2 Universal Database differs from
the host or AS/400 server precompilers in the following ways:
v It makes only one pass through an application.
v When binding against DB2 Universal Database databases, objects must exist
for a successful bind. VALIDATE RUN is not supported.
Blocking
The DB2 Connect program supports the DB2 database manager blocking bind
options:
UNAMBIG
Only unambiguous cursors are blocked (the default).
ALL
Ambiguous cursors are also blocked.
NO
No cursors are blocked.
The DB2 Connect program uses the block size defined in the DB2 database
manager configuration file for the RQRIOBLK field. Current versions of DB2
Connect support block sizes up to 32 767. If larger values are specified in the
DB2 database manager configuration file, DB2 Connect uses a value of 32 767
but does not reset the DB2 database manager configuration file. Blocking is
handled the same way using the same block size for dynamic and static SQL.
Note: Most host or AS/400 server systems consider dynamic cursors
ambiguous, but DB2 Universal Database systems consider some
dynamic cursors unambiguous. To avoid confusion, you can specify
BLOCKING ALL with DB2 Connect.
Specify the block size in the DB2 database manager configuration file by using
the CLP, the Control Center, or an API, as listed in the Administrative API
Reference and Command Reference.
Package Attributes
A package has the following attributes:
Collection ID
The ID of the package. It can be specified on the PREP command.
Owner
The authorization ID of the package owner. It can be specified on the
PREP or BIND command.
Creator
The user name that binds the package.
Qualifier
The implicit qualifier for objects in the package. It can be specified on
the PREP or BIND command.
Each host or AS/400 server system has limitations on the use of these
attributes:
DB2 Universal Database for OS/390
All four attributes can be different. The use of a different qualifier
requires special administrative privileges. For more information on the
conditions concerning the usage of these attributes, refer to the
Command Reference for DB2 Universal Database for OS/390.
DB2 for VSE & VM
All of the attributes must be identical. If USER1 creates a bind file
(with PREP), and USER2 performs the actual bind, USER2 needs DBA
authority to bind for USER1. Only USER1's user name is used for the
attributes.
DB2 Universal Database for AS/400
The qualifier indicates the collection name. The relationship between
qualifiers and ownership affects the granting and revoking of
privileges on the object. The user name that is logged on is the creator
and owner unless it is qualified by a collection ID, in which case the
collection ID is the owner. The collection ID must already exist before
it is used as a qualifier.
DB2 Universal Database
All four attributes can be different. The use of a different owner
requires administrative authority and the binder must have
CREATEIN privilege on the schema (if it already exists).
Note: DB2 Connect provides support for the SET CURRENT PACKAGESET
command for DB2 Universal Database for OS/390 and DB2 Universal
Database.
C Null-terminated Strings
The CNULREQD bind option overrides the handling of null-terminated
strings that are specified using the LANGLEVEL option.
See Null-terminated Strings in C and C++ on page 617 for a description of
how null-terminated strings are handled when prepared with the
LANGLEVEL option set to MIA or SAA1.
By default, CNULREQD is set to YES. This causes null-terminated strings to
be interpreted according to MIA standards. If connecting to a DB2 Universal
Database for OS/390 server it is strongly recommended to set CNULREQD to
YES. You need to bind applications coded to SAA1 standards (with respect to
null-terminated strings) with the CNULREQD option set to NO. Otherwise,
null-terminated strings will be interpreted according to MIA standards, even if
they are prepared using LANGLEVEL set to SAA1.
Locking
The way in which the database server performs locking can affect some
applications. For example, applications designed around row-level locking
and the isolation level of cursor stability are not directly portable to systems
that perform page-level locking. Because of these underlying differences,
applications may need to be adjusted.
The DB2 Universal Database for OS/390 and DB2 Universal Database
products have the ability to time-out a lock and send an error return code to
waiting applications.
specify your own SQLCODE mapping file if you want to override the
default mapping or you are using a database server that does not have
SQLCODE mapping (a non-IBM database server). You can also turn off
SQLCODE mapping.
For more information, see .
Isolation Levels
DB2 Connect accepts the following isolation levels when you prep or bind an
application:
RR
Repeatable Read
RS
Read Stability
CS
Cursor Stability
UR
Uncommitted Read
NC
No Commit
The isolation levels are listed in order from most protection to least protection.
If the host or AS/400 server does not support the isolation level that you
specify, the next higher supported level is used.
Table 56 shows the result of each isolation level on each host or AS/400
application server.
Table 56. Isolation Levels
DB2 Connect  DB2 UDB for OS/390  DB2 for VSE & VM  DB2 UDB for AS/400  DB2 Universal Database
RR           RR                  RR                note 1              RR
RS           note 2              RR                COMMIT(*ALL)        RS
CS           CS                  CS                COMMIT(*CS)         CS
UR           note 3              CS                COMMIT(*CHG)        UR
NC           note 4              note 5            COMMIT(*NONE)       UR
Notes:
1. There is no equivalent COMMIT option on DB2 Universal Database for AS/400 that matches
RR. DB2 Universal Database for AS/400 supports RR by locking the whole table.
2. Results in RR for Version 3.1, and results in RS for Version 4.1 with APAR PN75407 or Version
5.1.
3. Results in CS for Version 3.1, and results in UR for Version 4.1 or Version 5.1.
4. Results in CS for Version 3.1, and results in UR for Version 4.1 with APAR PN60988 or Version
5.1.
5. Isolation level NC is not supported with DB2 for VSE & VM.
With DB2 Universal Database for AS/400, you can access an unjournalled
table if an application is bound with an isolation level of UR and blocking set
to ALL, or if the isolation level is set to NC.
Stored Procedures
v Invocation
A client program can invoke a server program by issuing an SQL CALL
statement. Each server works a little differently from the others in this
case.
OS/390
The schema name must be no more than 8 bytes long, the
procedure name must be no more than 18 bytes long, and the
stored procedure must be defined in the SYSIBM.SYSPROCEDURES
catalog on the server.
VSE or VM
The procedure name must not be more than 18 bytes long and must
be defined in the SYSTEM.SYSROUTINES catalog on the server.
OS/400
The procedure name must be an SQL identifier. You can also use
Additionally, you can obtain SQL costing information about the SQL stored
procedure, including information about CPU time and other DB2 costing
information for the thread on which the SQL stored procedure is running. In
particular, you can obtain costing information about latch/lock contention
wait time, the number of getpages, the number of read I/Os, and the number
of write I/Os.
To obtain costing information, Stored Procedure Builder connects to a DB2 for
OS/390 server, executes the SQL statement, and calls a stored procedure
(DSNWSPM) to find out how much CPU time the SQL stored procedure used.
Enterprise Edition Version 7 on AIX, OS/2, and Windows NT. This enables
the following host database servers to participate in a distributed unit of
work:
DB2
DB2
DB2
DB2
The above is true for native DB2 UDB applications and applications
coordinated by an external Transaction Processing (TP) Monitor such as
IBM TXSeries, CICS for Open Systems, BEA Tuxedo, Encina Monitor, and
Microsoft Transaction Server.
Note: For more information on BEA Tuxedo, see . For more information on
the XA concentrator, see .
v If you have TCP/IP network connections, then a DB2 for OS/390 V5.1 or
later server can participate in a distributed unit of work. If the application
is controlled by a Transaction Processing Monitor such as IBM TXSeries,
CICS for Open Systems, Encina Monitor, or Microsoft Transaction Server,
then you must use the sync point manager.
If a common DB2 Connect Enterprise Edition server is used by both native
DB2 applications and TP monitor applications to access host data over
TCP/IP connections then the sync point manager must be used.
If a single DB2 Connect Enterprise Edition server is used to access host data
using both SNA and TCP/IP network protocols and two phase commit is
required, then the sync point manager must be used. This is true for both
DB2 applications and TP monitor applications.
The code page 500 binary collation sequence (the desired sequence) is:
'a' < 'b' < 'A' < 'B'
If you create the database with ASCII code page 850, binary collation uses
the code page 850 code points: X'61' for a, X'62' for b, X'41' for A, and
X'42' for B. The code page 850 binary collation (which is not the desired
sequence) is therefore:
'A' < 'B' < 'a' < 'b'
To achieve the desired collation, you need to create your database with a
user-defined collating sequence. A sample collating sequence for just this
purpose is supplied with DB2 in the sqle850a.h include file. The content of
sqle850a.h is shown in Figure 25 on page 804.
#ifndef SQL_H_SQLE850A
#define SQL_H_SQLE850A
#ifdef __cplusplus
extern "C" {
#endif
unsigned char sqle_850_500[256] = {
0x00,0x01,0x02,0x03,0x37,0x2d,0x2e,0x2f,0x16,0x05,0x25,0x0b,0x0c,0x0d,0x0e,0x0f,
0x10,0x11,0x12,0x13,0x3c,0x3d,0x32,0x26,0x18,0x19,0x3f,0x27,0x1c,0x1d,0x1e,0x1f,
0x40,0x4f,0x7f,0x7b,0x5b,0x6c,0x50,0x7d,0x4d,0x5d,0x5c,0x4e,0x6b,0x60,0x4b,0x61,
0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0x7a,0x5e,0x4c,0x7e,0x6e,0x6f,
0x7c,0xc1,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xd1,0xd2,0xd3,0xd4,0xd5,0xd6,
0xd7,0xd8,0xd9,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0x4a,0xe0,0x5a,0x5f,0x6d,
0x79,0x81,0x82,0x83,0x84,0x85,0x86,0x87,0x88,0x89,0x91,0x92,0x93,0x94,0x95,0x96,
0x97,0x98,0x99,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xc0,0xbb,0xd0,0xa1,0x07,
0x68,0xdc,0x51,0x42,0x43,0x44,0x47,0x48,0x52,0x53,0x54,0x57,0x56,0x58,0x63,0x67,
0x71,0x9c,0x9e,0xcb,0xcc,0xcd,0xdb,0xdd,0xdf,0xec,0xfc,0x70,0xb1,0x80,0xbf,0xff,
0x45,0x55,0xce,0xde,0x49,0x69,0x9a,0x9b,0xab,0xaf,0xba,0xb8,0xb7,0xaa,0x8a,0x8b,
0x2b,0x2c,0x09,0x21,0x28,0x65,0x62,0x64,0xb4,0x38,0x31,0x34,0x33,0xb0,0xb2,0x24,
0x22,0x17,0x29,0x06,0x20,0x2a,0x46,0x66,0x1a,0x35,0x08,0x39,0x36,0x30,0x3a,0x9f,
0x8c,0xac,0x72,0x73,0x74,0x0a,0x75,0x76,0x77,0x23,0x15,0x14,0x04,0x6a,0x78,0x3b,
0xee,0x59,0xeb,0xed,0xcf,0xef,0xa0,0x8e,0xae,0xfe,0xfb,0xfd,0x8d,0xad,0xbc,0xbe,
0xca,0x8f,0x1b,0xb9,0xb6,0xb5,0xe1,0x9d,0x90,0xbd,0xb3,0xda,0xfa,0xea,0x3e,0x41
};
#ifdef __cplusplus
}
#endif
#endif /* SQL_H_SQLE850A */
Figure 25. User-Defined Collating Sequence - sqle_850_500
To see how to achieve code page 500 binary collation on code page 850
characters, examine the sample collating sequence in sqle_850_500. For each
code page 850 character, its weight in the collating sequence is simply its
corresponding code point in code page 500.
For example, consider the letter a. This letter is code point X'61' for code
page 850 as shown in Figure 27 on page 807. In the array sqle_850_500, letter
a is assigned a weight of X'81' (that is, the 98th element in the array
sqle_850_500).
Consider how the four characters collate when the database is created with
the above sample user-defined collating sequence. The weights from
sqle_850_500 are X'81' for a, X'82' for b, X'C1' for A, and X'C2' for B, so
the code page 850 user-defined collation by weight (the desired collation) is:
'a' < 'b' < 'A' < 'B'
In this example, you achieve the desired collation by specifying the correct
weights to simulate the desired behavior.
Closely observing the actual collating sequence, notice that the sequence itself
is merely a conversion table, where the source code page is the code page of
the database (850) and the target code page is the desired binary collating
code page (500). Other sample collating sequences supplied by DB2 enable
different conversions. If a conversion table that you require is not supplied
with DB2, additional conversion tables can be obtained from the IBM
publication, Character Data Representation Architecture, Reference and Registry,
SC09-2190. You will find the additional conversion tables in a CD-ROM
enclosed with this publication.
For more details on collating sequences, see Collating Sequence Overview
on page 504. Also see the CREATE DATABASE API described in the
Administrative API Reference for a description of the collating sequences
supplied with DB2, and for the listing of a sample program (db_udcs.c) that
demonstrates how to create a database with a user-defined collating sequence.
languages; however, not all the information is translated into every
language. Whenever information is not available in a specific language, the
English information is provided.
On UNIX platforms, you can install multiple language versions of the HTML
files under the doc/%L/html directories, where %L represents the locale. For
more information, refer to the appropriate Quick Beginnings book.
You can obtain DB2 books and access information in a variety of ways.
Description
Form Number
HTML
Directory
SC09-2946
Administration Guide: Planning provides an overview of database concepts,
information about design issues (such as logical and physical database
design), and a discussion of high availability.
db2d1x70
db2d0
SC09-2947
Describes the DB2 application programming interfaces (APIs) and data
structures that you can use to manage your databases. This book also
explains how to call APIs from your applications.
db2b0x70
db2b0
SC09-2948
db2ax
db2axx70
db2apx70
db2a0
Application Development
Guide
SC09-2950
Explains how to develop applications that access DB2 databases using the
DB2 Call Level Interface, a callable SQL interface that is compatible with
the Microsoft ODBC specification.
db2l0x70
db2l0
Command Reference
SC09-2951
db2n0
Connectivity Supplement
db2a0x70
db2n0x70
db2dm
db2dd
SC26-9994
Provides information to help programmers integrate applications with the
Data Warehouse Center and with the Information Catalog Manager.
db2adx70
SC26-9993
db2ddx70
SC09-2954
db2ad
db2c0
db2c0x70
db2dw
SC09-2960
db2ww
Glossary
db2wwx70
db2t0x70
SC26-9929
Information Catalog
Manager Administration
Guide
SC26-9995
dmbu7
dmbu7x70
db2dix70
db2di
Description
Form Number
HTML
Directory
SC26-9997
db2bi
Information Catalog
Manager User's Guide
SC26-9996
Installation and
Configuration Supplement
GC09-2957
Guides you through the planning, installation, and setup of
platform-specific DB2 clients. This supplement also contains information on
binding, setting up client and server communications, DB2 GUI tools, DRDA
AS, distributed installation, the configuration of distributed requests, and
accessing heterogeneous data sources.
db2iyx70
db2iy
Message Reference
db2m0
SC27-0787
SC27-0784
SC27-0783
SC27-0702
SC27-0786
db2bix70
db2ai
db2aix70
n/a
db2dpx70
n/a
db2upx70
n/a
db2lpx70
db2ip
db2ipx70
db2ep
db2epx70
SC27-0785
db2tp
SC26-9920
SC27-0701
Provides information about installing, configuring, administering,
programming, and troubleshooting the Spatial Extender. Also provides
significant descriptions of spatial data concepts and provides reference
information (messages and SQL) specific to the Spatial Extender.
db2sbx70
db2sb
db2y0
db2tpx70
db2e0
db2e0x70
SC09-2973
db2y0x70
db2s0
System Monitor Guide and Reference
SC09-2956
Describes how to collect different kinds of information about databases and
the database manager. This book explains how to use the information to
understand database activity, improve performance, and determine the cause
of problems.
db2f0x70
db2f0
desu9
Text Extender
Administration and
Programming
db2p0
What's New
db2q0
SC09-2976
db2q0x70
GC09-2953
Provides planning, migration, installation, and configuration information
for DB2 Connect Enterprise Edition on the OS/2 and Windows 32-bit operating
systems. This book also contains installation and setup information for
many supported clients.
db2c6x70
db2c6
db2cy
GC09-2967
Provides planning, migration, installation, configuration, and task
information for DB2 Connect Personal Edition on the OS/2 and Windows 32-bit
operating systems. This book also contains installation and setup
information for all supported clients.
db2c1x70
db2c1
GC09-2962
Provides planning, installation, migration, and configuration information
for DB2 Connect Personal Edition on all supported Linux distributions.
db2c4x70
db2c4
db2z6
GC09-2966
db2z6x70
GC09-2964
db2v3
db2v3x70
GC09-2963
DB2 Enterprise - Extended Edition for Windows Quick Beginnings
Provides planning, installation, and configuration information for DB2
Enterprise - Extended Edition for Windows 32-bit operating systems. This
book also contains installation and setup information for many supported
clients.
db2v6x70
db2v6
GC09-2968
Provides planning, installation, migration, and configuration information
for DB2 Universal Database on the OS/2 operating system. This book also
contains installation and setup information for many supported clients.
db2i2x70
db2i2
GC09-2970
Provides planning, installation, migration, and configuration information
for DB2 Universal Database on UNIX-based platforms. This book also contains
installation and setup information for many supported clients.
db2ixx70
db2ix
db2i6
db2i1
GC09-2972
Provides planning, installation, migration, and configuration information
for DB2 Universal Database Personal Edition on all supported Linux
distributions.
db2i4x70
db2i4
GC09-2959
GC26-9998
db2iw
db2iwx70
db2id
db2idx70
Provides late-breaking installation-specific information that could not be
included in the DB2 books. Available on product CD-ROM only.
db2cr
db2ir
Notes:
1. The character x in the sixth position of the file name indicates the
language version of a book. For example, the file name db2d0e70 identifies
the English version of the Administration Guide and the file name db2d0f70
identifies the French version of the same book. The following letters are
used in the sixth position of the file name to indicate the language version:
Language             Identifier
Brazilian Portuguese b
Bulgarian            u
Czech                x
Danish               d
Dutch                q
English              e
Finnish              y
French               f
German               g
Greek                a
Hungarian            h
Italian              i
Japanese             j
Korean               k
Norwegian            n
Polish               p
Portuguese           v
Russian              r
Simp. Chinese        c
Slovenian            l
Spanish              z
Swedish              s
Trad. Chinese        t
Turkish              m
2. Late breaking information that could not be included in the DB2 books is
available in the Release Notes in HTML format and as an ASCII file. The
HTML version is available from the Information Center and on the
product CD-ROMs. To view the ASCII file:
v On UNIX-based platforms, see the Release.Notes file. This file is located
in the DB2DIR/Readme/%L directory, where %L represents the locale
name and DB2DIR represents:
/usr/lpp/db2_07_01 on AIX
/opt/IBMdb2/V7.1 on HP-UX, PTX, Solaris, and Silicon Graphics
IRIX
/usr/IBMdb2/V7.1 on Linux.
v On other platforms, see the RELEASE.TXT file. This file is located in the
directory where the product is installed. On OS/2 platforms, you can
also double-click the IBM DB2 folder and then double-click the Release
Notes icon.
You can obtain the latest version of the Adobe Acrobat Reader from the
Adobe Web site at http://www.adobe.com.
The PDF files are included on the DB2 publications CD-ROM with a file
extension of PDF. To access the PDF files:
1. Insert the DB2 publications CD-ROM. On UNIX-based platforms, mount
the DB2 publications CD-ROM. Refer to your Quick Beginnings book for
the mounting procedures.
2. Start the Acrobat Reader.
3. Open the desired PDF file from one of the following locations:
v On OS/2 and Windows platforms:
x:\doc\language directory, where x represents the CD-ROM drive and
language represents the two-character country code for your language (for
example, EN for English).
v On UNIX-based platforms:
/cdrom/doc/%L directory on the CD-ROM, where /cdrom represents the
mount point of the CD-ROM and %L represents the name of the desired
locale.
You can also copy the PDF files from the CD-ROM to a local or network drive
and read them from there.
Books Included
v Administration Guide: Planning
v Administration Guide: Implementation
v Administration Guide: Performance
v Administrative API Reference
v Application Building Guide
v Application Development Guide
v CLI Guide and Reference
v Command Reference
v Data Movement Utilities Guide and
Reference
v Data Warehouse Center Administration
Guide
v Data Warehouse Center Application
Integration Guide
v DB2 Connect User's Guide
v Installation and Configuration
Supplement
v Image, Audio, and Video Extenders
Administration and Programming
v Message Reference, Volumes 1 and 2
SBOF-8935
Type of Help
Contents
How to Access...
Command Help
Client Configuration
Assistant Help
Command Center Help
Control Center Help
Data Warehouse Center
Help
Type of Help
Contents
How to Access...
SQL Help
SQLSTATE Help
If you have not installed the Information Center, you can open the page
by double-clicking the DB2 Information icon. Depending on the system
you are using, the icon is in the main product folder or the Windows
Start menu.
Installing the Netscape Browser
If you do not already have a Web browser installed, you can install Netscape
from the Netscape CD-ROM found in the product boxes. For detailed
instructions on how to install it, perform the following:
1. Insert the Netscape CD-ROM.
2. On UNIX-based platforms only, mount the CD-ROM. Refer to your Quick
Beginnings book for the mounting procedures.
3. For installation instructions, refer to the CDNAVnn.txt file, where nn
represents your two character language identifier. The file is located at the
root directory of the CD-ROM.
Accessing Information with the Information Center
The Information Center provides quick access to DB2 product information.
The Information Center is available on all platforms on which the DB2
administration tools are available.
You can open the Information Center by double-clicking the Information
Center icon. Depending on the system you are using, the icon is in the
Information folder in the main product folder or the Windows Start menu.
You can also access the Information Center by using the toolbar and the Help
menu on the DB2 Windows platform.
The Information Center provides six types of information. Click the
appropriate tab to look at the topics provided for that type.
Tasks
Reference
Books
DB2 books.
Troubleshooting
Categories of error messages and their recovery actions.
Sample Programs
Sample programs that come with the DB2 Application
Development Client. If you did not install the DB2
Application Development Client, this tab is not displayed.
Web
When you select an item in one of the lists, the Information Center launches a
viewer to display the information. The viewer might be the system help
viewer, an editor, or a Web browser, depending on the kind of information
you select.
The Information Center provides a find feature, so you can look for a specific
topic without browsing the lists.
For a full text search, follow the hypertext link in the Information Center to
the Search DB2 Online Information search form.
The HTML search server is usually started automatically. If a search in the
HTML information does not work, you may have to start the search server
using one of the following methods:
On Windows
Click Start and select Programs > IBM DB2 > Information >
Start HTML Search Server.
On OS/2
Double-click the DB2 for OS/2 folder, and then double-click the Start
HTML Search Server icon.
Refer to the release notes if you experience any other problems when
searching the HTML information.
Note: The Search function is not available in the Linux, PTX, and Silicon
Graphics IRIX environments.
How to Access...
Add Database
Backup Database
Wizard
How to Access...
Configure Multisite
Update
Create Database
Create Table
Create Index
Performance
Configuration
Restore Database
2. Configure the Web server to look for the files in the new location. For
information, refer to the NetQuestion Appendix in the Installation and
Configuration Supplement.
3. If you are using the Java version of the Information Center, you can
specify a base URL for all HTML files. You should use the URL for the list
of books.
4. When you are able to view the book files, you can bookmark commonly
viewed topics. You will probably want to bookmark the following pages:
v List of books
v Tables of contents of frequently used books
v Frequently referenced articles, such as the ALTER TABLE topic
v The Search form
For information about how you can serve the DB2 Universal Database online
documentation files from a central machine, refer to the NetQuestion
Appendix in the Installation and Configuration Supplement.
Appendix G. Notices
IBM may not offer the products, services, or features discussed in this
document in all countries. Consult your local IBM representative for
information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or
imply that only that IBM product, program, or service may be used. Any
functionally equivalent product, program, or service that does not infringe
any IBM intellectual property right may be used instead. However, it is the
user's responsibility to evaluate and verify the operation of any non-IBM
product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the
IBM Intellectual Property Department in your country or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any
other country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow
disclaimer of express or implied warranties in certain transactions; therefore,
this statement may not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will
be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s)
described in this publication at any time without notice.
All statements regarding IBM's future direction or intent are subject to change
or withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the examples
include the names of individuals, companies, brands, and products. All of
these names are fictitious and any similarity to the names and addresses used
by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information may contain sample application programs in source
language, which illustrate programming techniques on various operating
platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using,
marketing or distributing application programs conforming to the application
programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under
all conditions. IBM, therefore, cannot guarantee or imply reliability,
serviceability, or function of these programs.
Each copy or any portion of these sample programs or any derivative work
must include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM
Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All
rights reserved.
Trademarks
The following terms, which may be denoted by an asterisk (*), are trademarks
of International Business Machines Corporation in the United States, other
countries, or both.
ACF/VTAM
AISPO
AIX
AIX/6000
AIXwindows
AnyNet
APPN
AS/400
BookManager
CICS
C Set++
C/370
DATABASE 2
DataHub
DataJoiner
DataPropagator
DataRefresher
DB2
DB2 Connect
DB2 Extenders
DB2 OLAP Server
DB2 Universal Database
Distributed Relational
Database Architecture
DRDA
eNetwork
Extended Services
FFST
First Failure Support Technology
IBM
IMS
IMS/ESA
LAN Distance
MVS
MVS/ESA
MVS/XA
Net.Data
OS/2
OS/390
OS/400
PowerPC
QBIC
QMF
RACF
RISC System/6000
RS/6000
S/370
SP
SQL/DS
SQL/400
System/370
System/390
SystemView
VisualAge
VM/ESA
VSE/ESA
VTAM
WebExplorer
WIN-OS/2
Index
Special Characters
#ifdefs, C/C++ language
restrictions 613
#include macro, C/C++ language
restrictions 598
#line macros, C/C++ language
restrictions 598
Numerics
64-bit integer (BIGINT) data type
supported by DB2 Connect
Version 7 790
A
access to data consideration
DB2 Call Level Interface (DB2
CLI) 24
embedded SQL 23
JDBC 24
Microsoft specifications 25
ODBC 24
REXX 24
using Perl 25
using query products 25
ACQUIRE statement 800
activation time and triggers 488
ActiveX Data Object specification
supported in DB2 25
add database wizard 824, 825
ADD METHOD 298
ADHOC.SQC C program
listing 157
ADO specification
supported in DB2 25
AFTER triggers 488, 493
aggregating functions 378
alerts, supported by triggers 484
allocating dynamic memory in a
UDF 448
ALLOW PARALLEL clause 422
ALTER NICKNAME statement
column options 577
data type mappings 581
altering view 314
ambiguous cursors 791
APPC, handling interrupts 119
application design
access to data 23
binding 47
B
backing out changes 19
backup database wizard 824
BASIC language
implementation of OLE
automation UDF 425
BASIC types and OLE automation
types 428
BEFORE triggers 488, 493
BEGIN DECLARE SECTION 11
beginning transactions 18
BigDecimal Java type 639
BIGINT parameter to UDF 412
BIGINT SQL data type 77
C/C++ 628
COBOL 696
FORTRAN 712
Java 639
Java stored procedures
(DB2GENERAL) 770
buffered insert
advantages 560
asynchronous nature of 560
buffer size 557
closed state 561
considerations for using 560
deadlock errors 561
error detection during 560
error reporting in buffered
insert 561
group of rows 560
INSERT BUF bind option 560
long field restriction 562
not supported in CLP 562
open state 561
overview 557
partially filled 558
restrictions on using 562
savepoint consideration 189, 558
SELECT buffered insert 560
statements that close 558
transaction log
consideration 560
unique key violation 561
C
C++
consideration for stored
procedures 229
considerations for UDFs 451
type decoration
consideration 594
C++ types and OLE automation
types 428
C/C++ data types
blob 628
blob_file 628
blob_locator 628
char 628
clob 628
clob_file 628
clob_locator 628
dbclob 628
dbclob_file 628
dbclob_locator 628
double 628
float 628
long 628
long int 628
long long 628
long long int 628
null-terminated character
form 628
short 628
short int 628
call-type (continued)
contents with scalar
functions 402
contents with table
functions 403
call-type, passing to UDF 402
CALL USING DESCRIPTOR
statement (OS/400) 796
calling convention
for UDFs 410
calling from a REXX
application 730
calling the DB2 CLP from a REXX
application 730
CARDINALITY specification in table
functions 441
cascade 794
cascading triggers 494
CAST FROM clause 396
CAST FROM clause in the CREATE
FUNCTION statement 410
castability 375
casting
UDFs 391
CHAR 398
char C/C++ type 628
CHAR parameter to UDF 413
CHAR SQL data type 77, 428
C/C++ 628
COBOL 696
FORTRAN 712
Java 639
Java stored procedures
(DB2GENERAL) 770
OLE DB table function 436
REXX 726
CHAR type 672
character comparison,
overview 505
character conversion
coding SQL statements 511
coding stored procedures 513,
533
during precompiling and
binding 514
expansion 517
national language support
(NLS) 515
programming
considerations 511
rules for string conversions 533
string length overflow 532
string length overflow past data
types 533
supported code pages 516
consideration (continued)
data relationship control 27
data value control 25
DB2 application design 21
consistency
of data 17
consistency of data 17
consistent behavior and distinct
types 281
constraint mechanisms on large
objects 275
constructor functions 296
contexts
application dependencies
between 545
database dependencies
between 545
preventing deadlocks
between 546
setting in multithreaded DB2
applications 543
control information to access large
object data 350
conventions used in this book 7
CONVERT
WCHARTYPE
in stored procedures 229
coordinator node
behavior without buffered
insert 559
cost of a UDT example 382
counter for UDFs example 461
counter OLE automation UDF object
example in BASIC 474
counter OLE automation UDF object
example in C++ 476
counting and defining UDFs
example 383
counting with an OLE automation
object 384
country code
in SQLERRMC field of
SQLCA 790
creatable multi-use OLE automation
server 430
creatable single-use OLE automation
server 430
CREATE DATABASE API
SQLEDBDESC structure 508
create database wizard 825
CREATE DISTINCT TYPE statement
and castability 375
examples of using 283
to define a distinct type 282
D
data
avoiding bottlenecks when
extracting 562
data (continued)
extracting large volumes 562
data control language (DCL) 790
data definition language (DDL) 788
Data Definition Language (DDL)
issuing in savepoints 188
data manipulation language
(DML) 789
data relationship consideration
application logic 29
referential integrity 28
triggers 28
data sources in federated systems
accessing tables, views 574
invoking functions 586
mapping data types from 579
mapping DB2 functions to 586
mapping isolation levels to 578
using distributed requests to
query 583
using pass-through to query 588
data structure
allocating for stored
procedures 198
manipulating for DB2DARI
stored procedure 767
SQLEDBDESC 508
user-defined, with multiple
threads 544
data structures, declaring 11
data transfer
updating 105
data type mappings 579
ALTER NICKNAME
statement 581
CREATE TYPE MAPPING
statement 580
creating for data sources 580
creating for specific
columns 580
default 579
data types
BLOBs 349
C/C++ 627, 628, 632
character conversion
overflow 533
class data members, declaring in
C/C++ 620
CLOB in C/C++ 633
CLOBs 349
conversion
between DB2 and
COBOL 696
between DB2 and
FORTRAN 712
dereference operators
queries using 315
derived columns 176
DESCRIBE statement 801
double-byte character set
consideration 531
Extended UNIX Code
consideration 530
Extended UNIX Code
consideration with EUC
database 531
processing arbitrary
statements 152
structured types 348
support 801
descriptor handle 170
designing DB2 applications,
guidelines 21
DFT_SQLMATHWARN
configuration parameter 397
diagnostic-message, passing to
UDF 400
differences between different DB2
products 788
differences between host or AS/400
server and workstation 800, 801
differences in SQLCODEs and
SQLSTATEs 794
distinct type 375
distinct types
defining a distinct type 282
defining tables 283
manipulating
examples of 285
resolving unqualified distinct
types 282
strong typing 285
distributed environment 787
distributed requests
coding 583
optimizing 584
distributed subsection (DSS) 555
divid() UDF C program listing 453
DML (data manipulation
language) 789
Double-Byte Character Large
Objects 349
double-byte character set
Chinese (Traditional) code
sets 524
configuration parameters 520
considerations for collation 525
Japanese code sets 524
mixed code set
environments 525
E
easier maintenance using
triggers 484
EBCDIC
mixed-byte data 789
sort order 793
embedded SQL
access to data consideration 23
embedded SQL statement
comments, rules for 599
examples of 46
overview of 45
rules for, in C/C++ 599
rules for, in COBOL 683
rules for, in FORTRAN 706
syntax rules 46
embedded SQL statements
comments, rules for 683, 706
host variable, referencing in 75
encapsulation and distinct
types 281
END DECLARE SECTION 11
ending transactions 18
ending transactions implicitly 19
environment APIs
include file for C/C++ 596
include file for COBOL 681
include file for FORTRAN 703
environment, for programming 9
environment handle 170
error code 15
error detection in a buffered
insert 560
error handling
C/C++ language
precompiler 598
considerations in a partitioned
environment 569
during precompilation 51
examples (continued)
declaring BLOB locator using
COBOL 691
declaring BLOBs using
COBOL 690
declaring BLOBs using
FORTRAN 710
declaring CLOB file locator using
FORTRAN 711
declaring CLOBs using
COBOL 690
declaring CLOBs using
FORTRAN 710
declaring DBCLOBs using
COBOL 690
deferring the evaluation of a LOB
expression 359
DYNAMIC.CMD REXX program
listing 141
Dynamic.java Java program
listing 137
DYNAMIC.SQB COBOL program
listing 139
DYNAMIC.SQC C program
listing 135
extracting a document to a file
(CLOB elements in a
table) 368
inserting data into a CLOB
column 372
Java applets 647
LOBEVAL.SQB COBOL program
listing 363
LOBEVAL.SQC C program
listing 361
LOBFILE.SQB COBOL program
listing 370
LOBFILE.SQC C program
listing 369
LOBLOC.SQB COBOL program
listing 356
LOBLOC.SQC C program
listing 354
money using CREATE DISTINCT
TYPE 283
registering SQLEXEC, SQLDBS
and SQLDB2 719
registering SQLEXEC, SQLDBS
and SQLDB2 for REXX 718
resume using CREATE
DISTINCT TYPE 283
sales using CREATE TABLE 284
sample SQL declare section for
supported SQL data types 631
examples (continued)
syntax for character host
variables in FORTRAN 708,
709
use of distinct types in
UNION 290
user-defined sourced functions
on distinct types 288
using a locator to work with a
CLOB value 353
using class data members in an
SQL statement 620
using parameter markers in
search and update 162
V5SPCLI.SQC C program
listing 781
V5SPSRV.SQC C program
listing 785
Varinp.java Java program
listing 166
VARINP.SQB COBOL program
listing 168
VARINP.SQC C program
listing 164
EXCSQLSTT command 801
EXEC SQL INCLUDE SQLCA
multithreading
considerations 544
EXEC SQL INCLUDE statement,
C/C++ language restrictions 598
EXECUTE IMMEDIATE statement,
summary of 128
EXECUTE statement, summary
of 128
execution requirements for
REXX 729
exit routines, use restrictions 119
expansion of data on the host or
AS/400 server 789
EXPLAIN, prototyping utility 41
Explain Snapshot 55
EXPLSNAP bind option 55
exponentiation and defining UDFs
example 380
extended dynamic SQL statements
not supported 801
extended UNIX code (EUC)
character sets 519
Extended UNIX Code (EUC)
character conversion
overflow 532
character conversions in stored
procedures 533
character string length
overflow 533
F
faster application development using
triggers 484
federated systems
column options 576
data integrity 578
data source functions 586
data source tables, views
cataloging information
about 574
considerations,
restrictions 575
nicknames for 574
data type mappings 579
distributed requests 583
function mapping options 587
function mappings 586
introduction 573
isolation levels 578
nicknames 574
pass-through 588
server options 584
FENCED option and UDFs 448
FETCH call 403
FETCH statement
host variables 131
repeated access, technique
for 102
scroll backwards, technique
for 102
using SQLDA structure
with 146
file extensions
sample programs 743
file reference declarations in
REXX 725
file reference variables
examples of using 368
for manipulating LOBs 349
input values 366
output values 367
final call, to a UDF 402
FINAL CALL clause 403
FINAL CALL keyword 402
finalize Java method 422
find the vowel, fold the CLOB for
UDFs example 457
findvwl() UDF C program
listing 457
FIPS 127-2 standard 15
FIRST call 403
first call, to a UDF 402
fixed or varying length data types
Extended UNIX Code
consideration 532
flagger utility, used in
precompiling 51
flexibility and distinct types 281
graphic constants
Chinese (Traditional) code
sets 524
Japanese code sets 524
graphic data
Chinese (Traditional) code
sets 521, 524
Japanese code sets 521, 524
graphic data types
selecting 622
graphic host variables
C/C++ 606
COBOL 688
GRAPHIC parameter to UDF 414
GRAPHIC SQL data type
C/C++ 628
COBOL 696
FORTRAN, not supported
in 712
Java 639
Java stored procedures
(DB2GENERAL) 770
OLE DB table function 436
REXX 726
graphic strings
character conversion 518
GRAPHIC type 672
graphical objects
considerations for Java 672
GROUP BY clause
sort order 793
group of rows
in buffered insert 560
guideline
access to data 23
application logic at server 29
data relationship control 27
data value control 25
DB2 application design 21
handle
connection handle 170
descriptor handle 170
environment handle 170
statement handle 170
handlers
example 253
overview 251
hierarchy
structured types 293
holdability in SQLJ iterators 655
host or AS/400
accessing host servers 542
host variables
allocating in stored
procedures 198
class data members, handling in
C/C++ 620
clearing LOB host variables in
REXX 726
considerations for stored
procedures 229
declaring 71
in COBOL 685
in FORTRAN 707
declaring, examples of 74
declaring, rules for 71
declaring, sample programs 105
declaring as pointer to data
type 619
declaring graphic
in COBOL 688
declaring graphic in C/C++ 606
declaring in C/C++ 601
declaring structured types 348
declaring using db2dclgn 73
declaring using variable list
statement 153
definition 71
determining how to define for
use with a column 14
file reference declarations in
C/C++ 612
file reference declarations in
COBOL 691
file reference declarations in
FORTRAN 711
file reference declarations in
REXX 725
FORTRAN, overview of 707
graphic data 621
in REXX 721
initializing for stored
procedure 198
initializing in C/C++ 613
LOB data declarations in
C/C++ 608
LOB data declarations in
COBOL 689
LOB data declarations in
FORTRAN 710
LOB data in REXX 724
LOB locator declarations in
C/C++ 611
LOB locator declarations in
COBOL 690
LOB locator declarations in
FORTRAN 711
I
IBM DB2 Universal Database Project
Add-In for Microsoft Visual
C++ 30, 32
IBM DB2 Universal Database Tools
Add-In for Microsoft Visual C++,
activating 33
identity columns 176
identity sequence 504
implementing a UDF 378
implicit connect 790
IN stored procedure
parameters 199, 212
include file
C/C++ requirements for 595
COBOL requirements for 680
FORTRAN requirements for 702
SQL
COBOL 680
FORTRAN 702
SQL for C/C++ 595
SQL1252A
COBOL 682
J
Japanese and traditional Chinese
EUC code sets
COBOL considerations 699
FORTRAN considerations 715
Japanese code sets 521
C/C++ considerations 626
developing applications
using 524
Japanese EUC code sets
REXX considerations 734
Java
applet support 643
application support 642
comparison of SQLJ with
JDBC 637
comparison with other
languages 638
connection pooling 650
debugging 641
distributing and running
applets 647
distributing and running
applications 647
embedding SQL statements 45
installing JAR files 668, 669
Java (continued)
JDBC 2.0 Optional Package API
support 649
JDBC example program 644
JDBC specification 642
JNDI support 649
overview 637
overview of DB2 support
for 642
SQLCODE 641
SQLJ (Embedded SQL for Java)
calling stored procedures 660
example program using 656
host variables 660
SQLJ (Embedded SQLJ for
Java) 651
applets 652
db2profc 652
db2profp 652
declaring cursors 655
declaring iterators 655
embedding SQL statements
in 654
example clauses 654
holdability 655
positioned DELETE
statement 655
positioned UPDATE
statement 655
profconv 652
restrictions 652
returnability 655
translator 652
SQLJ specification 642
SQLMSG 641
SQLSTATE 641
stored procedures 668, 669
examples 670
Transaction API (JTA) 651
UDFs (user-defined
functions) 668, 669
examples 670
Java application
SCRATCHPAD
consideration 422
signature for UDFs 420
using graphical and large
objects 672
Java class files
CLASSPATH environment
variable 664
java_heap_sz configuration
parameter 664
jdk11_path configuration
parameter 664
K
keys
foreign 794
primary 794
L
LABEL ON statement 800
LANGLEVEL precompile option
MIA 632
SAA1 632
using SQL92E and SQLSTATE or
SQLCODE variables 635, 699,
714
LANGLEVEL SQL92E precompile
option 793
language identifier
books 817
LANGUAGE OLE clause 425
large object descriptor 349
large object value 349
latch
status with multiple threads 543
late-breaking information 818
limitations
stored procedures
(DB2DARI) 767
linking
overview of 52
linking a UDF 378
LOB data type
supported by DB2 Connect
Version 7 789
LOB locator APIs, used in UDFs
sqludf_append API 443
sqludf_create_locator API 443
sqludf_free_locator API 443
sqludf_length API 443
sqludf_substr API 443
LOB locator example program
listing 471
LOB locators
scenarios for using 447
used in UDFs 443
lob-options-clause of the CREATE
TABLE statement 351
LOBEVAL.SQB COBOL program
listing 363, 370
LOBEVAL.SQC C program
listing 361, 369
LOBLOC.SQB COBOL program
listing 356
LOBLOC.SQC C program
listing 354
LOBs (Large Objects)
and DB2 object extensions 275
considerations for Java 672
file reference variables 349
examples of using 368
input values 366
output values 367
M
macro processing for the C/C++
language 593
macros in sqludf.h 419
mail OLE automation UDF object
example in BASIC 478
manipulating large objects 275
maxdari configuration
parameter 663
maximum size for large object
columns, defining 350
member operator, C/C++
restriction 621
memory
decreasing requirement using
LOB locators 443
memory, allocating dynamic in the
UDF 448
memory allocation for unequal code
pages 526
memory size, shared for UDFs 450
message file, definition of 51
method invocation
for OLE automation UDFs 426
methods
definition 373
implementing 374
invocation operator 298
invoking 298
methods (continued)
rationale 374
registering 379
writing 379, 393
MIA 632
Microsoft Exchange, used in mail
example 478
Microsoft specifications
access to data consideration 25
ADO (ActiveX Data Object) 25
MTS (Microsoft Transaction
Server) 25
RDO (Remote Data Object) 25
Visual Basic 25
Visual C++ 25
Microsoft Transaction Server
specification
access to data consideration 25
Microsoft Visual C++
IBM DB2 Universal Database
Project Add-In 30
mixed-byte data 789
mixed code set environments
application design 525
mixed Extended UNIX Code
considerations 522
MODE DB2SQL clause 292
model for DB2 programming 20
modelling entities as independent
objects 275
money using CREATE DISTINCT
TYPE example 283
moving large objects using a file
reference variable 349
multi-byte character support
code points for special
characters 512
multi-byte code pages
Chinese (Traditional) code
sets 521, 524
Japanese code sets 521, 524
multi-byte considerations
Chinese (Traditional) code sets in
C/C++ 626
Chinese (Traditional) EUC code
sets in REXX 734
Japanese and traditional Chinese
EUC code sets
in COBOL 699
in FORTRAN 715
Japanese code sets in
C/C++ 626
Japanese EUC code sets in
REXX 734
N
national language support (NLS)
character conversion 515
code page 515
considerations 503
mixed-byte data 789
nested stored procedures 214
parameter passing 256
recursion 257
restrictions 257
returning result sets 257
SQL procedures 256
Netscape browser
installing 823
nicknames
cataloging related
information 574
considerations, restrictions 575
CREATE NICKNAME
statement 575
using with views 577
NOCONVERT
WCHARTYPE
in stored procedures 229
NOLINEMACRO
PREP option 598
nonexecutable SQL statements
DECLARE CURSOR 16
INCLUDE 16
INCLUDE SQLDA 16
normal call, to a UDF 402
NOT ATOMIC compound SQL 799
NOT DETERMINISTIC option and
UDFs 448
NOT FENCED LOB locator
UDFs 443
NOT FENCED stored procedures
considerations 233
precompiling 232
working with 231
NOT NULL CALL clause 397
NOT NULL CALL option and
UDFs 448
null-terminated character form
C/C++ type 628
null-terminator 632
NULL value
receiving, preparing for 75
numeric conversion overflows 795
numeric data types 789
numeric host variables
C/C++ 602
COBOL 685
FORTRAN 707
NUMERIC parameter to UDF 412
NUMERIC SQL data type 428
C/C++ 628
COBOL 696
FORTRAN 712
Java 639
Java stored procedures
(DB2GENERAL) 770
OLE DB table function 436
REXX 726
O
object identifier columns 295, 296
naming 304
object identifiers
choosing representation type
for 308
creating constraints on 320
generating automatically 319
object instances
for OLE automation UDFs 426
OLE DB (continued)
table functions (continued)
creating 432
defining a user mapping 436
EXTERNAL NAME
clause 434
fully qualified names 434
identifying servers 435
using connection string 433
using CONNECTSTRING
option 433
using server name 433
OLE keyword 425
OLE messaging example 478
OLE programmatic ID (progID) 425
online help 820
online information
searching 826
viewing 822
ONLY clause
restricting returned types
with 317
open state, buffered insert 561
OPENFTCH.SQB COBOL program
listing 100
OPENFTCH.SQC C program
listing 95
Openftch.sqlj Java program
listing 97
ORDER BY clause
sort order 793
OUT stored procedure
parameters 199, 212
OUTER keyword
returning subtype attributes
with 318
output and input to screen and
keyboard and UDFs 450
output file extensions, C/C++
language 594
overloading
function names 377
stored procedure names 199
owner attributes
package 792
P
package
attributes 792
creating 53
creating for compiled
applications 49
renaming 53
support to REXX
applications 729
package (continued)
timestamp errors 58
package attributes
creator 792
owner 792
qualifier 792
page-level locking 794
parameter markers 170
in functions example 387
in processing arbitrary
statements 152
programming example 162
SQLVAR entries 161
use in dynamic SQL 161
partitioned environment
error handling
considerations 569
identifying when errors
occur 570
improving performance 555
severe error considerations 569
pass-through
COMMIT statement 589
considerations, restrictions 589
SET PASSTHRU RESET
statement 589
SET PASSTHRU statement 589
SQL processing 588
passing contexts between
threads 543
PDF 818
performance
dynamic SQL caching 62
factors affecting, static SQL 62
improving
using stored procedures 194
improving in partitioned
environments 555
improving with buffered
inserts 557
improving with directed
DSS 555
improving with local
bypass 556
improving with read only
cursors 92
improving with READ ONLY
cursors 555
increasing using LOB
locators 443
large objects 351
NOT FENCED stored
procedure 231
optimizing with packages 57
passing blocks of data 552
performance (continued)
precompiling static SQL
statements 57
UDFs 374
performance advantages
with buffered insert 560
performance and distinct types 281
performance configuration
wizard 825
Perl
access to data consideration 25
PICTURE (PIC) clause in COBOL
types 696
portability 171
porting applications 787
precompile option
WCHARTYPE
NOCONVERT 232
precompiler
C/C++ #include macro 593
C/C++ character set 593
C/C++ language 621
C/C++ language debugging 598
C/C++ macro processing 593
C/C++ symbol substitution 593
C/C++ trigraph sequences 593
COBOL 679
DB2 Connect support 791
FORTRAN 701
options 49
overview of 46
support 788
supported languages 10
types of output 49
precompiling 51
accessing host or AS/400
application server through DB2
Connect 51
accessing multiple servers 51
example of 49
flagger utility 51
options, updatable cursor 93
overview of 49
supporting dynamic SQL
statements 127
PREP option
NOLINEMACRO 598
PREPARE statement
processing arbitrary
statements 152
summary of 128
support 801
preprocessor functions and the SQL
precompiler 613
prerequisites, for programming 9
Q
QSQ (DB2 Universal Database for
AS/400) 790
qualification and member operators
in C/C++ 621
qualifier attributes
different platforms 792
package 792
query products, access to data
consideration 25
QUERYOPT bind option 55
R
RAISE_ERROR built-in
function 493
RDO specification
supported in DB2 25
re-entrant
stored procedures 232
UDFs 439
re-use and UDFs 374
REAL*2 FORTRAN type 712
REXX (continued)
calling the DB2 CLP from
application 730
Chinese (Traditional)
considerations 734
clearing LOB host variables 726
cursor identifiers 720
data types supported 726
execution requirements 729
indicator variables 722, 728
initializing variables 732
Japanese considerations 734
LOB data 724
LOB file reference
declarations 725
LOB locator declarations 724
predefined variables 722
programming
considerations 718
registering routines in 718
registering SQLEXEC, SQLDBS
and SQLDB2 718
restrictions in 718
stored procedures in 732
supported SQL statements 720
REXX and C++ data types 726
REXX APIs
SQLDB2 717, 730
SQLDBS 717
SQLEXEC 717
ROLLBACK statement 11, 791
association with cursor 82
backing out changes 19
ending transactions 19
restoring data 19
rolling back changes 19
ROLLBACK TO SAVEPOINT
statement 187
ROLLBACK WORK RELEASE
not supported 801
rolling back changes 19
root types 294
row
order of, controlling 103
order of in table,
positioning 104
retrieving multiple with
cursor 92
retrieving one 63
retrieving using SQLDA 146
selecting one, with SELECT INTO
statement 63
row blocking
customizing for
performance 552
S
SAA1 632
SAFEARRAY OLE automation
type 428
sales using CREATE TABLE
example 284
sample programs
Application Program Interface
(API) 743
cross-platform 817
embedded SQL statements 743
HTML 817
Java stored procedures 663
Java UDFs 420
location of 743
savepoint, buffered insert
consideration 558
SAVEPOINT statement 187
savepoints 183
atomic compound SQL 187
blocking cursors 189
buffered inserts 189
Data Definition Language 188
nested 187
restrictions 187
SET INTEGRITY statement 187
triggers 187
XA transaction managers 190
scalar functions 378
contents of call-type
argument 402
schema-name and UDFs 377
scoped references
comparison to referential
integrity 311
scoping references 306
scratchpad, passing to UDF 395,
401
scratchpad and UDFs 422, 439
SCRATCHPAD clause 403
scratchpad considerations
for OLE automation UDFs 426
SCRATCHPAD keyword 401, 402,
422, 439
SCRATCHPAD option
for OLE automation UDFs 426
searching
online information 824, 826
section number 801
SELECT INTO statement
overview of 63
SELECT statement
association with EXECUTE
statement 128
buffered insert
consideration 560
declaring an SQLDA 143
dereference operators in 315
describing, after allocating
SQLDA 146
in DECLARE CURSOR
statement 81
inheriting privileges from
supertables 305
retrieving data a second
time 102
retrieving multiple rows with 81
scoped references in 315
support 789
typed tables 314
updating retrieved data 105
varying-list, overview of 153
self-referencing tables 794
self-referencing typed tables 309
semantic behavior of stored
objects 275
semaphores 545
sequences, description 177
serialization of data structures 544
serialization of SQL statement
execution 543
server options 584
SET CURRENT FUNCTION PATH
statement 378
SET CURRENT PACKAGESET
statement 54
SET CURRENT statement
support 801
SET PASSTHRU RESET
statement 589
SET PASSTHRU statement 589
SET SERVER OPTION
statement 585
setAsciiStream JDBC method 672
setString JDBC method 672
SQL_WCHART_CONVERT
preprocessor macro 624
SQL1252A include file
for COBOL applications 682
for FORTRAN applications 704
SQL1252B include file
for COBOL applications 682
for FORTRAN applications 704
SQL92 793
SQLADEF include file
for C/C++ applications 595
SQLAPREP include file
for C/C++ applications 595
for COBOL applications 680
for FORTRAN applications 702
SQLCA
avoiding multiple definitions 15
error reporting in buffered
insert 561
incomplete insert when error
occurs 561
multithreading
considerations 544
SQLERRMC field 790, 799
SQLERRP field 790
SQLCA_92 include file
for COBOL applications 680
for FORTRAN applications 703
SQLCA_92 structure
include file
for FORTRAN
applications 703
SQLCA_CN include file 702
SQLCA_CS include file 702
SQLCA include file
for C/C++ applications 595
for COBOL applications 680
for FORTRAN applications 702
SQLCA predefined variable 722
SQLCA.SQLERRD settings on
CONNECT 528
SQLCA structure
defining, sample programs 105
include file
for COBOL applications 680
for FORTRAN
applications 702
include file for C/C++ 595
merged multiple structures 570
multiple definitions 117
overview 116
reporting errors 570
requirements 116
sqlerrd 570
SQLERRD(6) field 570
syntax (continued)
embedded SQL statement
avoiding line breaks 600
embedded SQL statement
comments in C/C++ 599
embedded SQL statement
comments in REXX 721
embedded SQL statement in
C/C++ 599
embedded SQL statement
substitution of white space
characters 600
graphic host variable in
C/C++ 606
processing SQL statement in
REXX 719
syntax for referring to
functions 385
SYSCAT.FUNCMAPOPTIONS
catalog view 586
SYSCAT.FUNCTIONS catalog
view 587
SYSIBM.SYSPROCEDURES catalog
(OS/390) 796
SYSSTAT.FUNCTIONS catalog
view 587
system catalog
dropping view implications 314
using 795
system catalog views
prototyping utility 41
system configuration parameter for
shared memory size 450
System.err Java I/O stream 420
System.in Java I/O stream 420
System.out Java I/O stream 420
T
table
committing changes 18
data source tables 574
lob-options-clause of the CREATE
TABLE statement 351
positioning cursor at end 104
tablespace-options-clause of the
CREATE TABLE statement 351
table check constraint, data value
control consideration 26
table function 396
SQL-result argument 396
table function example 384
table functions 378
application design
considerations 441
type conversion
between SQL types and OLE
automation types 427
type decoration
in stored procedures 229
in UDFs 451
type decoration consideration
C++ 594
TYPE_ID function 316
type mapping
OLE automation types and
BASIC types 428
OLE automation types and C++
types 428
type mappings 579
dropping restrictions 313
TYPE_NAME function 316
TYPE predicate
restricting returned types
with 317
TYPE_SCHEMA function 316
typed tables
accessing subtypes in type
hierarchy 300
controlling privileges on 305
creating 304
creating subtables 299
defining relationships
between 300, 309
defining the scope of 306
definition of 299
determining hierarchy
position 305
inserting object identifiers 308
inserting objects into 306
object identifier column 304
returning subtype attributes 318
selecting data from 314
self-referencing 309
typed views
assigning scope to reference
columns in 313
body of 312
creating 311
creating on root types 311
creating on subtypes 311
types
ROWID 790
types of arguments, promotions in
UDFs 410
U
UCS-2 521
V
V5SPCLI.SQC C program
listing 781
V5SPSRV.SQC C program
listing 785
VALIDATE RUN
DB2 Connect support 791
VALUES clause
on INSERT statement 559
VARCHAR 399, 400
VARCHAR FOR BIT DATA
parameter to UDF 414
VARCHAR SQL data type 77, 428
C/C++ 628
C or C++ 633
COBOL 696
FORTRAN 712
Java 639
Java stored procedures
(DB2GENERAL) 770
OLE DB table function 436
REXX 726
VARCHAR structured form C/C++
type 628
VARGRAPHIC data 632
VARGRAPHIC parameter to
UDF 415
VARGRAPHIC SQL data type 77,
428
C/C++ 628
COBOL 696
FORTRAN 712
Java 639
Java stored procedures
(DB2GENERAL) 770
OLE DB table function 436
REXX 726
variable-length strings 789
variables
SQLCODE 635, 699, 714
SQLSTATE 635, 699, 714
variables, declaring 11
variables, predefined in REXX 722
Varinp.java Java program
listing 166
VARINP.SQB COBOL program
listing 168
VARINP.SQC C program listing 164
view
altering 314
data source views 574
data value control
consideration 27
dropping 314
dropping implications for system
catalogs 314
restrictions 314
viewing
online information 822
views
system catalogs 795
Visual Basic
supported in DB2 25
Visual C++
IBM DB2 Universal Database
Project Add-In 30
supported in DB2 25
W
warning message, truncation 77
wchar_t and sqldbchar, selecting
data types 622
wchar_t C/C++ type 628
wchar_t data type 414, 415, 417,
622
WCHARTYPE
guidelines 624
in stored procedures 229
WCHARTYPE precompiler
option 232, 623
weight, definition of 504
WHENEVER SQLERROR
CONTINUE statement 15
WHENEVER statement
caution in using with SQL
statements 15
error indicators with SQLCA 15
in error handling 117
wild moves, DB2 checking of 480
Windows code pages
DB2CODEPAGE registry
variable 509
supported code pages 509
Windows registration database
for OLE automation UDFs 425
WITH OPTIONS clause
defining column options
with 306
defining reference column
scope 306
wizards
add database 824, 825
backup database 824
completing tasks 824
configure multisite update 824
create database 825
create table 825
create table space 825
index 825
performance configuration 825
restore database 825
work environment
setting up 37
test databases, guidelines for
creating 37
X
X/Open XA Interface 549
API restrictions 551
characteristics of transaction
processing 549
CICS environment 549
COMMIT and ROLLBACK 550
cursors declared WITH
HOLD 550
DISCONNECT 549
multithreaded application 552
savepoints 190
SET CONNECTION 549
single-threaded application 552
SQL CONNECT 550
transactions 549
XA environment 551
XASerialize 552
Contacting IBM
If you have a technical problem, please review and carry out the actions
suggested by the Troubleshooting Guide before contacting DB2 Customer
Support. This guide suggests information that you can gather to help DB2
Customer Support serve you better.
For information about, or to order, any of the DB2 Universal Database
products, contact an IBM representative at a local branch office or any
authorized IBM software remarketer.
If you live in the U.S.A., then you can call one of the following numbers:
v 1-800-237-5511 for customer support
v 1-888-426-4343 to learn about available service options
Product Information
If you live in the U.S.A., then you can call one of the following numbers:
v 1-800-IBM-CALL (1-800-426-2255) or 1-800-3IBM-OS2 (1-800-342-6672) to
order products or get general information.
v 1-800-879-2755 to order publications.
http://www.ibm.com/software/data/
The DB2 World Wide Web pages provide current DB2 information
about news, product descriptions, education schedules, and more.
http://www.ibm.com/software/data/db2/library/
The DB2 Product and Service Technical Library provides access to
frequently asked questions, fixes, books, and up-to-date DB2 technical
information.
Note: This information may be in English only.
http://www.elink.ibmlink.ibm.com/pbl/pbl/
The International Publications ordering Web site provides information
on how to order books.
http://www.ibm.com/education/certify/
The Professional Certification Program from the IBM Web site
provides certification test information for a variety of IBM products,
including DB2.
ftp.software.ibm.com
Log on as anonymous. In the directory /ps/products/db2, you can
find demos, fixes, information, and tools relating to DB2 and many
other products.
comp.databases.ibm-db2, bit.listserv.db2-l
These Internet newsgroups are available for users to discuss their
experiences with DB2 products.
On CompuServe: GO IBMDB2
Enter this command to access the IBM DB2 Family forums. All DB2
products are supported through these forums.
For information on how to contact IBM outside of the United States, refer to
Appendix A of the IBM Software Support Handbook. To access this document,
go to the following Web page: http://www.ibm.com/support/, and then
select the IBM Software Support Handbook link near the bottom of the page.
Note: In some countries, IBM-authorized dealers should contact their dealer
support structure instead of the IBM Support Center.