DB2 - Application Programming & SQL Guide - Ver 7
DB2 Universal Database for OS/390 and z/OS
Application Programming
and SQL Guide
Version 7
SC26-9933-01
Note
Before using this information and the product it supports, be sure to read the
general information under “Notices” on page 949.
Determining compatibility of SQL and FORTRAN data types. . . . . . . 172
Using indicator variables . . . . . . . . . . . . . . . . . . . . 172
Handling SQL error return codes . . . . . . . . . . . . . . . . . 173
Coding SQL statements in a PL/I application . . . . . . . . . . . . . 174
Defining the SQL communication area . . . . . . . . . . . . . . . 174
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 174
Embedding SQL statements . . . . . . . . . . . . . . . . . . 175
Using host variables . . . . . . . . . . . . . . . . . . . . . 177
Declaring host variables . . . . . . . . . . . . . . . . . . . . 178
Using host structures . . . . . . . . . . . . . . . . . . . . . 181
Determining equivalent SQL and PL/I data types . . . . . . . . . . . 182
Determining compatibility of SQL and PL/I data types . . . . . . . . . 186
Using indicator variables . . . . . . . . . . . . . . . . . . . . 187
Handling SQL error return codes . . . . . . . . . . . . . . . . . 188
Coding SQL statements in a REXX application. . . . . . . . . . . . . 189
Defining the SQL communication area . . . . . . . . . . . . . . . 189
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 190
Accessing the DB2 REXX Language Support application programming interfaces . . . 190
Embedding SQL statements in a REXX procedure . . . . . . . . . . 192
Using cursors and statement names . . . . . . . . . . . . . . . 194
Using REXX host variables and data types . . . . . . . . . . . . . 194
Using indicator variables . . . . . . . . . . . . . . . . . . . . 197
Setting the isolation level of SQL statements in a REXX procedure . . . . 198
The statement LOCK TABLE . . . . . . . . . . . . . . . . . . 352
Access paths . . . . . . . . . . . . . . . . . . . . . . . . 353
LOB locks . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Relationship between transaction locks and LOB locks . . . . . . . . . 355
Hierarchy of LOB locks . . . . . . . . . . . . . . . . . . . . 356
LOB and LOB table space lock modes. . . . . . . . . . . . . . . 357
Duration of locks . . . . . . . . . . . . . . . . . . . . . . . 357
Instances when locks on LOB table space are not taken . . . . . . . . 358
The LOCK TABLE statement . . . . . . . . . . . . . . . . . . 358
Program preparation considerations . . . . . . . . . . . . . . . . . 484
Precompiling . . . . . . . . . . . . . . . . . . . . . . . . 484
Binding . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Link-editing . . . . . . . . . . . . . . . . . . . . . . . . . 485
Loading and running . . . . . . . . . . . . . . . . . . . . . 485
Restart and recovery . . . . . . . . . . . . . . . . . . . . . . 486
JCL example of a batch backout . . . . . . . . . . . . . . . . . 486
JCL example of restarting a DL/I batch job . . . . . . . . . . . . . 487
Finding the DL/I batch checkpoint ID . . . . . . . . . . . . . . . 488
General rules about predicate evaluation . . . . . . . . . . . . . . . 631
Order of evaluating predicates . . . . . . . . . . . . . . . . . . 632
Summary of predicate processing . . . . . . . . . . . . . . . . 632
Examples of predicate properties . . . . . . . . . . . . . . . . . 636
Predicate filter factors . . . . . . . . . . . . . . . . . . . . . 637
DB2 predicate manipulation . . . . . . . . . . . . . . . . . . . 642
Column correlation . . . . . . . . . . . . . . . . . . . . . . 645
Using host variables efficiently . . . . . . . . . . . . . . . . . . . 648
Using REOPT(VARS) to change the access path at run time . . . . . . 648
Rewriting queries to influence access path selection. . . . . . . . . . 649
Writing efficient subqueries . . . . . . . . . . . . . . . . . . . . 652
Correlated subqueries . . . . . . . . . . . . . . . . . . . . . 653
Noncorrelated subqueries . . . . . . . . . . . . . . . . . . . 654
Subquery transformation into join. . . . . . . . . . . . . . . . . 655
Subquery tuning . . . . . . . . . . . . . . . . . . . . . . . 657
| Using scrollable cursors efficiently . . . . . . . . . . . . . . . . . 658
Writing efficient queries on views with UNION operators . . . . . . . . . 659
Special techniques to influence access path selection . . . . . . . . . . 660
Obtaining information about access paths . . . . . . . . . . . . . 661
Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS 661
| Fetching a limited number of rows: FETCH FIRST n ROWS ONLY . . . . 663
Reducing the number of matching columns . . . . . . . . . . . . . 664
Adding extra local predicates . . . . . . . . . . . . . . . . . . 665
Creating indexes for efficient star schemas . . . . . . . . . . . . . 666
Rearranging the order of tables in a FROM clause . . . . . . . . . . 668
Updating catalog statistics . . . . . . . . . . . . . . . . . . . 668
Using a subsystem parameter . . . . . . . . . . . . . . . . . . 670
Chapter 29. Programming for the call attachment facility (CAF) . . . . . 733
Call attachment facility capabilities and restrictions . . . . . . . . . . . 733
Capabilities when using CAF . . . . . . . . . . . . . . . . . . 733
CAF requirements . . . . . . . . . . . . . . . . . . . . . . 734
How to use CAF . . . . . . . . . . . . . . . . . . . . . . . . 736
Summary of connection functions . . . . . . . . . . . . . . . . 737
Accessing the CAF language interface. . . . . . . . . . . . . . . 739
General properties of CAF connections . . . . . . . . . . . . . . 740
CAF function descriptions . . . . . . . . . . . . . . . . . . . 741
CONNECT: Syntax and usage . . . . . . . . . . . . . . . . . . 743
OPEN: Syntax and usage . . . . . . . . . . . . . . . . . . . 747
CLOSE: Syntax and usage . . . . . . . . . . . . . . . . . . . 749
DISCONNECT: Syntax and usage . . . . . . . . . . . . . . . . 750
TRANSLATE: Syntax and usage . . . . . . . . . . . . . . . . . 751
Summary of CAF behavior . . . . . . . . . . . . . . . . . . . 753
Sample scenarios . . . . . . . . . . . . . . . . . . . . . . . 754
A single task with implicit connections . . . . . . . . . . . . . . . 754
A single task with explicit connections . . . . . . . . . . . . . . . 754
Several tasks . . . . . . . . . . . . . . . . . . . . . . . . 754
Exits from your application . . . . . . . . . . . . . . . . . . . . 755
Attention exits . . . . . . . . . . . . . . . . . . . . . . . . 755
Recovery routines . . . . . . . . . . . . . . . . . . . . . . 755
Error messages and dsntrace . . . . . . . . . . . . . . . . . . . 756
CAF return codes and reason codes . . . . . . . . . . . . . . . . 756
Subsystem support subcomponent codes (X'00F3') . . . . . . . . . . 757
Program examples . . . . . . . . . . . . . . . . . . . . . . . 757
Sample JCL for using CAF . . . . . . . . . . . . . . . . . . . 757
Sample assembler code for using CAF . . . . . . . . . . . . . . 757
Loading and deleting the CAF language interface. . . . . . . . . . . 758
Establishing the connection to DB2 . . . . . . . . . . . . . . . . 758
Checking return codes and reason codes. . . . . . . . . . . . . . 760
Using dummy entry point DSNHLI . . . . . . . . . . . . . . . . 763
Variable declarations . . . . . . . . . . . . . . . . . . . . . 764
Example DB2 REXX application . . . . . . . . . . . . . . . . . . 866
Sample COBOL program using DRDA access . . . . . . . . . . . . . 880
Sample COBOL program using DB2 private protocol access . . . . . . . 888
Examples of using stored procedures . . . . . . . . . . . . . . . . 894
Calling a stored procedure from a C program . . . . . . . . . . . . 895
Calling a stored procedure from a COBOL program . . . . . . . . . . 898
Calling a stored procedure from a PL/I program . . . . . . . . . . . 901
C stored procedure: GENERAL . . . . . . . . . . . . . . . . . 903
C stored procedure: GENERAL WITH NULLS . . . . . . . . . . . . 905
COBOL stored procedure: GENERAL . . . . . . . . . . . . . . . 907
COBOL stored procedure: GENERAL WITH NULLS. . . . . . . . . . 910
PL/I stored procedure: GENERAL . . . . . . . . . . . . . . . . 912
PL/I stored procedure: GENERAL WITH NULLS . . . . . . . . . . . 913
Appendix J. Summary of changes to DB2 for OS/390 and z/OS Version 7 945
Enhancements for managing data . . . . . . . . . . . . . . . . . 945
Enhancements for reliability, scalability, and availability. . . . . . . . . . 945
Easier development and integration of e-business applications . . . . . . . 946
Improved connectivity . . . . . . . . . . . . . . . . . . . . . . 947
Features of DB2 for OS/390 and z/OS . . . . . . . . . . . . . . . . 948
Migration considerations . . . . . . . . . . . . . . . . . . . . . 948
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 953
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 971
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
About this book
This book discusses how to design and write application programs that access
DB2® for OS/390® (DB2), a highly flexible relational database management system
(DBMS).
Important
In this version of DB2 for OS/390 and z/OS, some utility functions are
available as optional products. You must separately order and purchase a
license to such utilities, and discussion of those utility functions in this
publication is not intended to otherwise imply that you have a license to them.
When referring to a DB2 product other than DB2 for OS/390 and z/OS, this book
uses the product’s full name to avoid ambiguity.
If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
If choosing one of the items is optional, the entire stack appears below the main
path.
If one of the items is the default, it appears above the main path and the
remaining choices are shown below.
v An arrow returning to the left, above the main line, indicates an item that can be
repeated.
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
v Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in all lowercase letters (for example, column-name).
They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
http://www.ibm.com/software/db2os390
The Web site has a feedback page that you can use to send comments.
v Complete the readers’ comment form at the back of the book and return it by
mail, by fax (800-426-7773 for the United States and Canada), or by giving it to
an IBM representative.
For more advanced topics on using SELECT statements, see “Chapter 4. Using
subqueries” on page 43, “Chapter 19. Planning to access distributed data” on
page 369, and Chapter 4 of DB2 SQL Reference.
Examples of SQL statements illustrate the concepts that this chapter discusses.
Consider developing SQL statements similar to these examples and then executing
them dynamically using SPUFI or Query Management Facility (QMF).
Result tables
The data retrieved through SQL is always in the form of a table, which is called a
result table. Like the tables from which you retrieve the data, a result table has rows
and columns. A program fetches this data one row at a time.
Example: SELECT statement: This SELECT statement retrieves the last name,
first name, and phone number of employees in department D11 from the sample
employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8710.EMP
WHERE WORKDEPT = 'D11'
ORDER BY LASTNAME;
The result table displays in this form after SPUFI fetches and formats it. The format
of your results might be different.
Data types
When you create a DB2 table, you define each column to have a specific data type.
The data type can be a built-in data type or a distinct type. This section discusses
built-in data types. For information on distinct types, see “Chapter 15. Creating and
using distinct types” on page 301. The data type of a column determines what you
can and cannot do with it. When you perform operations on columns, the data must
be compatible with the data type of the referenced column. For example, you
cannot insert character data, like a last name, into a column whose data type is
numeric. Similarly, you cannot compare columns containing incompatible data
types.
For more detailed information on each data type, see Chapter 2 of DB2 SQL
Reference.
Table 1 on page 5 shows whether operands of any two data types are compatible
(Yes) or incompatible (No).
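For example, a minimal sketch of a CREATE TABLE statement that assigns a different
built-in data type to each column (the table and column names here are hypothetical
and are not part of the sample database):
CREATE TABLE MYDEPT
  (DEPTNO   CHAR(3)     NOT NULL,     -- fixed-length character data
   DEPTNAME VARCHAR(36) NOT NULL,     -- varying-length character data
   BUDGET   DECIMAL(9,2),             -- numeric data
   CREATED  DATE);                    -- datetime data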
Because the example does not specify a WHERE clause, the statement retrieves
data from all rows.
The dashes for MGRNO and LOCATION in the result table indicate null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions.
You can use SELECT * in static SQL, but this is not recommended; if you add a
column to the table to which SELECT * refers, the program might reference
columns for which you have not defined receiving host variables. For more
information on host variables, see “Accessing data using host variables and host
structures” on page 67.
If you list the column names in a static SELECT statement instead of using an
asterisk, you can avoid the problem just mentioned. You can also see the
relationship between the receiving host variables and the columns in the result
table.
Example: SELECT column-name: This SQL statement selects only the MGRNO
and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO
FROM DSN8710.DEPT;
With a single SELECT statement, you can select data from one column or as many
as 750 columns.
For example, if you want to execute a DB2 built-in function on host variable, you
can use an SQL statement like this:
SELECT RAND(:HRAND)
FROM SYSIBM.SYSDUMMY1;
If you want to order the rows of data in the result table, use the ORDER BY clause
described in “Putting the rows in order: ORDER BY” on page 9.
Example: CREATE VIEW with AS clause: You can specify result column names in
the select-clause of a CREATE VIEW statement. You do not need to supply the
column list of CREATE VIEW, because the AS keyword names the derived column.
The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS
SELECT EMPNO,SALARY+BONUS+COMM AS TOTAL_SAL
FROM DSN8710.EMP;
Example: UNION ALL with AS clause: You can use the AS clause to give the
same name to corresponding columns of tables in a union. The third result column
from the union of the two tables has the name TOTAL_VALUE, even though it
contains data derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name
in the first and second result tables, and are combined in the union of the two result
tables:
STATUS      PARTNO TOTAL_VALUE
----------- ------ -----------
On hand     00557       345.60
Ordered     00557       150.50
  .
  .
  .
For information on unions, see “Merging lists of values: UNION” on page 12.
Example: FROM clause with AS clause: Use the AS clause in a FROM clause to
assign a name to a derived column that you want to refer to in a GROUP BY
clause. Using the AS clause in the first SELECT clause causes an error, because
the names assigned in the AS clause do not yet exist when the GROUP BY
executes. However, you can use an AS clause of a subselect in the outer GROUP
BY clause, because the subselect is at a lower level than the GROUP BY that
references the name. This SQL statement names HIREYEAR in the nested table
expression, which lets you use the name of that result column in the GROUP BY
clause:
SELECT HIREYEAR, AVG(SALARY)
FROM (SELECT YEAR(HIREDATE) AS HIREYEAR, SALARY
FROM DSN8710.EMP) AS NEWEMP
GROUP BY HIREYEAR;
If a search condition contains a column of a distinct type, the value to which that
column is compared must be of the same distinct type, or you must cast the value
to the distinct type. See “Chapter 15. Creating and using distinct types” on page 301
for more information.
The next sections illustrate different comparison operators that you can use in a
predicate in a WHERE clause. The following table lists the comparison operators.
Table 2. Comparison operators used in conditions
Type of comparison               Specified with...  Example
Equal to null                    IS NULL            PHONENO IS NULL
Equal to                         =                  DEPTNO = 'X01'
Not equal to                     <>                 DEPTNO <> 'X01'
Less than                        <                  AVG(SALARY) < 30000
Less than or equal to            <=                 AGE <= 25
Not less than                    >=                 AGE >= 21
Greater than                     >                  SALARY > 2000
Greater than or equal to         >=                 SALARY >= 5000
Not greater than                 <=                 SALARY <= 5000
Similar to another value         LIKE               NAME LIKE '%SMITH%' or STATUS LIKE 'N_'
At least one of two conditions   OR                 HIREDATE < '1965-01-01' OR SALARY < 16000
Both of two conditions           AND                HIREDATE < '1965-01-01' AND SALARY < 16000
Between two values               BETWEEN            SALARY BETWEEN 20000 AND 40000
Equals a value in a set          IN (X, Y, Z)       DEPTNO IN ('B01', 'C01', 'D01')
You can also search for rows that do not satisfy one of the above conditions, by
using the NOT keyword before the specified condition.
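For example, a sketch of a query against the sample employee table that uses NOT to
negate a BETWEEN condition (the salary range is arbitrary):
SELECT EMPNO, LASTNAME, SALARY
  FROM DSN8710.EMP
  WHERE NOT (SALARY BETWEEN 20000 AND 40000);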
You can list the rows in ascending or descending order. Null values appear last in
an ascending sort and first in a descending sort.
Example: ORDER BY clause with a column name as the sort key: Retrieve the
employee numbers, last names, and hire dates of employees in department A00 in
ascending order of hire dates:
SELECT EMPNO, LASTNAME, HIREDATE
FROM DSN8710.EMP
WHERE WORKDEPT = 'A00'
ORDER BY HIREDATE ASC;
Example: ORDER BY clause with an expression as the sort key: Retrieve the
employee numbers, salaries, commissions, and total compensation (salary plus
commission) for employees with a total compensation of greater than 40000. Order
the results by total compensation:
SELECT EMPNO, SALARY, COMM, SALARY+COMM AS "TOTAL COMP"
FROM DSN8710.EMP
WHERE SALARY+COMM > 40000
ORDER BY SALARY+COMM;
Except for the columns named in the GROUP BY clause, the SELECT statement
must specify any other selected columns as an operand of one of the column
functions.
If a column you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
When it is used, the GROUP BY clause follows the FROM clause and any WHERE
clause, and precedes the ORDER BY clause.
You can also group the rows by the values of more than one column. For example,
the following statement finds the average salary for men and women in departments
A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
FROM DSN8710.EMP
WHERE WORKDEPT IN ('A00', 'C01')
GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and next (within each department)
by sex before DB2 derives the average SALARY value for each group.
Compare the preceding example with the second example shown in “Summarizing
group values: GROUP BY” on page 10. The HAVING COUNT(*) > 1 clause ensures
that only groups containing more than one row appear in the result.
The HAVING clause tests a property of the group. For example, you could use it to
retrieve the average salary and minimum education level of women in each
department in which all female employees have an education level greater than or
equal to 16. Assuming you only want results from departments A00 and D11, the
following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY,
MIN(EDLEVEL) AS MIN_EDLEVEL
FROM DSN8710.EMP
WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
GROUP BY WORKDEPT
HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow
the GROUP BY clause. A function in a HAVING clause can include DISTINCT if you
have not used DISTINCT anywhere else in the same SELECT statement. You can
also connect multiple predicates in a HAVING clause with AND and OR, and you
can use NOT for any predicate of a search condition.
When you use the UNION statement, the SQLNAME field of the SQLDA contains
the column names of the first operand.
If you have an ORDER BY clause, it must appear after the last SELECT statement
that is part of the union. In this example, the first column of the final result table
determines the final order of the rows.
# Avoiding decimal arithmetic errors: For static SQL statements, the simplest way
# to avoid a division error is to override DEC31 rules by specifying the precompiler
# option DEC(15). In some cases it is possible to avoid a division error by specifying
# D31.s. That reduces the probability of errors for statements embedded in the
# program. (The number s is between 1 and 9 and represents the minimum scale to
# be used for division operations.)
If the dynamic SQL statements have bind, define, or invoke behavior and the value
of the installation option for USE FOR DYNAMICRULES on panel DSNTIPF is NO,
# you can use the precompiler option DEC(15), DEC15, or D15.s to override DEC31
rules.
For a dynamic statement, or for a single static statement, use the DECIMAL scalar
function to specify a precision and scale for the result that cause no errors.
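For example, a sketch that casts both operands of a division with the DECIMAL
function (shown with the sample employee table; the precision and scale are
illustrative only):
SELECT DECIMAL(SALARY,15,2) / DECIMAL(COMM,15,2)
  FROM DSN8710.EMP
  WHERE COMM > 0;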
For a dynamic statement, before you execute the statement, set the value of
# special register CURRENT PRECISION to DEC15 or D15.s.
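For example, a sketch of such a SET statement:
SET CURRENT PRECISION = 'DEC15';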
Even if you use DEC31 rules, multiplication operations can sometimes cause
overflow because the precision of the product is greater than 31. To avoid overflow
from multiplication of large numbers, use the MULTIPLY_ALT built-in function
instead of the multiplication operator.
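For example, a sketch that uses MULTIPLY_ALT in place of the * operator (shown with
columns from the sample employee table; the technique matters most when the operands
have large precision and scale):
SELECT MULTIPLY_ALT(SALARY, BONUS)
  FROM DSN8710.EMP;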
The contents of the DB2 system catalog tables can be a useful reference tool when
you begin to develop an SQL statement or an application program.
If the table about which you display column information includes LOB or ROWID
columns, the LENGTH field for those columns contains the number of bytes those
columns occupy in the base table, rather than the length of the LOB or ROWID data.
To determine the maximum length of data for a LOB or ROWID column, include the
LENGTH2 column in your query. For example:
SELECT NAME, COLTYPE, LENGTH, LENGTH2
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = 'EMP_PHOTO_RESUME'
AND TBCREATOR = 'DSN8710';
You must separate each column description from the next with a comma, and
enclose the entire list of column descriptions in parentheses.
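For example, a sketch of a CREATE TABLE statement for a work table like YDEPT (the
column definitions follow the sample department table; the original figure is not
reproduced in this excerpt):
CREATE TABLE YDEPT
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6)             ,
   ADMRDEPT CHAR(3)     NOT NULL,
   LOCATION CHAR(16)            );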
Each example shown in this chapter assumes you logged on using your own
authorization ID. The authorization ID qualifies the name of each object you create.
For example, if your authorization ID is SMITH, and you create table YDEPT, the
name of the table is SMITH.YDEPT. If you want to access table DSN8710.DEPT,
you must refer to it by its complete name. If you want to access your own table
YDEPT, you need only to refer to it as “YDEPT”.
If you want DEPTNO to be a primary key as in the sample table, explicitly define
the key. Use an ALTER TABLE statement:
ALTER TABLE YDEPT
PRIMARY KEY(DEPTNO);
You can use an INSERT statement with a SELECT clause to copy rows from one
table to another. The following statement copies all of the rows from
DSN8710.DEPT to your own YDEPT work table.
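A minimal sketch of such a statement, assuming that YDEPT has the same column layout
as DSN8710.DEPT:
INSERT INTO YDEPT
  SELECT *
    FROM DSN8710.DEPT;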
For information on the INSERT statement, see “Modifying DB2 data” on page 25.
This statement also creates a referential constraint between the foreign key in
YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all
phone numbers to unique numbers.
If you want to change a table definition after you create it, use the statement ALTER
TABLE.
If you want to change a table name after you create it, use the statement RENAME
TABLE. For details on the ALTER TABLE and RENAME TABLE statements, see
Chapter 5 of DB2 SQL Reference. You cannot drop a column from a table or
change a column definition. However, you can add and drop constraints on columns
in a table.
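For example, sketches of each statement (the constraint and the table names used
for the rename are hypothetical and are not part of the examples in this chapter):
ALTER TABLE YEMP
  ADD CONSTRAINT PHONEOK CHECK (PHONENO <> '0000');
RENAME TABLE YACTIVITY TO YACTIVITY_OLD;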
Example: You can also create a definition by copying the definition of a base table:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
The SQL statements in the previous examples create identical definitions, even
though table PROD contains two columns, DESCRIPTION and CURDATE, that are
defined as NOT NULL WITH DEFAULT. Because created temporary tables do not
support WITH DEFAULT, DB2 changes the definitions of DESCRIPTION and
CURDATE to NOT NULL when you use the second method to define TEMPPROD.
After you execute one of the two CREATE statements, the definition of TEMPPROD
exists, but no instances of the table exist. To drop the definition of TEMPPROD, you
must execute this statement:
DROP TABLE TEMPPROD;
An instance of a created temporary table exists at the current server until one of the
following actions occurs:
v The remote server connection under which the instance was created terminates.
v The unit of work under which the instance was created completes.
When you execute a ROLLBACK statement, DB2 deletes the instance of the
created temporary table. When you execute a COMMIT statement, DB2 deletes
the instance of the created temporary table unless a cursor for accessing the
created temporary table is defined WITH HOLD and is open.
v The application process ends.
For example, suppose that you create a definition of TEMPPROD and then run an
application that contains these statements:
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TEMPPROD;
EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
EXEC SQL OPEN C1;
  .
  .
  .
EXEC SQL COMMIT;
When you execute the INSERT statement, DB2 creates an instance of TEMPPROD
and populates that instance with rows from table PROD. When the COMMIT
statement is executed, DB2 deletes all rows from TEMPPROD. If, however, you
change the declaration of C1 to:
EXEC SQL DECLARE C1 CURSOR WITH HOLD
FOR SELECT * FROM TEMPPROD;
DB2 does not delete the contents of TEMPPROD until the application ends
because C1, a cursor defined WITH HOLD, is open when the COMMIT statement is
executed. In either case, DB2 drops the instance of TEMPPROD when the
application ends.
Before you can define declared temporary tables, you must create a special
database and table spaces for them. You do that by executing the CREATE
DATABASE statement with the AS TEMP clause, and then creating segmented
table spaces in that database. A DB2 subsystem can have only one database for
declared temporary tables, but that database can contain more than one table
space.
Example: These statements create a database and table space for declared
temporary tables:
CREATE DATABASE DTTDB AS TEMP;
CREATE TABLESPACE DTTTS IN DTTDB
SEGSIZE 4;
You can define a declared temporary table in any of the following ways:
v Specify all the columns in the table.
Unlike columns of created temporary tables, columns of declared temporary
tables can include the WITH DEFAULT clause.
v Use a LIKE clause to copy the definition of a base table, created temporary
table, or view.
If the base table or created temporary table that you copy has identity columns,
you can specify that the corresponding columns in the declared temporary table
are also identity columns. Do that by specifying the INCLUDING IDENTITY
COLUMN ATTRIBUTES clause when you define the declared temporary table.
v Use a fullselect to choose specific columns from a base table, created temporary
table, or view.
If the base table, created temporary table, or view from which you select columns
has identity columns, you can specify that the corresponding columns in the
declared temporary table are also identity columns. Do that by specifying the
INCLUDING IDENTITY COLUMN ATTRIBUTES clause when you define the
declared temporary table.
DB2 creates an empty instance of a declared temporary table when it executes the
DECLARE GLOBAL TEMPORARY TABLE statement. You can populate the
declared temporary table using INSERT statements, modify the table using
searched or positioned UPDATE or DELETE statements, and query the table using
SELECT statements. You can also create indexes on the declared temporary table.
For example, suppose that you execute these statements in an application program:
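(The original statements are not reproduced in this excerpt; the following is a
minimal sketch that is consistent with the description below, assuming that
TEMPPROD is declared from BASEPROD with ON COMMIT PRESERVE ROWS.)
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
  LIKE BASEPROD
  ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD
  SELECT * FROM BASEPROD;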
  .
  .
  .
EXEC SQL COMMIT;
When DB2 executes the DECLARE GLOBAL TEMPORARY TABLE statement, DB2
creates an empty instance of TEMPPROD. The INSERT statement populates that
instance with rows from table BASEPROD. The qualifier, SESSION, must be
specified in any statement that references TEMPPROD. When DB2 executes the
COMMIT statement, DB2 keeps all rows in TEMPPROD because TEMPPROD is
defined with ON COMMIT PRESERVE ROWS. When the program ends, DB2 drops
TEMPPROD.
Use the DROP TABLE statement with care: Dropping a table is NOT equivalent
to deleting all its rows. When you drop a table, you lose more than its data
and its definition. You lose all synonyms, views, indexes, and referential and check
constraints associated with that table. You also lose all authorities granted on the
table.
For more information on the DROP statement, see Chapter 5 of DB2 SQL
Reference.
Use the CREATE VIEW statement to define a view and give the view a name, just
as you do for a table.
CREATE VIEW VDEPTM AS
SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
FROM DSN8710.DEPT, DSN8710.EMP
WHERE DSN8710.EMP.EMPNO = DSN8710.DEPT.MGRNO;
This view shows each department manager’s name with the department data in the
DSN8710.DEPT table.
When you create a view, you can reference the USER and CURRENT SQLID
special registers in the CREATE VIEW statement. When referencing the view, DB2
uses the value of the USER or CURRENT SQLID that belongs to the user of the
SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator of
the view. In other words, a reference to a special register in a view definition refers
to its run-time value.
You can use views to limit access to certain kinds of data, such as salary
information. You can also use views to do the following:
v Make a subset of a table’s data available to an application. For example, a view
based on the employee table might contain rows for a particular department only.
v Combine columns from two or more tables and make the combined data
available to an application. By using a SELECT statement that matches values in
one table with those in another table, you can create a view that presents data
from both tables. However, you can only select data from this type of view. You
cannot update, delete, or insert data using a view that joins two or more tables.
| v Combine rows from two or more tables and make the combined data available to
| an application. By using two or more subselects that are connected by UNION or
| UNION ALL operators, you can create a view that presents data from several
| tables. However, you can only select data from this type of view. You cannot
| update, delete, or insert data using a view that contains UNION operations.
v Present computed data, and make the resulting data available to an application.
You can compute such data using any function or operation that you can use in a
SELECT statement.
In either case, for every row you insert, you must provide a value for any column
that does not have a default value. For a column that meets one of these
conditions, you can specify DEFAULT to tell DB2 to insert the default value for that
column:
v Is nullable.
v Is defined with a default value.
v Has data type ROWID. ROWID columns always have default values.
v Is an identity column. Identity columns always have default values.
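For example, a sketch of an INSERT statement that specifies DEFAULT for two
nullable columns of the YDEPT work table (the inserted values are illustrative):
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
  VALUES ('E41', 'NEW SERVICES', DEFAULT, 'E01', DEFAULT);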
The values that you can insert into a ROWID column or identity column depend on
whether the column is defined with GENERATED ALWAYS or GENERATED BY
DEFAULT. See “Inserting data into a ROWID column” on page 27 and “Inserting
data into an identity column” on page 27 for more information.
You can name all columns for which you are providing values. Alternatively, you can
omit the column name list.
For static insert statements, it is a good idea to name all columns for which you are
providing values because:
v Your insert statement is independent of the table format. (For example, you do
not have to change the statement when a column is added to the table.)
v You can verify that you are giving the values in order.
v Your source statements are more self-descriptive.
If you do not name the columns in a static insert statement, and a column is added
to the table being inserted into, an error can occur if the insert statement is
rebound. An error will occur after any rebind of the insert statement unless you
change the insert statement to include a value for the new column. This is true,
even if the new column has a default value.
For example,
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
VALUES ('E31', 'DOCUMENTATION', '000010', 'E01', ' ');
After inserting a new department row into your YDEPT table, you can use a
SELECT statement to see what you have loaded into the table. This SQL
statement:
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE 'E%'
ORDER BY DEPTNO;
shows you all the new department rows that you have inserted:
DEPTNO DEPTNAME MGRNO ADMRDEPT LOCATION
====== ==================================== ====== ======== ===========
E01 SUPPORT SERVICES 000050 A00 -----------
E11 OPERATIONS 000090 E01 -----------
E21 SOFTWARE SUPPORT 000100 E01 -----------
E31 DOCUMENTATION 000010 E01 -----------
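The CREATE TABLE statement for the table that the next example uses is not shown in
this excerpt; a sketch that is consistent with the result table below:
CREATE TABLE TELE
  (NAME2 VARCHAR(15) NOT NULL,
   NAME1 VARCHAR(12) NOT NULL,
   PHONE CHAR(4));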
This statement copies data from DSN8710.EMP into the newly created table:
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8710.EMP
WHERE WORKDEPT = 'D21';
The two previous statements create and fill a table, TELE, that looks like this:
NAME2 NAME1 PHONE
=============== ============ =====
PULASKI EVA 7831
JEFFERSON JAMES 2094
MARINO SALVATORE 3780
SMITH DANIEL 0961
JOHNSON SYBIL 8953
PEREZ MARIA 9001
MONTEVERDE ROBERT 3780
The INSERT statement fills the newly created table with data selected from the
DSN8710.EMP table: the names and phone numbers of employees in Department
D21.
Before you insert data into a ROWID column, you must know how the ROWID
column is defined. ROWID columns can be defined as GENERATED ALWAYS or
GENERATED BY DEFAULT. GENERATED ALWAYS means that DB2 generates a
value for the column, and you cannot insert data into that column. If the column is
defined as GENERATED BY DEFAULT, you can insert a value, and DB2 provides a
default value if you do not supply one. For example, suppose that tables T1 and T2
have two columns: an integer column and a ROWID column. For the following
statement to execute successfully, ROWIDCOL2 must be defined as GENERATED
BY DEFAULT.
INSERT INTO T2 (INTCOL2,ROWIDCOL2)
SELECT INTCOL1, ROWIDCOL1 FROM T1;
Before you insert data into an identity column, you must know whether the column
is defined as GENERATED ALWAYS or GENERATED BY DEFAULT. If you try to
insert a value into an identity column that is defined as GENERATED ALWAYS, the
insert operation fails.
| The values that DB2 generates for an identity column depend on how the column is
| defined. The START WITH parameter determines the first value that DB2
| generates. The MINVALUE and MAXVALUE parameters determine the minimum
| and maximum values that DB2 generates. The CYCLE or NO CYCLE parameter
| determines whether DB2 wraps values when it has generated all values between
| the START WITH value and MAXVALUE, if the values are ascending, or between
| the START WITH value and MINVALUE, if the values are descending.
| Identity columns that are defined with GENERATED ALWAYS and NO CYCLE are
| guaranteed to have unique values. For identity columns that are defined as
| GENERATED BY DEFAULT and NO CYCLE, only the values that DB2 generates
| are guaranteed to be unique among each other. To guarantee unique values in an
| identity column, you need to create a unique index on the identity column.
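The table definition for the example that follows is not reproduced here. A sketch
that is consistent with the values described below (an identity column that starts
at -1, increments by 1, cycles, and has MINVALUE -3 and MAXVALUE 3):
CREATE TABLE T1
  (CHARCOL1  CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
     (START WITH -1, INCREMENT BY 1,
      MINVALUE -3, MAXVALUE 3, CYCLE));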
| Now suppose that you execute the following INSERT statement six times:
| INSERT INTO T1 (CHARCOL1) VALUES ('A');
| When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1
| until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for
| the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this
| after the six INSERTs are executed:
| CHARCOL1 IDENTCOL1
| ======== =========
| A -1
| A 0
| A 1
| A 2
| A 3
| A -3
Examples: This statement inserts information about a new employee into the
YEMP table. Because YEMP has a foreign key WORKDEPT referencing the
primary key DEPTNO in YDEPT, the value inserted for WORKDEPT (E31) must be
a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31',
'5678', '1983-01-01', 'MANAGER', 16, 'M', '1943-07-10', 24000,
500, 1900);
The following statement also inserts a row into the YEMP table. However, the
statement does not specify a value for every column. Because the unspecified
columns allow nulls, DB2 inserts null values into the columns not specified.
Because YEMP has a foreign key WORKDEPT referencing the primary key
DEPTNO in YDEPT, the value inserted for WORKDEPT (D11) must be a value of
DEPTNO in YDEPT or null.
INSERT INTO YEMP
(EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
The SET clause names the columns that you want to update and provides the
values you want to assign to those columns. You can replace a column value with
any of the following items:
v A null value
The column to which you assign the null value must not be defined as NOT
NULL.
v An expression
An expression can be any of the following items:
– A column
– A constant
| – A fullselect that returns a scalar or a row
– A host variable
– A special register
If you omit the WHERE clause, DB2 updates every row in the table or view with the
values you supply.
If DB2 finds an error while executing your UPDATE statement (for instance, an
update value that is too large for the column), it stops updating and returns error
codes in the SQLCODE and SQLSTATE host variables or related fields in the
SQLCA. No rows in the table change (rows already changed, if any, are restored to
their previous values). If the UPDATE statement is successful, SQLERRD(3) is set
to the number of rows updated.
Examples: The following statement supplies a missing middle initial and changes
the job for employee 000200.
UPDATE YEMP
SET MIDINIT = 'H', JOB = 'FIELDREP'
WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a $400 raise. The
statement can update several rows.
UPDATE YEMP
SET SALARY = SALARY + 400.00
WHERE WORKDEPT = 'D11';
The following statement sets the salary and bonus for employee 000190 to the
average salary and minimum bonus for all employees.
UPDATE YEMP
SET (SALARY, BONUS) =
(SELECT AVG(SALARY), MIN(BONUS)
FROM EMP)
WHERE EMPNO = '000190';
You can use DELETE to remove all rows from a created temporary table or
declared temporary table. However, you can use DELETE with a WHERE clause to
remove only selected rows from a declared temporary table.
This DELETE statement deletes each row in the YEMP table that has an employee
number 000060.
DELETE FROM YEMP
WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that
meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting data
and returns error codes in the SQLCODE and SQLSTATE host variables or related
fields in the SQLCA. The data in the table does not change.
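For example, a statement like the following sketch, which omits the WHERE clause,
DELETE FROM YDEPT;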
deletes every row in the YDEPT table. If the statement executes, the table
continues to exist (that is, you can insert rows into it) but it is empty. All existing
views and authorizations on the table remain intact when using DELETE. By
comparison, using DROP TABLE drops all views and authorizations, which can
invalidate plans and packages. For information on the DROP statement, see
“Dropping tables: DROP TABLE” on page 23.
DB2 supports these types of joins: inner join, left outer join, right outer join, and full
outer join.
You can specify joins in the FROM clause of a query: Figure 2 below shows the
ways to combine tables using outer join functions.
The result table contains data joined from all of the tables, for rows that satisfy the
search conditions.
The result columns of a join have names if the outermost SELECT list refers to
base columns. But, if you use a function (such as COALESCE or VALUE) to build a
column of the result, then that column does not have a name unless you use the
AS clause in the SELECT list.
To distinguish the different types of joins, the examples in this section use the
following two tables:
The PARTS table                       The PRODUCTS table
PART    PROD# SUPPLIER                PROD# PRODUCT     PRICE
======= ===== ============            ===== =========== =====
WIRE    10    ACWF                    505   SCREWDRIVER  3.70
OIL     160   WESTERN_CHEM            30    RELAY        7.55
MAGNETS 10    BATEMAN                 205   SAW         18.90
PLASTIC 30    PLASTIK_CORP            10    GENERATOR   45.75
BLADES  205   ACE_STEEL
In the simplest type of inner join, the join condition is column1=column2. For
example, you can join the PARTS and PRODUCTS tables on the PROD# column to
get a table of parts with their suppliers and the products that use the parts.
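One way to write the join, sketched here as the implicit form that uses a comma in
the FROM clause and a WHERE clause:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS, PRODUCTS
  WHERE PARTS.PROD# = PRODUCTS.PROD#;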
or
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
Either of these statements gives the same result: the result table contains a row
wherever a row of the PARTS table and a row of the PRODUCTS table have equal values
in the PROD# column.
You can specify more complicated join conditions to obtain different sets of results.
For example, to eliminate the suppliers that begin with the letter A from the table of
parts, suppliers, product numbers and products, write a query like this:
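The original query is not reproduced in this excerpt; a sketch that produces the
result shown below adds a predicate on the SUPPLIER column:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS INNER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#
  WHERE SUPPLIER NOT LIKE 'A%';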
The result of the query is all rows that do not have a supplier that begins with A:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
MAGNETS BATEMAN 10 GENERATOR
PLASTIC PLASTIK_CORP 30 RELAY
Example of joining a table to itself using an inner join: The following example
joins table DSN8710.PROJ to itself and returns the number and name of each
“major” project followed by the number and name of the project that is part of it. In
this example, A indicates the first instance of table DSN8710.PROJ and B indicates
a second instance of this table. The join condition is such that the value in column
PROJNO in table DSN8710.PROJ A must be equal to a value in column MAJPROJ
in table DSN8710.PROJ B.
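The statement itself is not reproduced in this excerpt; a sketch that is consistent
with that description:
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME
  FROM DSN8710.PROJ A, DSN8710.PROJ B
  WHERE A.PROJNO = B.MAJPROJ;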
In this example, the comma in the FROM clause implicitly specifies an inner join,
and acts the same as if the INNER JOIN keywords had been used. When you use
the comma for an inner join, you must specify the join condition on the WHERE
clause. When you use the INNER JOIN keywords, you must specify the join
condition on the ON clause.
The join condition for a full outer join must be a simple search condition that
compares two columns or cast functions that contain columns.
For example, the following query performs a full outer join of the PARTS and
PRODUCTS tables:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
You probably noticed that the result of the example for “Full outer join” on page 35
is null for SCREWDRIVER, even though the PRODUCTS table contains a product
number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is
null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result
contains two columns, and both columns contain some null values. You can merge
data from both columns into a single column, eliminating the null values, by using
the COALESCE function.
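A sketch of such a query (the original statement is not reproduced in this excerpt):
SELECT PART, SUPPLIER,
       COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
  FROM PARTS FULL OUTER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#;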
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE
function.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
For example, to include rows from the PARTS table that have no matching values in
the PRODUCTS table and include only prices greater than 10.00, execute this
query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT, PRICE
FROM PARTS LEFT OUTER JOIN PRODUCTS
ON PARTS.PROD#=PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
# Because the PARTS table can have nonmatching rows, and the PRICE column is
# not in the PARTS table, rows in which PRICE is less than or equal to 10.00 are not
# included in the result of the join.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
For example, to include rows from the PRODUCTS table that have no matching
values in the PARTS table and include prices greater than 10.00, execute this
query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT, PRICE
FROM PARTS RIGHT OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
# Because the PRODUCTS table can have nonmatching rows, and the PRICE
# column is in the PRODUCTS table, rows in which PRICE is less than or equal to
# 10.00 are also included in the result of the join. The predicate
# PRODUCTS.PRICE>10.00 does not eliminate any rows. When PRODUCTS.PRICE is
# less than or equal to 10.00, the PARTS columns in the result table contain null
# values.
A join operation is part of a FROM clause; therefore, for the purpose of predicting
which rows will be returned from a SELECT statement containing a join operation,
assume that the join operation is performed first.
which is not the desired result. DB2 performs the join operation first, then applies
the WHERE clause. The WHERE clause excludes rows where PROD# has a null
value, so the result is the same as if you had specified an inner join.
In this case, DB2 applies the WHERE clause to each table separately, so that no
rows are eliminated because PROD# is null. DB2 then performs the full outer join
operation, and the desired table is obtained:
PART SUPPLIER PRODNUM PRODUCT
======= ============ ======= ===========
OIL WESTERN_CHEM 160 -----------
BLADES ACE_STEEL 205 SAW
PLASTIC PLASTIK_CORP 30 RELAY
------- ------------ 505 SCREWDRIVER
You can join the results of a user-defined table function with a table, just as you can
join two tables. For example, suppose CVTPRICE is a table function that converts
the prices in the PRODUCTS table to the currency you specify and returns the
PRODUCTS table with the prices in those units. You can obtain a table of parts,
suppliers, and product prices with the prices in your choice of currency by executing
a query like this:
SELECT PART, SUPPLIER, PARTS.PROD#, Z.PRODUCT, Z.PRICE
FROM PARTS, TABLE(CVTPRICE(:CURRENCY)) AS Z
WHERE PARTS.PROD# = Z.PROD#;
The correlated reference D.DEPTNO is valid because the nested table expression
within which it appears is preceded by TABLE and the table specification D appears
to the left of the nested table expression in the FROM clause. If you remove the
keyword TABLE, D.DEPTNO is invalid.
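For illustration, a sketch of such a statement (the original example is not shown in
this excerpt; the column names follow the sample tables):
SELECT D.DEPTNO, D.DEPTNAME, EMPINFO.AVGSAL
  FROM DSN8710.DEPT D,
       TABLE(SELECT AVG(E.SALARY) AS AVGSAL
               FROM DSN8710.EMP E
               WHERE E.WORKDEPT = D.DEPTNO) AS EMPINFO;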
Conceptual overview
Suppose you want a list of the employee numbers, names, and commissions of all
employees working on a particular project, say project number MA2111. The first
part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8710.EMP
WHERE EMPNO
.
.
.
But you cannot go further because the DSN8710.EMP table does not include
project number data. You do not know which employees are working on project
MA2111 without issuing another SELECT statement against the
DSN8710.EMPPROJACT table.
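One way to get the answer is with a subquery; a sketch of such a statement (the
original is not reproduced in this excerpt):
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8710.EMP
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM DSN8710.EMPPROJACT
       WHERE PROJNO = 'MA2111');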
To better understand what results from this SQL statement, imagine that DB2 goes
through the following process:
1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
FROM DSN8710.EMPPROJACT
WHERE PROJNO = 'MA2111');
000200
000200
000220
2. The interim result table then serves as a list in the search condition of the outer
SELECT. Effectively, DB2 executes this statement:
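Sketched here with the EMPNO values listed in step 1:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8710.EMP
  WHERE EMPNO IN ('000200', '000200', '000220');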
This kind of subquery is uncorrelated. In the previous query, for example, the
content of the subquery is the same for every row of the table DSN8710.EMP.
Subqueries that vary in content from row to row or group to group are correlated
subqueries. For information on correlated subqueries, see “Using correlated
subqueries” on page 47. All of the information preceding that section applies to both
correlated and uncorrelated subqueries.
Subqueries can also appear in the predicates of other subqueries. Such subqueries
are nested subqueries at some level of nesting. For example, a subquery within a
subquery within an outer SELECT has a level of nesting of 2. DB2 allows nesting
down to a level of 15, but few queries require a nesting level greater than 1.
The relationship of a subquery to its outer SELECT is the same as the relationship
of a nested subquery to a subquery, and the same rules apply, except where
otherwise noted.
| Except for a subquery of a basic predicate, the result table can contain more than
| one row.
Basic predicate
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. DB2 compares that value with the
value to the left of the comparison operator.
For example, the following SQL statement returns the employee numbers, names,
and salaries for employees whose education level is higher than the average
company-wide education level.
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8710.EMP
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP);
Use ALL to indicate that the operands on the left side of the comparison must
compare in the same way with all the values the subquery returns. For example,
suppose you use the greater-than comparison operator with ALL:
WHERE column > ALL (subquery)
To satisfy this WHERE clause, the column value must be greater than all the values
that the subquery returns. A subquery that returns an empty result table satisfies the
predicate.
| Now suppose that you use the <> operator with ALL in a WHERE clause like this:
| WHERE column1, column2, ... columnn <> ALL (subquery)
| To satisfy this WHERE clause, each column value must be unequal to all the values
| in the corresponding column of the result table that the subquery returns. A
| subquery that returns an empty result table satisfies the predicate.
Use ANY or SOME to indicate that the values on the left side of the operator must
compare in the indicated way to at least one of the values that the subquery
returns. For example, suppose you use the greater-than comparison operator with
ANY:
WHERE expression > ANY (subquery)
To satisfy this WHERE clause, the value in the expression must be greater than at
least one of the values (that is, greater than the lowest value) that the subquery
returns. A subquery that returns an empty result table does not satisfy the predicate.
| Now suppose that you use the = operator with SOME in a WHERE clause like this:
| WHERE column1, column2, ... columnn = SOME (subquery)
| To satisfy this WHERE clause, each column value must be equal to at least one of
| the values in the corresponding column of the result table that the subquery returns.
| A subquery that returns an empty result table does not satisfy the predicate.
If a subquery that returns one or more null values gives you unexpected results,
see the description of quantified predicates in Chapter 2 of DB2 SQL Reference.
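The EXISTS example that the next paragraph discusses is not reproduced in this
excerpt; a sketch that is consistent with the discussion (PRSTDATE is the estimated
start date column of the sample project table):
SELECT EMPNO, LASTNAME
  FROM DSN8710.EMP
  WHERE EXISTS
    (SELECT *
       FROM DSN8710.PROJ
       WHERE PRSTDATE > '1986-01-01');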
In the example, the search condition is true if any project represented in the
DSN8710.PROJ table has an estimated start date which is later than 1 January
1986. This example does not show the full power of EXISTS, because the result is
always the same for every row examined for the outer SELECT. As a consequence,
either every row appears in the results, or none appear. A correlated subquery is
more powerful, because the subquery would change from row to row.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use
the EXISTS keyword with the NOT keyword in order to select rows when the data
or condition you specify does not exist; that is, you can code
WHERE NOT EXISTS (SELECT ...);
In the subquery, you tell DB2 to compute the average education level for the
department number in the current row. A query that does this follows:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8710.EMP X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP
WHERE WORKDEPT = X.WORKDEPT);
Consider what happens when the subquery executes for a given row of
DSN8710.EMP. Before it executes, X.WORKDEPT receives the value of the
WORKDEPT column for that row. Suppose, for example, that the row is for
CHRISTINE HAAS. Her work department is A00, which is the value of WORKDEPT
for that row. The subquery executed for that row is therefore:
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP
WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine’s department. The
outer subselect then compares this to Christine’s own education level. For some
other row for which WORKDEPT has a different value, that value appears in the
subquery in place of A00. For example, in the row for MICHAEL L THOMPSON, this
value is B01, and the subquery for his row delivers the average education level for
department B01.
The result table produced by the query has the following values:
When you use a correlation name in a subquery, you can define that correlation name
in the outer-level SELECT or in any of the subqueries that contain the reference. Suppose,
for example, that a query contains subqueries A, B, and C, and that A contains B
and B contains C. Then C could use a correlation name defined in B, A, or the
outer SELECT.
You can define a correlation name for each table name appearing in a FROM
clause. Append the correlation name after its table name. Leave one or more
blanks between a table name and its correlation name. You can include the word
AS between the table name and the correlation name to increase the readability of the statement.
The following example demonstrates the use of a correlation name in the select list
of a subquery:
UPDATE BP1TBL T1
SET (KEY1, CHAR1, VCHAR1) =
(SELECT VALUE(T2.KEY1,T1.KEY1), VALUE(T2.CHAR1,T1.CHAR1), VALUE(T2.VCHAR1,T1.VCHAR1)
FROM BP2TBL T2
WHERE (T2.KEY1 = T1.KEY1))
WHERE KEY1 IN
(SELECT KEY1
FROM BP2TBL T3
WHERE KEY2 > 0);
To process this statement, DB2 determines for each project (represented by a row
in the DSN8710.PROJ table) whether or not the combined staffing for that project is
less than 0.5. If it is, DB2 deletes that row from the DSN8710.PROJ table.
To continue this example, suppose DB2 deletes a row in the DSN8710.PROJ table.
You must also delete rows related to the deleted project in the DSN8710.PROJACT
table. To do this, use:
DELETE FROM DSN8710.PROJACT X
WHERE NOT EXISTS
(SELECT *
FROM DSN8710.PROJ
WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8710.PROJACT table, whether a row with
the same project number exists in the DSN8710.PROJ table. If not, DB2 deletes the
row in DSN8710.PROJACT.
| This example uses copies of the employee and department table that do not have
| referential constraints.
DB2 restricts delete operations for dependent tables that are involved in referential
constraints. If a DELETE statement has a subquery that references a table involved
in the deletion, the last delete rule in the path to that table must be RESTRICT or
NO ACTION. For example, without referential constraints, the following statement
deletes departments from the department table whose managers are not listed
correctly in the employee table:
DELETE FROM DSN8710.DEPT THIS
WHERE NOT DEPTNO =
(SELECT WORKDEPT
FROM DSN8710.EMP
WHERE EMPNO = THIS.MGRNO);
With the referential constraints defined for the sample tables, the statement causes
an error. The deletion involves the table referred to in the subquery (DSN8710.EMP
is a dependent table of DSN8710.DEPT) and the last delete rule in the path to EMP
is SET NULL, not RESTRICT or NO ACTION. If the statement could execute, its
results would again depend on the order in which DB2 accesses the rows.
To use SPUFI, select SPUFI from the DB2I Primary Option Menu as shown in
Figure 3.
From then on, when the SPUFI panel displays, the data entry fields on the panel
contain the values that you previously entered. You can specify data set names and
processing options each time the SPUFI panel displays, as needed. Values you do
not change remain in effect.
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... ===> RESULT
If you use this panel a second time, the name of the data set you
previously used displays in the field DATA SET NAME. To create a new
member of an existing partitioned data set, change only the member name.
4 OUTPUT DATA SET NAME
Enter the name of a data set to receive the output of the SQL statement.
You do not need to allocate the data set before you do this.
If the data set exists, the new output replaces its content. If the data set
does not exist, DB2 allocates a data set on the device type specified on the
CURRENT SPUFI DEFAULTS panel and then catalogs the new data set.
The device must be a direct-access storage device, and you must be
authorized to allocate space on that device.
Attributes required for the output data set are:
Specify values for the following options on the CURRENT SPUFI DEFAULTS panel.
All fields must contain a value.
1 SQL TERMINATOR
Allows you to specify the character that you use to end each SQL
statement. You can specify any character except one of those listed in
Table 3. A semicolon is the default.
Table 3. Invalid special characters for the SQL terminator
Name                Character   Hexadecimal Representation
blank                           X'40'
comma               ,           X'6B'
double quote        "           X'7F'
left parenthesis    (           X'4D'
right parenthesis   )           X'5D'
single quote        '           X'7D'
underscore          _           X'6D'
Be careful to choose a character for the SQL terminator that is not used
within the statement.
You can also set or change the SQL terminator within a SPUFI input data
set using the --#SET TERMINATOR statement. See “Entering SQL statements”
on page 56 for details.
2 ISOLATION LEVEL
Allows you to specify the isolation level for your SQL statements. See “The
ISOLATION option” on page 343 for more information.
3 MAX SELECT LINES
The maximum number of output lines that a SELECT statement can return.
To limit the number of rows retrieved, enter another maximum number
greater than 1.
4 RECORD LENGTH
The record length must be at least 80 bytes. The maximum record length
depends on the device type you use. The default value allows a 4092-byte
record.
Each record can hold a single line of output. If a line is longer than a record,
SPUFI truncates the line and discards the fields that extend beyond the record
length.
5 BLOCKSIZE
Follow the normal rules for selecting the block size. For record format F, the
block size is equal to record length. For FB and FBA, choose a block size
that is an even multiple of LRECL. For VB and VBA only, the block size
must be 4 bytes larger than the block size for FB or FBA.
6 RECORD FORMAT
Specify F, FB, FBA, V, VB, or VBA. FBA and VBA formats insert a printer
control character after the number of lines specified in the LINES/PAGE OF
LISTING field on the DB2I Defaults panel. The record format default is VB
(variable-length blocked).
7 DEVICE TYPE
Allows you to specify a standard MVS name for direct-access storage
device types. The default is SYSDA. SYSDA specifies that MVS is to select
an appropriate direct access storage device.
8 MAX NUMERIC FIELD
The maximum width of a numeric value column in your output. Choose a
value greater than 0. The IBM-supplied default is 20. For more information,
see “Format of SELECT statement results” on page 58.
9 MAX CHAR FIELD
The maximum width of a character value column in your output. DATETIME
and GRAPHIC data strings are externally represented as characters, and so they
also fall under the control of this field.
Column names are the column identifiers that you can use in SQL
statements. If an SQL statement has an AS clause for a column, SPUFI
displays the contents of the AS clause in the heading, rather than the
column name. You define column labels with LABEL ON statements.
When you have entered your SPUFI options, press the ENTER key to continue.
SPUFI then processes the next processing option for which you specified YES. If all
other processing options are NO, SPUFI displays the SPUFI panel.
If you press the END key, you return to the SPUFI panel, but you lose all the
changes you made on the SPUFI Defaults panel. If you press ENTER, SPUFI
saves your changes.
On the panel, use the ISPF EDIT program to enter SQL statements that you want
to execute, as shown in Figure 6 on page 57.
Move the cursor to the first input line and enter the first part of an SQL statement.
You can enter the rest of the SQL statement on subsequent lines, as shown in
Figure 6 on page 57. Indenting your lines and entering your statements on several
lines make your statements easier to read and do not change how DB2 processes
your statements.
You can put more than one SQL statement in the input data set. You can put an
SQL statement on one line of the input data set or on more than one line. DB2
executes the statements in the order you placed them in the data set. Do not put
more than one SQL statement on a single line. The first one executes, but DB2
ignores the other SQL statements on the same line.
In your SPUFI input data set, end each SQL statement with the statement
terminator that you specified in the CURRENT SPUFI DEFAULTS panel.
When you have entered your SQL statements, press the END PF key to save the
file and to execute the SQL statements.
Pressing the END PF key saves the data set. You can save the data set and
continue editing it by entering the SAVE command. In fact, it is a good practice to
save the data set after every 10 minutes or so of editing.
Figure 6 shows what the panel looks like if you enter the sample SQL statement,
followed by a SAVE command.
You can bypass the editing step by resetting the EDIT INPUT processing option:
EDIT INPUT ... ===> NO
You can put comments about SQL statements either on separate lines or on the
same line. In either case, use two hyphens (--) to begin a comment. Specify any
text other than #SET TERMINATOR after the comment. DB2 ignores everything to
the right of the two hyphens.
Use the text --#SET TERMINATOR character in a SPUFI input data set as an
instruction to SPUFI to interpret character as a statement terminator. You can
specify any single-byte character except one of the characters that are listed in
Table 3 on page 54. The terminator that you specify overrides a terminator that you
specified in option 1 of the CURRENT SPUFI DEFAULTS panel or in a previous
--#SET TERMINATOR statement.
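For example, the following sketch of a SPUFI input data set changes the terminator
to an ampersand (an arbitrary character that is not listed in Table 3) and later
changes it back to a semicolon:
--#SET TERMINATOR &
SELECT * FROM DSN8710.DEPT&
--#SET TERMINATOR ;
SELECT * FROM DSN8710.EMP;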
You can bypass the DB2 processing step by resetting the EXECUTE processing
option:
EXECUTE ..... ===> NO
Your SQL statement might take a long time to execute, depending on how large a
table DB2 has to search, or on how many rows DB2 has to process. To interrupt
DB2’s processing, press the PA1 key and respond to the prompting message that
asks you if you really want to stop processing. This cancels the executing SQL
statement and returns you to the ISPF-PDF menu.
What happens to the output data set? This depends on how much of the input data
set DB2 was able to process before you interrupted its processing. DB2 might not
have opened the output data set yet, or the output data set might contain all or part
of the results data produced so far.
At the end of the data set are summary statistics that describe the processing of the
input data set as a whole.
For all other types of SQL statements executed with SPUFI, the message
“SQLCODE IS 0” indicates an error-free result.
You can change the amount of data displayed for numeric and character columns
by changing values on the CURRENT SPUFI DEFAULTS panel, as described in
“Changing SPUFI defaults (optional)” on page 54.
v A null value displays as a series of hyphens (-).
v A ROWID or BLOB column value displays in hexadecimal.
v A CLOB column value displays in the same way as a VARCHAR column value.
v A DBCLOB column value displays in the same way as a VARGRAPHIC column
value.
v A heading identifies each selected column, and repeats at the top of each output
page. The contents of the heading depend on the value you specified in field
COLUMN HEADING of the CURRENT SPUFI DEFAULTS panel.
Other messages that you could receive from the processing of SQL statements
include:
v The number of rows that DB2 processed, that either:
– Your SELECT statement retrieved
– Your UPDATE statement modified
– Your INSERT statement added to a table
– Your DELETE statement deleted from a table
v Which columns display truncated data because the data was too wide
ODBC lets you access data through ODBC function calls in your application. You
execute SQL statements by passing them to DB2 through an ODBC function call.
ODBC eliminates the need for precompiling and binding your application and
increases application portability by using the standard ODBC interface.
If you are writing your applications in Java, you can use JDBC application
support to access DB2. JDBC is similar to ODBC but is designed specifically for
use with Java and is therefore a better choice than ODBC for making DB2 calls
from Java applications.
For more information on using JDBC, see DB2 Application Programming Guide and
Reference for Java.
v Delimit SQL statements, as described in “Delimiting an SQL statement” on
page 66.
v Declare the tables you use, as described in “Declaring table and view definitions”
on page 67. (This is optional.)
v Declare the data items used to pass data between DB2 and a host language, as
described in “Accessing data using host variables and host structures” on
page 67.
v Code SQL statements to access DB2 data. See “Accessing data using host
variables and host structures” on page 67.
For information about using the SQL language, see “Part 1. Using SQL queries”
on page 1 and in DB2 SQL Reference. Details about how to use SQL
statements within an application program are described in “Chapter 9.
Embedding SQL statements in host languages” on page 107.
v Declare a communications area (SQLCA), or handle exceptional conditions that
DB2 indicates with return codes, in the SQLCA. See “Checking the execution of
SQL statements” on page 74 for more information.
In addition to these basic requirements, you should also consider several special
topics:
v “Chapter 7. Using a cursor to retrieve a set of rows” on page 81 discusses how to
use a cursor in your application program to select a set of rows and then process
the set one row at a time.
This section includes information about using SQL in application programs written in
assembler, C, COBOL, FORTRAN, PL/I, and REXX. You can also use SQL in
application programs written in Ada, APL2®, BASIC, and Prolog. See the following
publications for more information about these languages:
Ada IBM Ada/370 SQL Module Processor for DB2 Database Manager
User's Guide
APL2 APL2 Programming: Using Structured Query Language (SQL)
BASIC IBM BASIC/MVS Language Reference
Prolog/MVS & VM
IBM SAA AD/Cycle® Prolog/MVS & VM Programmer's Guide
Some of the examples vary from these conventions. Exceptions are noted where
they occur.
For example, use EXEC SQL and END-EXEC to delimit an SQL statement in a
COBOL program:
EXEC SQL
an SQL statement
END-EXEC.
You do not have to declare tables or views, but there are advantages if you do. One
advantage is documentation. For example, the DECLARE statement specifies the
structure of the table or view you are working with, and the data type of each
column. You can refer to the DECLARE statement for the column names and data
types in the table or view. Another advantage is that the DB2 precompiler uses your
declarations to make sure you have used correct column names and data types in
your SQL statements. The DB2 precompiler issues a warning message when the
column names and data types do not correspond to the SQL DECLARE statements
in your program.
For example, the DECLARE TABLE statement for the DSN8710.DEPT table looks
like this:
EXEC SQL
DECLARE DSN8710.DEPT TABLE
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) )
END-EXEC.
When you declare a table or view that contains a column with a distinct type, it is
best to declare that column with the source type of the distinct type, rather than the
distinct type itself. When you declare the column with the source type, DB2 can
check embedded SQL statements that reference that column at precompile time.
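For example, suppose that a column TOTAL_SALES in a hypothetical table
MYSCHEMA.SALES is defined with a distinct type that is based on DECIMAL(9,2). A
sketch of a DECLARE TABLE statement that uses the source type for that column
looks like this (the table and column names are illustrative only):
EXEC SQL
DECLARE MYSCHEMA.SALES TABLE
(INVOICE CHAR(10) NOT NULL,
TOTAL_SALES DECIMAL(9,2))
END-EXEC.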
A host structure is a group of host variables that an SQL statement can refer to
using a single name. You can use host structures in all languages except REXX.
Use host language statements to define the host structures.
To optimize performance, make sure the host language declaration maps as closely
as possible to the data type of the associated data in the database; see “Chapter 9.
Embedding SQL statements in host languages” on page 107. For more performance
suggestions, see “Part 6. Additional programming techniques” on page 489.
You can use a host variable to represent a data value, but you cannot use it to
represent a table, view, or column name. (You can specify table, view, or column
names at run time using dynamic SQL. See “Chapter 23. Coding dynamic SQL in
application programs” on page 497 for more information.)
Host variables follow the naming conventions of the host language. A colon (:) must
precede host variables used in SQL to tell DB2 that the variable is not a column
name. A colon must not precede host variables outside of SQL statements.
For more information about declaring host variables, see the appropriate language
section:
v Assembler: “Using host variables” on page 111
v C: “Using host variables” on page 124
v COBOL: “Using host variables” on page 146
v FORTRAN: “Using host variables” on page 167
v PL/I: “Using host variables” on page 177.
v REXX: “Using REXX host variables and data types” on page 194.
Retrieving a single row of data: The INTO clause of the SELECT statement
names one or more host variables to contain the column values returned. The
named variables correspond one-to-one with the list of column names in the
SELECT list.
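For example, a SELECT INTO statement that retrieves one employee row into the
host variables that the next paragraph describes might look like this sketch; the
:EMPID host variable that supplies the employee number is assumed for illustration:
EXEC SQL
SELECT EMPNO, LASTNAME, WORKDEPT
INTO :CBLEMPNO, :CBLNAME, :CBLDEPT
FROM DSN8710.EMP
WHERE EMPNO = :EMPID
END-EXEC.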
In the DATA DIVISION of the program, you must declare the host variables
CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in the
columns EMPNO, LASTNAME, and WORKDEPT of the DSN8710.EMP table.
If the SELECT statement returns more than one row, this is an error, and any data
returned is undefined and unpredictable.
Retrieving Multiple Rows of Data: If you do not know how many rows DB2 will
return, or if you expect more than one row to return, then you must use an
alternative to the SELECT ... INTO statement.
The DB2 cursor enables an application to process a set of rows and retrieve one
row at a time from the result table. For information on using cursors, see
“Chapter 7. Using a cursor to retrieve a set of rows” on page 81.
Specifying a list of items in a select clause: When you specify a list of items in
the SELECT clause, you can use more than the column names of tables and views.
You can request a set of column values mixed with host variable values and
constants. For example:
MOVE 4476 TO RAISE.
MOVE '000220' TO PERSON.
EXEC SQL
SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
FROM DSN8710.EMP
WHERE EMPNO = :PERSON
END-EXEC.
The results shown below have column headings that represent the names of the
host variables:
EMP-NUM PERSON-NAME EMP-SAL EMP-RAISE EMP-TTL
======= =========== ======= ========= =======
000220 LUTZ 29840 4476 34316
Retrieving data into host variables: If the value for the column you retrieve is
null, DB2 puts a negative value in the indicator variable. If it is null because of a
numeric or character conversion error, or an arithmetic expression error, DB2 sets
the indicator variable to -2. See “Handling arithmetic or conversion errors” on
page 75 for more information.
If you do not use an indicator variable and DB2 retrieves a null value, an error
results.
When DB2 retrieves the value of a column, you can test the indicator variable. If the
indicator variable’s value is less than zero, the column value is null. When the
column value is null, the value of the host variable does not change from its
previous value.
You can also use an indicator variable to verify that a retrieved character string
value is not truncated. If the indicator variable contains a positive integer, the
integer is the original length of the string.
You can specify an indicator variable, preceded by a colon, immediately after the
host variable. Optionally, you can use the word INDICATOR between the host
variable and its indicator variable. Thus, the following two examples are equivalent:
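The pair of statements might look like these sketches, which retrieve the PHONENO
column that the next paragraph discusses; the :EMPID host variable is assumed for
illustration:
EXEC SQL
SELECT PHONENO
INTO :CBLPHONE:INDNULL
FROM DSN8710.EMP
WHERE EMPNO = :EMPID
END-EXEC.
EXEC SQL
SELECT PHONENO
INTO :CBLPHONE INDICATOR :INDNULL
FROM DSN8710.EMP
WHERE EMPNO = :EMPID
END-EXEC.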
You can then test INDNULL for a negative value. If it is negative, the corresponding
value of PHONENO is null, and you can disregard the contents of CBLPHONE.
Inserting null values into columns using host variables: You can use an
indicator variable to insert a null value from a host variable into a column. When
DB2 processes INSERT and UPDATE statements, it checks the indicator variable (if
it exists). If the indicator variable is negative, the column value is null. If the
indicator variable is greater than -1, the associated host variable contains a value
for the column.
For example, suppose your program reads an employee ID and a new phone
number, and must update the employee table with the new number. The new
number could be missing if the old number is incorrect, but a new number is not yet
available. If it is possible that the new value for column PHONENO might be null,
you can code:
EXEC SQL
UPDATE DSN8710.EMP
SET PHONENO = :NEWPHONE:PHONEIND
WHERE EMPNO = :EMPID
END-EXEC.
When NEWPHONE contains other than a null value, set PHONEIND to zero by
preceding the statement with:
MOVE 0 TO PHONEIND.
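Conversely, to assign a null value to PHONENO regardless of the contents of
NEWPHONE, precede the statement with:
MOVE -1 TO PHONEIND.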
Use IS NULL to test for a null column value: You cannot determine whether a
column value is null by comparing it to a host variable with an indicator variable
that is set to -1. Two DB2 null values are not equal to each other. To test
whether a column has a null value, use the IS NULL comparison operator. For
example, the following code does not select the employees who have no
phone number:
MOVE -1 TO PHONE-IND.
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8710.EMP
WHERE PHONENO = :PHONE-HV:PHONE-IND
END-EXEC.
| When you use a DECLARE VARIABLE statement in a program, put the DECLARE
| VARIABLE statement after the corresponding host variable declaration and before
| you refer to that host variable.
| Because the application encoding scheme for the subsystem is EBCDIC, the
| retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE
| VARIABLE statements to specify that the data that is retrieved from these columns
| is encoded in the default Unicode CCSIDs for the subsystem. Suppose that you
| want to retrieve the character data in Unicode CCSID 1208 and the graphic data in
| Unicode CCSID 1200. Use DECLARE VARIABLE statements like these:
| EXEC SQL BEGIN DECLARE SECTION;
| char hvpartnum[11];
| EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
| wchar_t hvjpnname[11];
| EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
| struct {
| short len;
| char d[30];
| } hvengname;
| EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;
| EXEC SQL END DECLARE SECTION;
If you want to avoid listing host variables, you can substitute the name of a
structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME,
and :WORKDEPT. The example then reads:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
INTO :PEMP
FROM DSN8710.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
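A host structure such as :PEMP might be declared with COBOL statements like the
following sketch; the column lengths are assumptions based on the DB2 sample
employee table:
01 PEMP.
   10 EMPNO PIC X(6).
   10 FIRSTNME.
      49 FIRSTNME-LEN PIC S9(4) USAGE COMP.
      49 FIRSTNME-TEXT PIC X(12).
   10 MIDINIT PIC X(1).
   10 LASTNAME.
      49 LASTNAME-LEN PIC S9(4) USAGE COMP.
      49 LASTNAME-TEXT PIC X(15).
   10 WORKDEPT PIC X(3).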
You can declare a host structure yourself, or you can use DCLGEN to generate a
COBOL record description, PL/I structure declaration, or C structure declaration that
corresponds to the columns of a table. For more details about coding a host
structure in your program, see “Chapter 9. Embedding SQL statements in host
languages” on page 107. For more information on using DCLGEN and the
restrictions that apply to the C language, see “Chapter 8. Generating declarations
for your tables using DCLGEN” on page 95.
MOVE '000230' TO EMPNO.
.
.
.
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, BIRTHDATE
INTO :PEMP-ROW:EMP-IND
FROM DSN8710.EMP
WHERE EMPNO = :EMPNO
END-EXEC.
In this example, EMP-IND is an array containing six values, which you can test for
negative values. If, for example, EMP-IND(6) contains a negative value, the
corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null
value.
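The indicator array in this example might be declared with a COBOL statement like
the following sketch:
01 EMP-IND-TABLE.
   02 EMP-IND PIC S9(4) USAGE COMP OCCURS 6 TIMES.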
The meaning of SQLCODEs other than 0 and 100 varies with the particular product
implementing SQL.
An advantage to using the SQLCODE field is that it can provide more specific
information than the SQLSTATE. Many of the SQLCODEs have associated tokens
in the SQLCA that indicate, for example, which object incurred an SQL error.
To conform to the SQL standard, you can declare SQLCODE and SQLSTATE
(SQLCOD and SQLSTA in FORTRAN) as stand-alone host variables. If you specify
the STDSQL(YES) precompiler option, these host variables receive the return
codes, and you should not include an SQLCA in your program.
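In a COBOL program, for example, the stand-alone declarations might look like this
sketch:
01 SQLCODE PIC S9(9) USAGE COMP.
01 SQLSTATE PIC X(5).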
The WHENEVER statement is not supported for REXX. For information on REXX
error handling, see “Embedding SQL statements in a REXX procedure” on
page 192.
The WHENEVER statement must precede the first SQL statement it is to affect.
However, if your program checks SQLCODE directly, it must check SQLCODE after
the SQL statement executes.
You can find the programming language specific syntax and details for calling
DSNTIAR on the following pages:
For assembler programs, see page 119
For C programs, see page 139
For COBOL programs, see page 161
For FORTRAN programs, see page 173
For PL/I programs, see page 188
DSNTIAR takes data from the SQLCA, formats it into a message, and places the
result in a message output area that you provide in your application program. Each
time you use DSNTIAR, it overwrites any previous messages in the message output
area. You should move or print the messages before using DSNTIAR again, and
before the contents of the SQLCA change, to get an accurate view of the SQLCA.
You must define the message output area in VARCHAR format. In this varying
character format, a two-byte length field precedes the data. The length field tells
DSNTIAR how many total bytes are in the output message area; its minimum value
is 240.
Figure 8 on page 77 shows the format of the message output area, where length is
the two-byte total length field, and the length of each line matches the logical record
length (lrecl) you specify to DSNTIAR.
When you call DSNTIAR, you must name an SQLCA and an output message area
in its parameters. You must also provide the logical record length (lrecl) as a value
between 72 and 240 bytes. DSNTIAR assumes the message area contains
fixed-length records of length lrecl.
When loading DSNTIAR from another program, be careful how you branch to
DSNTIAR. For example, if the calling program is in 24-bit addressing mode and
DSNTIAR is loaded above the 16-megabyte line, you cannot use the assembler
BALR instruction or CALL macro to call DSNTIAR, because they assume that
DSNTIAR is addressable in 24-bit mode.
You can dynamically link (load) and call DSNTIAR directly from a language that
does not handle 31-bit addressing (OS/VS COBOL, for example). To do this, link a
second version of DSNTIAR with the attributes AMODE(24) and RMODE(24) into
another load module library. Or, you can write an intermediate assembler language
program that calls DSNTIAR in 31-bit mode; then call that intermediate program in
24-bit mode from your application.
For more information on the allowed and default AMODE and RMODE settings for a
particular language, see the application programming guide for that language. For
details on how the attributes AMODE and RMODE of an application are determined,
see the linkage editor and loader user’s guide for the language in which you have
written the application.
In your error routine, you write a section that checks for SQLCODE -911 or -913.
You can receive either of these SQLCODEs when there is a deadlock or timeout.
When one of these errors occurs, the error routine closes your cursors by issuing
the statement:
EXEC SQL CLOSE cursor-name
An SQLCODE of 0 or -501 from that statement indicates that the close was
successful.
You can use DSNTIAR in the error routine to generate the complete message text
associated with the negative SQLCODEs.
1. Choose a logical record length (lrecl) of the output lines. For this example,
assume lrecl is 72, to fit on a terminal screen, and is stored in the variable
named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an area
for up to 10 lines of length 72, you should define an area of 720 bytes, plus a
2-byte area that specifies the length of the message output area.
01 ERROR-MESSAGE.
02 ERROR-LEN PIC S9(4) COMP VALUE +720.
02 ERROR-TEXT PIC X(72) OCCURS 10 TIMES
INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
To display the contents of the SQLCA when SQLCODE is other than 0 or -501, you
should first format the message by calling DSNTIAR after the SQL statement that
produces that SQLCODE:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable.
Your message might look like the following:
Your program can have several cursors, each of which performs the previous steps.
The following example shows a simple form of the DECLARE CURSOR statement:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP
END-EXEC.
More complicated cursors might include WHERE clauses or joins of several tables.
For example, suppose that you want to use a cursor to list employees who work on
a certain project. Declare a cursor like this to identify those employees:
EXEC SQL
DECLARE C2 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8710.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ);
| Updating a column: You can update columns in the rows that you retrieve.
| Updating a row after you use a cursor to retrieve it is called a positioned update. If
| you intend to perform any positioned updates on the identified table, include the
| FOR UPDATE clause. The FOR UPDATE clause has two forms. The first form is
| FOR UPDATE OF column-list. Use this form when you know in advance which
| columns you need to update. The second form of the FOR UPDATE clause is FOR
| UPDATE, with no column list. Use this form when you might use the cursor to
| update any of the columns of the table.
For example, you can use this cursor to update only the SALARY column of the
employee table:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8710.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define the
cursor like this:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8710.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE;
| DB2 must do more processing when you use the FOR UPDATE clause without a
| column list than when you use the FOR UPDATE OF clause with a column list.
| Therefore, if you intend to update only a few columns of a table, your program can
| run more efficiently if you include a column list.
The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE
clause in static SQL statements. For information on these options, see Table 48 on
| page 403. If you do not specify the FOR UPDATE clause in a DECLARE CURSOR
| statement, and you do not specify the STDSQL(YES) option or the NOFOR
| precompiler options, you receive an error if you execute a positioned UPDATE
| statement.
You can update a column of the identified table even though it is not part of the
result table. In this case, you do not need to name the column in the SELECT
statement. When the cursor retrieves a row (using FETCH) that contains a column
value you want to update, you can use UPDATE ... WHERE CURRENT OF to
identify the row that is to be updated.
Two factors that influence the amount of time that DB2 requires to process the
OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows from the result
table
v Whether DB2 uses parallelism to process the SELECT statement associated with
the cursor
For more information, see “The effect of sorts on OPEN CURSOR” on page 710.
Your program must anticipate and handle an end-of-data whenever you use a
cursor to fetch a row. For further information about the WHENEVER NOT FOUND
statement, see “Checking the execution of SQL statements” on page 74.
The SELECT statement within the DECLARE CURSOR statement identifies the result
table from which you fetch rows, but DB2 does not retrieve any data until your
application program executes a FETCH statement.
When your program executes the FETCH statement, DB2 uses the cursor to point
to a row in the result table. That row is called the current row. DB2 then copies the
current row contents into the program host variables that you specified on the INTO
clause of FETCH. This sequence repeats each time you issue FETCH, until you
have processed all rows in the result table.
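For example, a FETCH statement for the cursor C1 that is declared earlier in this
chapter might look like this sketch; the host variable names are chosen only for
illustration:
EXEC SQL
FETCH C1
INTO :HV-EMPNO, :HV-FIRSTNME, :HV-MIDINIT, :HV-LASTNAME, :HV-SALARY
END-EXEC.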
The row that DB2 points to when you execute a FETCH statement depends on
whether the cursor is declared as a scrollable or non-scrollable. See “Scrollable and
non-scrollable cursors” on page 85 for more information.
When you query a remote subsystem with FETCH, consider using block fetch for
better performance. For more information see “Use block fetch” on page 385. Block
fetch processes rows ahead of the current row. You cannot use a block fetch when
you perform a positioned update or delete operation.
A positioned UPDATE statement updates the row that the cursor points to.
A positioned DELETE statement deletes the row that cursor-name points to.
If you finish processing the rows of the result table, and you do not want to use the
cursor, you can let DB2 automatically close the cursor when your program
terminates.
Types of cursors
Cursors can be scrollable or not scrollable. They can also be held or not held. The
following sections discuss these characteristics in more detail.
| If you want to order the rows of the cursor's result set, and you also want the cursor
| to be updatable, you need to declare the cursor as scrollable, even if you use it
| only to retrieve rows sequentially. You can use the ORDER BY clause in the
| declaration of an updatable cursor only if you declare the cursor as scrollable.
|
| EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
| SELECT DEPTNO, DEPTNAME, MGRNO
| FROM DSN8710.DEPT
| ORDER BY DEPTNO
| END-EXEC.
|
| Figure 9. Declaration for an insensitive scrollable cursor
|
| Declaring a scrollable cursor with the INSENSITIVE keyword has the following
| effects:
| v The size, the order of the rows, and the values for each row of the result table do
| not change after you open the cursor.
| v The result table is read-only. Therefore, you cannot declare the cursor with the
| FOR UPDATE clause, and you cannot use the cursor for positioned update or
| delete operations.
|
| EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
| SELECT DEPTNO, DEPTNAME, MGRNO
| FROM DSN8710.DEPT
| ORDER BY DEPTNO
| END-EXEC.
|
| Figure 10. Declaration for a sensitive scrollable cursor
|
| Declaring a cursor as SENSITIVE has the following effects:
| v When you execute positioned UPDATE and DELETE statements with the cursor,
| those updates are visible in the result table.
| v When the current value of a row no longer satisfies the SELECT statement for
| the cursor, that row is no longer visible in the result table.
| v When a row of the result table is deleted from the underlying table, the row is no
| longer visible in the result table.
| v Changes that are made to the underlying table by other cursors or other
| application processes can be visible in the result table, depending on whether the
| FETCH statements that you use with the cursor are FETCH INSENSITIVE or
| FETCH SENSITIVE statements.
| If the OPEN statement executes with no errors or warnings, DB2 does not set
| SQLWARN0 when it sets SQLWARN1, SQLWARN4, or SQLWARN5. See Appendix
| C of DB2 SQL Reference for specific information on fields in the SQLCA.
| Retrieving rows with a scrollable cursor: When you open any cursor, the cursor
| is positioned before the first row of the result table. You move a scrollable cursor
| around in the result table by specifying a fetch orientation keyword in a FETCH
| statement. A fetch orientation keyword indicates the absolute or relative position of
| the cursor when the FETCH statement is executed. Table 4 lists the fetch
| orientation keywords that you can specify and their meanings.
| Table 4. Positions for a scrollable cursor
| Keyword in FETCH statement Cursor position when the FETCH is executed
| BEFORE Before the first row
| FIRST or ABSOLUTE +1 At the first row
| LAST or ABSOLUTE −1 At the last row
| AFTER After the last row
| ABSOLUTE1 To an absolute row number, from before the first
| row forward or from after the last row backward
| RELATIVE1 Forward or backward a relative number of rows
| CURRENT At the current row
| PRIOR or RELATIVE −1 To the previous row
| NEXT or RELATIVE +1 To the next row (default)
| Note:
| 1. ABSOLUTE and RELATIVE are described in greater detail in the discussion of FETCH in
| Chapter 5 of DB2 SQL Reference.
|
| For example, to use the cursor that is declared in Figure 9 on page 86 to fetch the
| fifth row of the result table, use a FETCH statement like this:
| EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
# To fetch the fifth row from the end of the result table, use this FETCH statement:
# EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
| Determining the number of rows in the result table for a scrollable cursor:
| You can determine how many rows are in the result table of an INSENSITIVE or
| SENSITIVE STATIC scrollable cursor. To do that, execute a FETCH statement, such
| as FETCH AFTER, that positions the cursor after the last row. Then examine the
| SQLCA. Fields SQLERRD(1) and SQLERRD(2) (fields sqlerrd[0] and sqlerrd[1] for
| C and C++) contain the number of rows in the result table.
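For example, with the cursor C1 that is declared in Figure 9, a sketch of this
technique might look like the following; the ROW-COUNT host variable is assumed for
illustration:
EXEC SQL
FETCH AFTER FROM C1
END-EXEC.
MOVE SQLERRD(1) TO ROW-COUNT.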
| Holes in the result table: Scrollable cursors that are declared as INSENSITIVE
| or SENSITIVE STATIC follow a static model, which means that DB2 determines the
| size of the result table and the order of the rows when you open the cursor.
| Updating or deleting rows from the underlying table after the cursor is open can
| result in holes in the result table. A hole in the result table occurs when a delete or
| update operation results in a difference between the result table and the underlying
| base table.
| Example: Creating a delete hole: Suppose that table A consists of one integer
| column, COL1, which has the following values:
|
|
| The positioned delete statement creates a delete hole, as shown in Figure 11.
|
|
|
| Figure 11. Creating a delete hole
|
| After you execute the positioned delete statement, the third row is deleted from the
| result table, but the result table does not shrink to fill the space that the deleted row
| creates.
| Example: Creating an update hole: Suppose that you declare the following cursor,
| which you use to update rows in A:
| EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR
| SELECT COL1
| FROM A
| WHERE COL1<6;
| The searched UPDATE statement creates an update hole, as shown in Figure 12.
|
|
|
| Figure 12. Creating an update hole
|
| If you try to fetch from a delete hole, DB2 issues an SQL warning. If you try to
| update or delete the delete hole, DB2 issues an SQL error. You can remove a
| delete hole only by opening the scrollable cursor, setting a savepoint, executing a
| positioned DELETE statement with the scrollable cursor, and rolling back to the
| savepoint.
| If you try to fetch from an update hole, DB2 issues an SQL warning. If you try to
| delete the update hole, DB2 issues an SQL error. However, you can convert an
| update hole back to a result table row by updating the row in the base table, as
| shown in Figure 13. You can update the base table with a searched UPDATE
| statement in the same application process, or a searched or positioned UPDATE
| statement in another application process. After you update the base table, if the row
| qualifies for the result table, the update hole disappears.
|
|
|
| Figure 13. Removing an update hole
|
| A hole becomes visible to a cursor when a cursor operation returns a non-zero
| SQLCODE. The point at which a hole becomes visible depends on the following
| factors:
| v Whether the scrollable cursor creates the hole
| v Whether the FETCH statement is FETCH SENSITIVE or FETCH INSENSITIVE
| If the scrollable cursor creates the hole, the hole is visible when you execute a
| FETCH statement for the row that contains the hole. The FETCH statement can be
| FETCH INSENSITIVE or FETCH SENSITIVE.
| If an update or delete operation outside the scrollable cursor creates the hole, the
| hole is visible at the following times:
| v If you execute a FETCH SENSITIVE statement for the row that contains the hole,
| the hole is visible when you execute the FETCH statement.
| v If you execute a FETCH INSENSITIVE statement, the hole is not visible when
| you execute the FETCH statement. DB2 returns the row as it was before the
| update or delete operation occurred. However, if you follow the FETCH
| INSENSITIVE statement with a positioned UPDATE or DELETE statement, the
| hole becomes visible.
| The page size of the TEMP table space must be large enough to hold the longest
| row in the declared temporary table. See Part 2 of DB2 Installation Guide for
| information on calculating the page size for TEMP table spaces that are used for
| scrollable cursors.
After a commit operation, a held cursor is positioned after the last row retrieved and
before the next logical row of the result table to be returned.
If the program abnormally terminates, the cursor position is lost. To prepare for
restart, your program must reposition the cursor.
The following restrictions apply to cursors that are declared WITH HOLD:
v Do not use DECLARE CURSOR WITH HOLD with the new user signon from a
DB2 attachment facility, because all open cursors are closed.
v Do not declare a WITH HOLD cursor in a thread that could become inactive. If
you do, its locks are held indefinitely.
CICS
You should always close cursors that you no longer need. If you let DB2 close
a CICS attachment cursor, the cursor might not close until the CICS
attachment facility reuses or terminates the thread.
The following cursor declaration causes the cursor to maintain its position in the
DSN8710.EMP table after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8710.EMP
WHERE WORKDEPT < 'D11'
ORDER BY EMPNO
END-EXEC.
**************************************************
* Close the cursor *
**************************************************
CLOSE-THISEMP.
EXEC SQL
CLOSE THISEMP
END-EXEC.
You must use DCLGEN before you precompile your program. Supply DCLGEN with
the table or view name before you precompile your program. To use the
declarations generated by DCLGEN in your program, use the SQL INCLUDE
statement.
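For example, if DCLGEN places the declarations for the employee table in a
partitioned data set member named EMPDECL (a hypothetical member name), a COBOL
program includes them with:
EXEC SQL INCLUDE EMPDECL END-EXEC.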
DB2 must be active before you can use DCLGEN. You can start DCLGEN in
several different ways:
v From ISPF through DB2I. Select the DCLGEN option on the DB2I Primary Option
Menu panel. Next, fill in the DCLGEN panel with the information it needs to build
the declarations. Then press ENTER.
v Directly from TSO. To do this, sign on to TSO, issue the TSO command DSN,
and then issue the subcommand DCLGEN.
v From a CLIST, running in TSO foreground or background, that issues DSN and
then DCLGEN.
v With JCL. Supply the required information, using JCL, and run DCLGEN in batch.
If you wish to start DCLGEN in the foreground, and your table names include
DBCS characters, you must input and display double-byte characters. If you do
not have a terminal that displays DBCS characters, you can enter DBCS
characters using the hex mode of ISPF edit.
If you do not specify a location, then this option defaults to the local location
name. This field applies to DB2 private protocol access only (that is, the
location you name must be another DB2 for OS/390 and z/OS).
4 DATA SET NAME
Is the name of the data set you allocated to contain the declarations that
DCLGEN produces. You must supply a name; there is no default.
The data set must already exist, be accessible to DCLGEN, and can be
either sequential or partitioned. If you do not enclose the data set name in
apostrophes, DCLGEN adds a standard TSO prefix (user ID) and suffix
(language). DCLGEN knows what the host language is from the DB2I
defaults panel.
For example, for library name LIBNAME(MEMBNAME), the name becomes:
userid.libname.language(membname)
If this data set is password protected, you must supply the password in the
DATA SET PASSWORD field.
5 DATA SET PASSWORD
Is the password for the data set in the DATA SET NAME field, if the data
set is password protected. It does not display on your terminal, and is not
recognized if you issued it from a previous session.
6 ACTION
Tells DCLGEN what to do with the output when it is sent to a partitioned
data set. (The option is ignored if the data set you specify in DATA SET
NAME field is sequential.)
ADD indicates that an old version of the output does not exist, and
creates a new member with the specified data set name. This is the
default.
REPLACE replaces an old version, if it already exists. If the member
does not exist, this option creates a new member.
7 COLUMN LABEL
Tells DCLGEN whether to include labels declared on any columns of the
table or view as comments in the data declarations. (The SQL statement
LABEL ON creates column labels to use as supplements to column names.)
Use:
YES to include column labels.
NO to ignore column labels. This is the default.
8 STRUCTURE NAME
Is the name of the generated data structure. The name can be up to 31
characters. If the name is not a DBCS string, and the first character is not
The default is NO, which does not generate an indicator variable array.
If you are using an SQL reserved word as an identifier, you must edit the DCLGEN
output in order to add the appropriate SQL delimiters.
DCLGEN produces output that is intended to meet the needs of most users, but
occasionally, you will need to edit the DCLGEN output to work in your specific case.
For example, DCLGEN is unable to determine whether a column defined as NOT
NULL also contains the DEFAULT clause, so you must edit the DCLGEN output to
add the DEFAULT clause to the appropriate column definitions.
For further details about the DCLGEN subcommand, see Chapter 2 of DB2
Command Reference.
Fill in the COBOL defaults panel as necessary. Press Enter to save the new
defaults, if any, and return to the DB2I Primary Option menu.
Figure 18. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on
the DB2I Defaults panel is COBOL, COB2, or IBMCOB.
Fill in the fields as shown in Figure 19 on page 103, and then press Enter.
Figure 19. DCLGEN panel—selecting source table and destination data set
If the operation succeeds, a message displays at the top of your screen as shown
in Figure 20.
DB2 then displays the screen as shown in Figure 21 on page 104. Press Enter to
return to the DB2I Primary Option menu.
For information on reading the syntax diagrams in this chapter, see “How to read
the syntax diagrams” on page xix.
For information on writing embedded SQL application programs in Java, see DB2
Application Programming Guide and Reference for Java.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
If your program is reentrant, you must include the SQLCA within a unique data area
acquired for your task (a DSECT). For example, at the beginning of your program,
specify:
PROGAREA DSECT
EXEC SQL INCLUDE SQLCA
As an alternative, you can create a separate storage area for the SQLCA and
provide addressability to that area.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor unless you use the precompiler option TWOPASS. See Chapter
5 of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
Each SQL statement in an assembler program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for assembler statements, except that you must specify
EXEC SQL within one line. Any part of the statement that does not fit on one line
can appear on subsequent lines, beginning at the continuation margin (column 16,
the default). Every line of the statement, except the last, must have a continuation
character (a non-blank character) immediately after the right margin in column 72.
Declaring tables and views: Your assembler program should include a DECLARE
statement to describe each table and view the program accesses.
Margins: The precompiler option MARGINS allows you to set a left margin, a right
margin, and a continuation margin. The default values for these margins are
columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left
margin, the DB2 precompiler does not recognize the SQL statement. If you use the
default margins, you can place an SQL statement anywhere between columns 2
and 71.
Names: You can use any valid assembler name for a host variable. However, do
not use external entry names or access plan names that begin with 'DSN' or host
variable names that begin with 'SQL'. These names are reserved for DB2.
Statement labels: You can prefix an SQL statement with a label. The first line of an
SQL statement can use a label beginning in the left margin (column 1). If you do
not use a label, leave column 1 blank.
TSO
The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example
of how to acquire storage for the SQLDSECT in a program that runs in a
TSO environment.
You can precede the assembler statements that define host variables with the
statement BEGIN DECLARE SECTION, and follow the assembler statements with
the statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
You can declare host variables in normal assembler style (DC or DS), depending on
the data type and the limitations on that data type. You can specify a value on DC
or DS declarations (for example, DC H'5'). The DB2 precompiler examines only
packed decimal declarations.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
For floating point data types (E, EH, EB, D, DH, and DB), DB2 uses the FLOAT
precompiler option to determine whether the host variable is in IEEE floating point
or System/390 floating point format. If the precompiler option is FLOAT(S390), you
need to define your floating point host variables as E, EH, D, or DH. If the
precompiler option is FLOAT(IEEE), you need to define your floating point host
variables as EB or DB. DB2 converts all floating point input data to System/390
floating point before storing it.
variable-name DC or DS (optionally with a duplication factor of 1), followed by one
of these type and length codes:
HL2 (halfword integer)
FL4 (fullword integer)
PLn'value' (packed decimal)
EL4, EHL4, or EBL4 (short floating point)
DL8, DHL8, or DBL8 (long floating point)
Character host variables: There are three valid forms for character host variables:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 30
on page 114 for the syntax of CLOBs.
Fixed-length strings: variable-name DC or DS CLn
Varying-length strings: variable-name DC or DS HL2,CLn (a halfword length field
followed by the string data)
Graphic host variables: There are three valid forms for graphic host variables:
v Fixed-length graphic strings
v Varying-length graphic strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 30 on page 114 for the syntax of DBCLOBs. In the syntax diagrams, value
denotes one or more DBCS characters, and the symbols < and > represent shift-out
and shift-in characters.
Fixed-length graphic strings: variable-name DC or DS GLn'<value>'
Varying-length graphic strings: variable-name DS or DC HL2'm',GLn'<value>' (a
halfword length field followed by the graphic string)
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
variable-name DC or DS FL4
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
LOB variables and locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators.
If you specify the length of the LOB in terms of KB, MB, or GB, you must leave no
spaces between the length and K, M, or G.
See “Chapter 13. Programming for large objects (LOBs)” on page 229 for a
discussion of how to use these host variables.
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
Table 8 helps you define host variables that receive output from the database. You
can use Table 8 to determine the assembler data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
| Table 8 shows direct conversions between DB2 data types and host data types.
| However, a number of DB2 data types are compatible. When you do assignments
| or comparisons of data that have compatible data types, DB2 does conversions
| between those compatible data types. See Table 1 on page 5 for information on
| compatible data types.
Table 8. SQL data types mapped to typical assembler declarations
SQL Data Type Assembler Equivalent Notes
SMALLINT DS HL2
INTEGER DS F
Host graphic data type: You can use the assembler data type “host graphic” in
SQL statements when the precompiler option GRAPHIC is in effect. However, you
cannot use assembler DBCS literals in SQL statements, even when GRAPHIC is in
effect.
Floating point host variables: All floating point data is stored in DB2 in
System/390 floating point format. However, your host variable data can be in
System/390 floating point format or IEEE floating point format. DB2 uses the
FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
variables are in IEEE floating point format or System/390 floating point format. DB2
does no checking to determine whether the host variable declarations or format of
the host variable contents match the precompiler option. Therefore, you need to
ensure that your floating point host variable types and contents match the
precompiler option.
Special Purpose Assembler Data Types: The locator data types are assembler
language data types as well as SQL data types. You cannot use locators as column
types. For information on how to use these data types, see the following sections:
Table locator “Accessing transition tables in a user-defined function or stored
procedure” on page 279
LOB locators “Chapter 13. Programming for large objects (LOBs)” on page 229
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information on indicator variables, see “Using indicator variables with host
variables” on page 70 or Chapter 2 of DB2 SQL Reference.
Example:
The following figure shows the syntax for a valid indicator variable.
variable-name DC or DS HL2
DSNTIAR syntax
CALL DSNTIAR,(sqlca, message, lrecl),MF=(E,PARM)
MESSAGE DS H,CL(LINES*LRECL)
ORG MESSAGE
MESSAGEL DC AL2(LINES*LRECL)
MESSAGE1 DS CL(LRECL) text line 1
MESSAGE2 DS CL(LRECL) text line 2
.
.
.
where MESSAGE is the name of the message output area, LINES is the
number of lines in the message output area, and LRECL is the length of each
line.
lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see member DSN8FRDO in the data set
prefix.SDSNSAMP.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are also in the data set
prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
A standard declaration includes both a structure definition and a static data area
named 'sqlca'. See Chapter 5 of DB2 SQL Reference for more information about
the INCLUDE statement and Appendix C of DB2 SQL Reference for a complete
description of SQLCA fields.
Unlike the SQLCA, more than one SQLDA can exist in a program, and an SQLDA
can have any valid name. You can code an SQLDA in a C program either directly or
by using the SQL INCLUDE statement. The SQL INCLUDE statement requests a
standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
A standard declaration includes only a structure definition with the name 'sqlda'.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLDA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor, unless you use the precompiler option TWOPASS. You can
place an SQLDA declaration wherever C allows a structure definition. Normal C
scoping rules apply.
Each SQL statement in a C program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear on one line, but the
remainder of the statement can appear on subsequent lines.
# In general, because C is case sensitive, use uppercase letters to enter SQL words.
# However, if you use the FOLD precompiler suboption, DB2 folds lowercase letters
# in SBCS SQL ordinary identifiers to uppercase. For information on host language
# precompiler options, see Table 48 on page 403.
You must keep the case of host variable names consistent throughout the program.
For example, if a host variable name is lowercase in its declaration, it must be
lowercase in all SQL statements. You might code an UPDATE statement in a C
program as follows:
EXEC SQL
UPDATE DSN8710.DEPT
SET MGRNO = :mgr_num
WHERE DEPTNO = :int_dept;
Comments: You can include C comments (/* ... */) within SQL statements wherever
you can use a blank, except between the keywords EXEC and SQL. You can use
single-line comments (starting with //) in C language statements, but not in
embedded SQL. You cannot nest comments.
Declaring tables and views: Your C program should use the statement DECLARE
TABLE to describe each table and view the program accesses. You can use the
DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 8. Generating declarations for your tables
using DCLGEN” on page 95.
You cannot nest SQL INCLUDE statements. Do not use C #include statements to
include SQL statements or C host variable declarations.
Margins: Code SQL statements in columns 1 through 72, unless you specify other
margins to the DB2 precompiler. If EXEC SQL is not within the specified margins,
the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid C name for a host variable, subject to the following
restriction: do not use external entry names or access plan names that begin with 'DSN',
or host variable names that begin with 'SQL'. These names are reserved for DB2.
Nulls and NULs: C and SQL differ in the way they use the word null. The C
language has a null character (NUL), a null pointer (NULL), and a null statement
(just a semicolon). The C NUL is a single character which compares equal to 0. The
C NULL is a special reserved pointer value that does not point to any valid data
object. The SQL null value is a special value that is distinct from all nonnull values
and denotes the absence of a (nonnull) value. In this chapter, NUL is the null
character in C and NULL is the SQL null value.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can precede SQL statements with a label, if you wish.
Trigraphs: Some characters from the C character set are not available on all
keyboards. You can enter these characters into a C source program using a
sequence of three characters called a trigraph. The trigraphs that DB2 supports are
the same as those that the C/370 compiler supports.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be within the scope of any SQL statements that the statement
WHENEVER affects.
Special C considerations:
v Use of the C/370 multi-tasking facility, where multiple tasks execute SQL
statements, causes unpredictable results.
v You must run the DB2 precompiler before running the C preprocessor.
v The DB2 precompiler does not support C preprocessor directives.
v If you use conditional compiler directives that contain C code, either place them
after the first C token in your application program, or include them in the C
program using the #include preprocessor directive.
Please refer to the appropriate C documentation for further information on C
preprocessor directives.
Precede C statements that define the host variables with the statement BEGIN
DECLARE SECTION, and follow the C statements with the statement END
DECLARE SECTION. You can have more than one host variable declaration
section in your program.
The names of host variables must be unique within the program, even if the host
variables are in different blocks, classes, or procedures. You can qualify the host
variable names with a structure name to make them unique.
Host variables must be scalar variables or host structures; they cannot be elements
of vectors or arrays (subscripted variables) unless you use character arrays to
hold strings. You can use an array of indicator variables when you associate the
array with a host structure.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[Syntax diagram: an optional storage class (auto, extern, or static) and qualifier (const or volatile), then one of float, double, int, short (or short int), sqlint32, long (or long int), or decimal(integer[,integer]), followed by variable-name with an optional =expression initializer and a closing semicolon.]
Character host variables: There are four valid forms for character host variables:
v Single-character form
v NUL-terminated character form
v VARCHAR structured form
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 42
on page 129 for the syntax of CLOBs.
[Syntax diagram: an optional storage class (auto, extern, or static), qualifier (const or volatile), and unsigned keyword, then char variable-name with an optional =expression initializer.]
Figure 34. Single-character form
[Syntax diagram (NUL-terminated character form): an optional storage class (auto, extern, or static), qualifier (const or volatile), and unsigned keyword, then char variable-name[length] with an optional =expression initializer.]
Notes:
1. On input, the string contained by the variable must be NUL-terminated.
2. On output, the string is NUL-terminated.
3. A NUL-terminated character host variable maps to a varying length character
string (except for the NUL).
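A NUL-terminated character host variable declaration might look like the following sketch; the name and length are illustrative only:
EXEC SQL BEGIN DECLARE SECTION;
  char deptname[37];     /* NUL-terminated form: up to 36 characters plus the NUL */
EXEC SQL END DECLARE SECTION;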
[Syntax diagram (VARCHAR structured form): an optional storage class (auto, extern, or static) and qualifier (const or volatile), then struct with an optional tag containing short int var-1 followed by an optionally unsigned char var-2[length], then variable-name with an optional ={expression, expression} initializer.]
Notes:
v var-1 and var-2 must be simple variable references. You cannot use them as host
variables.
v You can use the struct tag to define other data areas, which you cannot use as
host variables.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARCHAR {
short len;
char s[10];
} vstring;
Graphic host variables: There are four valid forms for graphic host variables:
v Single-graphic form
v NUL-terminated graphic form
v VARGRAPHIC structured form
v DBCLOBs
You can use the C data type wchar_t to define a host variable that inserts, updates,
deletes, and selects data from GRAPHIC or VARGRAPHIC columns.
The following figures show the syntax for forms other than DBCLOBs. See
Figure 42 on page 129 for the syntax of DBCLOBs.
[Syntax diagram: an optional storage class (auto, extern, or static) and qualifier (const or volatile), then wchar_t variable-name with an optional =expression initializer.]
Figure 37. Single-graphic form
[Syntax diagram (NUL-terminated graphic form): an optional storage class (auto, extern, or static) and qualifier (const or volatile), then wchar_t variable-name[length] with an optional =expression initializer.]
Notes:
1. length must be a decimal integer constant greater than 1 and not greater than
16352.
2. On input, the string in variable-name must be NUL-terminated.
3. On output, the string is NUL-terminated.
4. The NUL-terminated graphic form does not accept single byte characters into
variable-name.
[Syntax diagram (VARGRAPHIC structured form): an optional storage class (auto, extern, or static) and qualifier (const or volatile), then struct with an optional tag containing short int var-1 followed by wchar_t var-2[length], then variable-name with an optional initializer.]
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARGRAPH {
short len;
wchar_t d[10];
} vgraph;
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
[Syntax diagram: an optional storage class and qualifier, then SQL TYPE IS RESULT_SET_LOCATOR VARYING variable-name with an optional = init-value.]
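A declaration might look like the following sketch; the variable name is illustrative:
static volatile SQL TYPE IS RESULT_SET_LOCATOR VARYING loc1;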
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
[Syntax diagram: an optional storage class and qualifier, then SQL TYPE IS TABLE LIKE table-name AS LOCATOR variable-name with an optional init-value.]
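For example, assuming a transition table with the same format as the sample employee table, a declaration might look like this sketch:
SQL TYPE IS TABLE LIKE DSN8710.EMP AS LOCATOR trans_tbl;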
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
[Syntax diagram: an optional storage class (auto, extern, static, or register) and qualifier (const or volatile), then SQL TYPE IS followed by BLOB(length), CLOB(length), DBCLOB(length), BLOB_LOCATOR, CLOB_LOCATOR, or DBCLOB_LOCATOR, and variable-name.]
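Declarations might look like the following sketch; the names and lengths are illustrative:
SQL TYPE IS CLOB(128K) resume_txt;      /* CLOB host variable */
SQL TYPE IS CLOB_LOCATOR resume_loc;    /* CLOB locator       */
SQL TYPE IS BLOB(1M) photo_img;         /* BLOB host variable */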
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
[Syntax diagram: an optional storage class and qualifier, then SQL TYPE IS ROWID variable-name.]
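Host structures: The declaration that the next paragraph describes does not appear in this copy of the book; the following is a sketch consistent with that description, with illustrative field lengths:
struct {
  char c1[3];
  struct {
    short len;
    char data[5];
  } c2;
  char c3[2];
} target;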
In this example, target is the name of a host structure consisting of the c1, c2, and
c3 fields. c1 and c3 are character arrays, and c2 is the host variable equivalent to
the SQL VARCHAR data type. The target host structure can be part of another host
structure but must be the deepest level of the nested structure.
The following figure shows the syntax for valid host structures.
[Syntax diagram: a host structure is declared as an optional storage class (auto, extern, or static) and qualifier (const or volatile), then struct with an optional packed attribute and tag containing member declarations, then variable-name with an optional = expression initializer. Each member can be: float, double, short int, sqlint32, long int, or decimal(integer[,integer]) var-1; a varchar structure (struct with an optional tag containing short int var-3 and a signed or unsigned char var-4[length]); a vargraphic structure (struct with an optional tag containing short int var-6 and wchar_t var-7[length]); SQL TYPE IS ROWID; a LOB data type; unsigned char var-2[length]; or wchar_t var-5[length].]
Table 10 on page 133 helps you define host variables that receive output from the
database. You can use the table to determine the C data type that is equivalent to a
given SQL data type. For example, if you retrieve TIMESTAMP data, you can use
the table to define a suitable host variable in the program that receives the data
value.
| Table 10 on page 133 shows direct conversions between DB2 data types and host
| data types. However, a number of DB2 data types are compatible. When you do
| assignments or comparisons of data that have compatible data types, DB2 does
| conversions between those compatible data types. See Table 1 on page 5 for
| information on compatible data types.
C data types with no SQL equivalent: C supports some data types and storage
classes with no SQL equivalents, for example, register storage class, typedef, and
the pointer.
SQL data types with no C equivalent: If your C compiler does not have a decimal
data type, then there is no exact equivalent for the SQL DECIMAL data type. In this
case, to hold the value of such a variable, you can use:
v An integer or floating-point variable, which converts the value. If you choose
integer, you will lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer, or if you want to preserve a fractional
value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. Hence, when you assign a decimal number to a
floating point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to get a string
representation of a decimal number.
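A retrieval into a character string host variable might look like the following sketch; the table, column, and host variable names are illustrative:
EXEC SQL SELECT CHAR(SALARY)
  INTO :salary_str
  FROM DSN8710.EMP
  WHERE EMPNO = :emp_num;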
Floating point host variables: All floating point data is stored in DB2 in
System/390 floating point format. However, your host variable data can be in
System/390 floating point format or IEEE floating point format. DB2 uses the
FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
variables are in IEEE floating point or System/390 floating point format. DB2 does
no checking to determine whether the contents of a host variable match the
precompiler option. Therefore, you need to ensure that your floating point data
format matches the precompiler option.
Special Purpose C Data Types: The locator data types are C data types as well
as SQL data types. You cannot use locators as column types. For information on
how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing” on page 527
Table locator
“Accessing transition tables in a user-defined function or stored procedure” on page 279
LOB locators
“Chapter 13. Programming for large objects (LOBs)” on page 229
If you assign a string of length n to a NUL-terminated variable with a length that is:
v less than or equal to n, then DB2 inserts the characters into the host variable as
long as the characters fit up to length (n-1) and appends a NUL at the end of the
string. DB2 sets SQLWARN[1] to W and any indicator variable you provide to the
original length of the source string.
v equal to n+1, then DB2 inserts the characters into the host variable and appends
a NUL at the end of the string.
v greater than n+1, then the rules depend on whether the source string is a value
of a fixed-length string column or a varying-length string column. See Chapter 2
of DB2 SQL Reference for more information.
PREPARE or DESCRIBE statements: You cannot use a host variable that is of the
NUL-terminated form in either a PREPARE or DESCRIBE statement.
Truncation: Be careful of truncation. Ensure the host variable you declare can
contain the data and a NUL terminator, if needed. Retrieving a floating-point or
decimal column value into a long integer host variable removes any fractional part
of the value.
In SQL, you can use quotes to delimit identifiers and apostrophes to delimit string
constants. The following examples illustrate the use of apostrophes and quotes in
SQL.
Quotes
SELECT "COL#1" FROM TBL1;
Apostrophes
SELECT COL1 FROM TBL1 WHERE COL2 = 'BELL';
Character data in SQL is distinct from integer data. Character data in C is a subtype
of integer data.
Varying-length strings: For varying-length BIT data, use the VARCHAR structured
form. Some C string manipulation functions process NUL-terminated strings and
others process strings that are not NUL-terminated. The functions that process
NUL-terminated strings cannot handle bit data because they might misinterpret a
NUL character in the data as a NUL terminator.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
Example:
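(The original example is not reproduced in this copy; the following sketch illustrates the technique just described, with illustrative table, column, and variable names.)
short phoneind = -1;               /* a negative indicator assigns NULL  */
char newphone[5] = "";             /* any value here is ignored by DB2   */
EXEC SQL UPDATE DSN8710.EMP
  SET PHONENO = :newphone:phoneind
  WHERE EMPNO = :emp_num;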
The following figure shows the syntax for a valid indicator variable.
[Syntax diagram: an optional storage class (auto, extern, or static) and qualifier (const, volatile, or signed), then short int followed by one or more comma-separated variable names and a closing semicolon.]
Figure 48. Indicator variable
The following figure shows the syntax for a valid indicator array.
[Syntax diagram: an optional storage class (auto, extern, or static) and qualifier (const, volatile, or signed), then short int followed by one or more comma-separated variable-name[dimension] declarators, each with an optional = expression initializer, and a closing semicolon.]
DSNTIAR syntax
rc = dsntiar(&sqlca, &message, &lrecl);
where error_message is the name of the message output area, data_dim is the
number of lines in the message output area, and data_len is the length of each line.
&lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
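The declaration that error_message, data_dim, and data_len refer to is not shown here; a sketch of a typical message output area, assuming it follows the layout described above, is:
#define data_len 132                     /* length of each output line   */
#define data_dim 10                      /* number of output lines       */
struct error_struct {                    /* message output area          */
  short int error_len;
  char      error_text[data_dim][data_len];
} error_message = {data_dim * data_len};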
For C, include:
#pragma linkage (dsntiar,OS)
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
&eib EXEC interface block
&commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Using C++ data types as host variables: You can use class members as host
variables. Class members used as host variables are accessible to any SQL
statement within the class.
Except where noted otherwise, this information pertains to all COBOL compilers
supported by DB2 for OS/390 and z/OS.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
When you use the precompiler option STDSQL(YES), you must declare an
SQLCODE variable. DB2 declares an SQLCA area for you in the
WORKING-STORAGE SECTION. DB2 controls that SQLCA, so your application
programs should not make assumptions about its structure or location.
You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you
can specify a 77 level or a record description entry in the WORKING-STORAGE
SECTION. You can declare a stand-alone SQLCODE variable in either the
WORKING-STORAGE SECTION or LINKAGE SECTION.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. The DB2 SQL INCLUDE statement does not
provide an SQLDA mapping for COBOL. You can define the SQLDA using one of
the following two methods:
v For COBOL programs compiled with any compiler except the OS/VS COBOL
compiler, you can code the SQLDA declarations in your program. For more
information, see “Using dynamic SQL in COBOL” on page 526. You must place
SQLDA declarations in the WORKING-STORAGE SECTION or LINKAGE
SECTION of your program, wherever you can specify a record description entry
in that section.
v For COBOL programs compiled with any COBOL compiler, you can call a
subroutine (written in C, PL/I, or assembler language) that uses the DB2
INCLUDE SQLDA statement to define the SQLDA. The subroutine can also
include SQL statements for any dynamic SQL functions you need. You must use
this method if you compile your program using OS/VS COBOL. The SQLDA
definition includes the POINTER data type, which OS/VS COBOL does not
support. For more information on using dynamic SQL, see “Chapter 23. Coding
dynamic SQL in application programs” on page 497.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor. An SQL statement that uses a host variable must be within the
scope of the statement that declares the variable.
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. If the statement appears in an
IF...THEN set of COBOL statements, leave off the ending period to avoid
inadvertently ending the IF statement. The EXEC and SQL keywords must appear
on one line, but the remainder of the statement can appear on subsequent lines.
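For example, an SQL statement coded inside a COBOL IF construct might look like the following sketch, with illustrative data names; note that no period follows END-EXEC:
IF EMP-COUNT > 0
   EXEC SQL
     UPDATE DSN8710.DEPT
       SET MGRNO = :MGR-NUM
       WHERE DEPTNO = :DEPT-NUM
   END-EXEC
END-IF.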
In addition, you can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
Continuation for SQL statements: The rules for continuing a character string
constant from one line to the next in an SQL statement embedded in a COBOL
program are the same as those for continuing a non-numeric literal in COBOL.
However, you can use either a quotation mark or an apostrophe as the first
nonblank character in area B of the continuation line. The same rule applies for the
continuation of delimited identifiers and does not depend on the string delimiter
option.
Declaring tables and views: Your COBOL program should include the statement
DECLARE TABLE to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION. For
details, see “Chapter 8. Generating declarations for your tables using DCLGEN” on
page 95.
You cannot nest SQL INCLUDE statements. Do not use COBOL verbs to include
SQL statements or COBOL host variable declarations, or use the SQL INCLUDE
statement to include CICS preprocessor related code. In general, use the SQL
INCLUDE only for SQL-related coding.
Margins: Code SQL statements in columns 12 through 72. If EXEC SQL starts
before column 12, the DB2 precompiler does not recognize the SQL statement.
The precompiler option MARGINS allows you to set new left and right margins
between 1 and 80. However, you must not code the statement EXEC SQL before
column 12.
Names: You can use any valid COBOL name for a host variable. Do not use
external entry names or access plan names that begin with 'DSN' and host variable
names that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a section name or unqualified paragraph name in the
PROCEDURE DIVISION.
You can specify the option DYNAM when compiling a COBOL program if
you use VS COBOL II or COBOL/370™, or if you use OS/VS COBOL with
the VS COBOL II or COBOL/370 run-time libraries.
IMS and DB2 share a common alias name, DSNHLI, for the language
interface module. You must do the following when you concatenate your
libraries:
– If you use IMS with the COBOL option DYNAM, be sure to concatenate
the IMS library first.
– If you run your application program only under DB2, be sure to
concatenate the DB2 library first.
If your program runs in a CICS or CAF environment, you must specify the option
NODYNAM when you compile a COBOL program that includes SQL statements;
you cannot use DYNAM.
Because stored procedures use CAF, you must also compile COBOL stored
procedures with the option NODYNAM.
DB2 assigns values to COBOL binary integer host variables as if you had
specified the COBOL compiler option TRUNC(BIN).
v If a COBOL program contains several entry points or is called several times, the
USING clause of the entry statement that executes before the first SQL
statement executes must contain the SQLCA and all linkage section entries that
any SQL statement uses as host variables.
v The REPLACE statement has no effect on SQL statements. It affects only the
COBOL statements that the precompiler generates.
v Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic
characters, reference modification, and subscripts within SQL statements.
v Observe the rules in Chapter 2 of DB2 SQL Reference when you name SQL
# identifiers. However, for COBOL only, the names of SQL identifiers can follow the
# rules for naming COBOL words, if the names do not exceed the allowable length
# for the DB2 object. For example, the name 1ST-TIME is a valid cursor name
# because it is a valid COBOL word, but the name 1ST_TIME is not valid because
# it is not a valid SQL identifier or a valid COBOL word.
v Observe these rules for hyphens:
– Surround hyphens used as subtraction operators with spaces. DB2 usually
interprets a hyphen with no spaces around it as part of a host variable name.
If you pass host variables with address changes into a program more than once,
then the called program must reset SQL-INIT-FLAG. Resetting this flag indicates
that the storage must initialize when the next SQL statement executes. To reset the
flag, insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program’s
PROCEDURE DIVISION, ahead of any executable SQL statements that use the
host variables.
End of Product-sensitive Programming Interface
You can precede COBOL statements that define the host variables with the
statement BEGIN DECLARE SECTION, and follow the statements with the
statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
The names of host variables should be unique within the source data set or
member, even if the host variables are in different blocks, classes, or procedures.
You can qualify the host variable names with a structure name to make them
unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You cannot define host variables, other than indicator variables, as arrays. You can
specify OCCURS only when defining an indicator structure. You cannot specify
OCCURS for any other type of host variable.
Numeric host variables: The following figures show the syntax for valid numeric
host variable declarations.
[Syntax diagram: 01, 77, or level-1 variable-name, optionally USAGE IS, then COMPUTATIONAL-1, COMP-1, COMPUTATIONAL-2, or COMP-2, with an optional VALUE IS numeric-constant clause, ending with a period.]
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
3. COMPUTATIONAL-2 and COMP-2 are equivalent.
[Syntax diagram: 01, 77, or level-1 variable-name, PICTURE (or PIC) IS S9(4), S9999, S9(9), or S999999999, optionally USAGE IS, then BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, or COMP, with an optional VALUE IS numeric-constant clause, ending with a period.]
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. BINARY, COMP, COMPUTATIONAL, COMPUTATIONAL-4, COMP-4,
COMPUTATIONAL-5, and COMP-5 are equivalent.
3. Any specification for scale is ignored.
[Syntax diagram: 01, 77, or level-1 variable-name, PICTURE (or PIC) IS picture-string, optionally USAGE IS, then PACKED-DECIMAL, COMPUTATIONAL-3, COMP-3, or DISPLAY SIGN IS LEADING SEPARATE CHARACTER, with an optional VALUE IS numeric-constant clause, ending with a period.]
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string associated with these types must have the form S9(i)V9(d) (or
S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string associated with SIGN LEADING SEPARATE must have the
form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V with i
instances of 9).
Character host variables: There are three valid forms of character host variables:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 59
on page 151 for the syntax of CLOBs.
[Syntax diagram (fixed-length character string): 01, 77, or level-1 variable-name, PICTURE (or PIC) IS picture-string, optionally USAGE IS DISPLAY, with an optional VALUE IS character-constant clause, ending with a period.]
[Syntax diagram (varying-length character string): 01 or level-1 variable-name followed by a period, then 49 var-1 PICTURE (or PIC) IS S9(4) or S9999, optionally USAGE IS, then BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, or COMP-5, with an optional VALUE IS numeric-constant clause and a period; then 49 var-2 PICTURE (or PIC) IS picture-string, optionally USAGE IS DISPLAY, with an optional VALUE IS character-constant clause and a period.]
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be X(m) (or XX...X, with m
instances of X), with 1 <= m <= 255 for fixed-length strings; for other strings, m
cannot be greater than the maximum size of a varying-length character string.
DB2 uses the full length of the S9(4) variable even though IBM COBOL for MVS
and VM only recognizes values up to 9999. This can cause data truncation
errors when COBOL statements execute and might effectively limit the
maximum length of variable-length character strings to 9999. Consider using the
TRUNC(OPT) or NOTRUNC COBOL compiler option (whichever is appropriate)
to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
# 4. You cannot use an intervening REDEFINES clause at level 49.
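A declaration of a varying-length character string host variable might look like the following sketch; the name and length are illustrative:
01 DEPT-NAME.
   49 DEPT-NAME-LEN  PIC S9(4) USAGE COMP-4.
   49 DEPT-NAME-TEXT PIC X(36).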
Graphic character host variables: There are three valid forms for graphic
character host variables:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 59 on page 151 for the syntax of DBCLOBs.
[Syntax diagram (fixed-length graphic string): 01, 77, or level-1 variable-name, PICTURE (or PIC) IS picture-string, USAGE IS DISPLAY-1, with an optional VALUE IS graphic-constant clause, ending with a period.]
[Syntax diagram (varying-length graphic string): 01 or level-1 variable-name followed by a period, then 49 var-1 PICTURE (or PIC) IS S9(4) or S9999, optionally USAGE IS, then BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, or COMP-5, with an optional VALUE IS numeric-constant clause and a period; then 49 var-2 PICTURE (or PIC) IS picture-string, USAGE IS DISPLAY-1, with an optional VALUE IS graphic-constant clause and a period.]
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be G(m) (or GG...G, with m
instances of G), with 1 <= m <= 127 for fixed-length strings. You can use N in
place of G for COBOL graphic variable declarations. If you use N for graphic
variable declarations, USAGE DISPLAY-1 is optional. For strings other than
fixed-length, m cannot be greater than the maximum size of a varying-length
graphic string.
DB2 uses the full size of the S9(4) variable even though some COBOL
implementations restrict the maximum length of varying-length graphic string to
9999. This can cause data truncation errors when COBOL statements execute
and might effectively limit the maximum length of variable-length graphic strings
to 9999. Consider using the TRUNC(OPT) or NOTRUNC COBOL compiler
option (whichever is appropriate) to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
[Syntax diagram: 01, 77, or level-1 variable-name, optionally USAGE IS, then SQL TYPE IS TABLE LIKE table-name AS LOCATOR, ending with a period.]
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
[Syntax diagram: 01, 77, or level-1 variable-name, optionally USAGE IS, then SQL TYPE IS BLOB(length), CLOB(length), DBCLOB(length), BLOB-LOCATOR, CLOB-LOCATOR, or DBCLOB-LOCATOR, ending with a period.]
A host structure name can be a group name whose subordinate levels name
elementary data items. In the following example, B is the name of a host structure
consisting of the elementary items C1 and C2.
01 A
02 B
03 C1 PICTURE ...
03 C2 PICTURE ...
When you write an SQL statement using a qualified host variable name (perhaps to
identify a field within a structure), use the name of the structure followed by a
period and the name of the field. For example, specify B.C1 rather than C1 OF B or
C1 IN B.
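For example, assuming C1 and C2 are character items that match two columns of the sample department table, an SQL statement that uses the qualified names might look like this sketch:
EXEC SQL
  SELECT DEPTNO, DEPTNAME
    INTO :B.C1, :B.C2
    FROM DSN8710.DEPT
    WHERE DEPTNO = :DEPT-NUM
END-EXEC.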
The precompiler does not recognize host variables or host structures on any
subordinate levels after one of these items:
v A COBOL item that must begin in area A
v Any SQL statement (except SQL INCLUDE)
v Any SQL statement within an included member
When the precompiler encounters one of the above items in a host structure, it
therefore considers the structure to be complete.
Figure 61 on page 153 shows the syntax for valid host structures.
[Syntax diagram: a host structure is declared as level-1 variable-name followed by a period, then one or more level-2 entries. Each level-2 entry is one of: a numeric item with USAGE BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, COMP, PACKED-DECIMAL, COMPUTATIONAL-3, COMP-3, or DISPLAY SIGN LEADING SEPARATE CHARACTER, with an optional VALUE constant; a character or graphic item with PICTURE (or PIC) picture-string and USAGE DISPLAY or DISPLAY-1, with an optional VALUE constant; a varying-length character string coded as a pair of 49-level items (var-2 PICTURE S9(4) with BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, or COMP, then var-3 PICTURE picture-string DISPLAY); or a varying-length graphic string coded as a pair of 49-level items (var-4 PICTURE S9(4) with a binary usage, then var-5 PICTURE picture-string DISPLAY-1). Each entry ends with a period.]
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48.
3. For elements within a structure use any level 02 through 48 (rather than 01 or
77), up to a maximum of two levels.
4. Using a FILLER or optional FILLER item within a host structure declaration can
invalidate the whole structure.
5. You cannot use picture-string for floating point elements but must use it for other
data types.
Table 13 helps you define host variables that receive output from the database. You
can use the table to determine the COBOL data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
| Table 13 shows direct conversions between DB2 data types and host data types.
| However, a number of DB2 data types are compatible. When you do assignments
| or comparisons of data that have compatible data types, DB2 does conversions
| between those compatible data types. See Table 1 on page 5 for information on
| compatible data types.
Table 13. SQL data types mapped to typical COBOL declarations
SMALLINT: S9(4) COMP-4, S9(4) COMP-5, or BINARY.
INTEGER: S9(9) COMP-4, S9(9) COMP-5, or BINARY.
DATE: Fixed-length character string of length n; for example, 01 VAR-NAME PIC X(n). If you are using a date exit routine, n is determined by that routine. Otherwise, n must be at least 10.
TIME: Fixed-length character string of length n; for example, 01 VAR-NAME PIC X(n). If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.
TIMESTAMP: Fixed-length character string of length n; for example, 01 VAR-NAME PIC X(n). n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.
Result set locator: SQL TYPE IS RESULT-SET-LOCATOR. Use this data type only for receiving result sets. Do not use this data type as a column type.
Table locator: SQL TYPE IS TABLE LIKE table-name AS LOCATOR. Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type.
BLOB locator: USAGE IS SQL TYPE IS BLOB-LOCATOR. Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator: USAGE IS SQL TYPE IS CLOB-LOCATOR. Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
SQL data types with no COBOL equivalent: If you are using a COBOL compiler
that does not support decimal numbers of more than 18 digits, use one of the
following data types to hold values of greater than 18 digits:
v A decimal variable with a precision less than or equal to 18, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the source column in the database, then the fractional part of the
value could be truncated.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number could
exceed the maximum value for an integer or, if you want to preserve a fractional
value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. Hence, when you assign a decimal number to a
floating point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
Special Purpose COBOL Data Types: The locator data types are COBOL data
types as well as SQL data types. You cannot use locators as column types. For
information on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing” on page 527
Table locator
“Accessing transition tables in a user-defined function or stored procedure” on page 279
LOB locators
“Chapter 13. Programming for large objects (LOBs)” on page 229
Level 77 data description entries: One or more REDEFINES entries can follow
any level 77 data description entry. However, you cannot use the names in these
entries in SQL statements. Entries with the name FILLER are ignored.
SMALLINT and INTEGER data types: In COBOL, you declare the SMALLINT and
INTEGER data types as a number of decimal digits. DB2 uses the full size of the
integers (in a way that is similar to processing with the COBOL options
TRUNC(OPT) or NOTRUNC) and can place larger values in the host variable than
would be allowed in the specified number of digits in the COBOL declaration.
For small integers that can exceed 9999, use S9(5) COMP. For large integers that
can exceed 999,999,999, use S9(10) COMP-3 to obtain the decimal data type. If
you use COBOL for integers that exceed the COBOL PICTURE, then specify the
column as decimal to ensure that the data types match and perform well.
# Similarly, retrieving a column value with DECIMAL data type into a COBOL decimal
# variable with a lower precision could truncate the value.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. You
can define indicator variables as scalar variables or as array elements in a structure
form or as an array variable using a single level OCCURS clause. For more
information about indicator variables, see “Using indicator variables with host
variables” on page 70.
The following figure shows the syntax for a valid indicator variable.
[Syntax diagram: 01, 77, or level-1 variable-name, PICTURE (or PIC) IS S9(4) or S9999, optionally USAGE IS, then BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, or COMP, with an optional VALUE IS constant clause, ending with a period.]
The following figure shows the syntax for valid indicator array declarations.
[Syntax diagram: level-1 variable-name, PICTURE (or PIC) IS S9(4) or S9999, optionally USAGE IS, then BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, or COMP, an OCCURS dimension TIMES clause, with an optional VALUE IS constant clause, ending with a period.]
DSNTIAR syntax
CALL 'DSNTIAR' USING sqlca message lrecl.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL 'DSNTIAC' USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Where to Place SQL Statements in Your Application: An IBM COBOL for MVS &
VM source data set or member can contain the following elements:
v Multiple programs
v Multiple class definitions, each of which contains multiple methods
You can put SQL statements in only the first program or class in the source data
set or member. However, you can put SQL statements in multiple methods within a
class. If an application consists of multiple data sets or members, each of the data
sets or members can contain SQL statements.
Where to Place the SQLCA, SQLDA, and Host Variable Declarations: You can
put the SQLCA, SQLDA, and SQL host variable declarations in the
WORKING-STORAGE SECTION of a program, class, or method. An SQLCA or
SQLDA in a class WORKING-STORAGE SECTION is global for all the methods of
the class. An SQLCA or SQLDA in a method WORKING-STORAGE SECTION is
local to that method only.
If a class and a method within the class both contain an SQLCA or SQLDA, the
method uses the SQLCA or SQLDA that is local.
DB2 sets the SQLCOD and SQLSTA (or SQLSTATE) values after each SQL
statement executes. An application can check these variable values to determine
whether the last SQL statement was successful. All SQL statements in the program
must be within the scope of the declaration of the SQLCOD and SQLSTA (or
SQLSTATE) variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. DB2 does not support the INCLUDE SQLDA
statement for FORTRAN programs. If present, an error message results.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor.
You can code SQL statements in a FORTRAN program wherever you can place
executable statements. If the SQL statement is within an IF statement, the
precompiler generates any necessary THEN and END IF statements.
Each SQL statement in a FORTRAN program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
You cannot follow an SQL statement with another SQL statement or FORTRAN
statement on the same line.
FORTRAN does not require blanks to delimit words within a statement, but the SQL
language requires blanks. The rules for embedded SQL follow the rules for SQL
syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include FORTRAN comment lines within embedded SQL
statements wherever you can use a blank, except between the keywords EXEC and
SQL. You can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
The DB2 precompiler does not support the exclamation point (!) as a comment
recognition character in FORTRAN programs.
Declaring tables and views: Your FORTRAN program should also include the
statement DECLARE TABLE to describe each table and view the program
accesses.
You can use a FORTRAN character variable in the statements PREPARE and
EXECUTE IMMEDIATE, even if it is fixed-length.
You cannot nest SQL INCLUDE statements. You cannot use the FORTRAN
INCLUDE compiler directive to include SQL statements or FORTRAN host variable
declarations.
Margins: Code the SQL statements between columns 7 through 72, inclusive. If
EXEC SQL starts before the specified left margin, the DB2 precompiler does not
recognize the SQL statement.
Names: You can use any valid FORTRAN name for a host variable. Do not use
external entry names that begin with 'DSN' and host variable names that begin with
'SQL'. These names are reserved for DB2.
Do not use the word DEBUG, except when defining a FORTRAN DEBUG packet.
Do not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to
define variables.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can specify statement numbers for SQL statements in
columns 1 to 5. However, during program preparation, a labelled SQL statement
generates a FORTRAN statement CONTINUE with that label before it generates
the code that executes the SQL statement. Therefore, a labelled SQL statement
should never be the last statement in a DO loop. In addition, you should not label
SQL statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur
before the first executable SQL statement because an error might occur.
WHENEVER statement: The target for the GOTO clause in the SQL statement
WHENEVER must be a label in the FORTRAN source and must refer to a
statement in the same subprogram. The statement WHENEVER only applies to
SQL statements in the same subprogram.
You can precede FORTRAN statements that define the host variables with a BEGIN
DECLARE SECTION statement and follow the statements with an END DECLARE
SECTION statement. You must use the statements BEGIN DECLARE SECTION
and END DECLARE SECTION when you use the precompiler option
STDSQL(YES).
The names of host variables should be unique within the program, even if the host
variables are in different blocks, functions, or subroutines.
When you declare a character host variable, you must not use an expression to
define the length of the character variable. You can use a character host variable
with an undefined length (for example, CHARACTER *(*)). The length of any such
variable is determined when its associated SQL statement executes.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You must be careful when calling subroutines that might change the attributes of a
host variable. Such alteration can cause an error while the program is running. See
Appendix C of DB2 SQL Reference for more information.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[Syntax diagram: INTEGER*2, INTEGER*4 (or INTEGER), REAL (or REAL*4), REAL*8, or DOUBLE PRECISION, followed by variable-name and an optional / numeric-constant / initializer.]
Figure 69. Numeric host variables
Character host variables: The following figure shows the syntax for valid character
host variable declarations other than CLOBs. See Figure 72 for the syntax of
CLOBs.
[Syntax diagram: CHARACTER or CHARACTER*n, followed by variable-name (optionally with *n) and an optional / character-constant / initializer.]
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB and CLOB host variables and locators. See “Chapter 13.
Programming for large objects (LOBs)” on page 229 for a discussion of how to use
these host variables.
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
Table 15 on page 170 helps you define host variables that receive output from the
database. You can use the table to determine the FORTRAN data type that is
equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data,
you can use the table to define a suitable host variable in the program that receives
the data value.
| Table 15 on page 170 shows direct conversions between DB2 data types and host
| data types. However, a number of DB2 data types are compatible. When you do
| assignments or comparisons of data that have compatible data types, DB2 does
| conversions between those compatible data types. See Table 1 on page 5 for
| information on compatible data types.
SQL data types with no FORTRAN equivalent: FORTRAN does not provide an
equivalent for the decimal data type. To hold the value of such a variable, you can
use:
v An integer or floating-point variable, which converts the value. If you choose
integer, however, you lose the fractional part of the number. If the decimal
number can exceed the maximum value for an integer or you want to preserve a
fractional value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a floating
point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
Special-purpose FORTRAN data types: The locator data types are FORTRAN
data types as well as SQL data types. You cannot use locators as column types.
For information on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing” on page 527
LOB locators
“Chapter 13. Programming for large objects (LOBs)” on page 229
| Processing Unicode data: Because FORTRAN does not support graphic data
| types, FORTRAN applications can process only Unicode tables that use UTF-8
| encoding.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
The following figure shows the syntax for a valid indicator variable.
[Syntax diagram: INTEGER*2 variable-name, with an optional / numeric-constant / initializer.]
DSNTIR syntax
CALL DSNTIR ( error-length, message, return-code )
where error-length is the total length of the message output area, message is the
name of the message output area, and return-code is the return code.
return-code
Accepts a return code from DSNTIAR.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must declare an SQLDA before the first SQL statement that references that
data descriptor, unless you use the precompiler option TWOPASS. See Chapter 5
of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
You can code SQL statements in a PL/I program wherever you can use executable
statements.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear all on one line, but the
remainder of the statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for other PL/I statements, except that you must specify
EXEC SQL on one line.
Declaring tables and views: Your PL/I program should also include a DECLARE
TABLE statement to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 8. Generating declarations for your tables
using DCLGEN” on page 95.
Including code: You can use SQL statements or PL/I host variable declarations
from a member of a partitioned data set by using the following SQL statement in the
source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
Margins: Code SQL statements in columns 2 through 72, unless you have
specified other margins to the DB2 precompiler. If EXEC SQL starts before the
specified left margin, the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid PL/I name for a host variable. Do not use external
entry names or access plan names that begin with 'DSN' and host variable names
that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers. IEL0378 messages from the PL/I compiler
identify lines of code without sequence numbers. You can ignore these messages.
Statement labels: You can specify a statement label for executable SQL
statements. However, the statements INCLUDE text-file-name and END DECLARE
SECTION cannot have statement labels.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a label in the PL/I source code and must be within the scope
of any SQL statements that WHENEVER affects.
You can precede PL/I statements that define the host variables with the statement
BEGIN DECLARE SECTION, and follow the statements with the statement END
DECLARE SECTION.
| A colon (:) must precede all host variables in an SQL statement, with the following
| exception. If the SQL statement meets the following conditions, a host variable in
| the SQL statement cannot be preceded by a colon:
| v The SQL statement is an EXECUTE IMMEDIATE or PREPARE statement.
| v The SQL statement is in a program that also contains a DECLARE VARIABLE
| statement.
| v The host variable is part of a string expression, but the host variable is not the
| only component of the string expression.
| The names of host variables should be unique within the program, even if the host
variables are in different blocks or procedures. You can qualify the host variable
names with a structure name to make them unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Host variables must be scalar variables or structures of scalars. You cannot declare
host variables as arrays, although you can use an array of indicator variables when
you associate the array with a host structure.
The precompiler uses only the names and data attributes of the variables; it ignores
the alignment, scope, and storage attributes. Even though the precompiler ignores
alignment, scope, and storage, if you ignore the restrictions on their use, you might
have problems compiling the PL/I source code that the precompiler generates.
These restrictions are as follows:
v A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
v If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
v Host variables can be STATIC, CONTROLLED, BASED, or AUTOMATIC storage
class, or options. However, CICS requires that programs be reentrant.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then BINARY (or BIN) or DECIMAL (or DEC), with FIXED ( precision [,scale] ) or FLOAT ( precision ), followed by optional alignment, scope, and storage attributes.]
Notes:
1. You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED
BIN(31) are all acceptable.
2. You can specify a scale for only DECIMAL FIXED.
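For example, the following sketch declares a halfword and a fullword binary integer, a packed decimal value, and a floating-point value; the names and precisions are illustrative:
DCL IND_VAR  BIN FIXED(15),
    EMP_CNT  BIN FIXED(31),
    SALARY   DEC FIXED(9,2),
    AVG_RATE DEC FLOAT(16);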
Character host variables: The following figure shows the syntax for valid character
host variable declarations, other than CLOBs. See Figure 80 on page 180 for the
syntax of CLOBs.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then CHARACTER (or CHAR) ( length ), optionally VARYING (or VARY), followed by optional alignment, scope, and storage attributes.]
Graphic host variables: The following figure shows the syntax for valid graphic
host variable declarations, other than DBCLOBs. See Figure 80 on page 180 for the
syntax of DBCLOBs.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then GRAPHIC ( length ), optionally VARYING (or VARY), followed by optional alignment, scope, and storage attributes.]
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then SQL TYPE IS TABLE LIKE table-name AS LOCATOR.]
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then SQL TYPE IS BLOB(length), CLOB(length), DBCLOB(length), BLOB_LOCATOR, CLOB_LOCATOR, or DBCLOB_LOCATOR.]
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
[Syntax diagram: DECLARE (or DCL) variable-name, or a parenthesized list of variable names, then SQL TYPE IS ROWID.]
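The host structure declaration that the next paragraph describes does not appear in this copy; the following is a sketch consistent with that description, with illustrative lengths:
DCL 1 A,
      2 B,
        3 C1 CHAR(10),
        3 C2 CHAR(5);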
In this example, B is the name of a host structure consisting of the scalars C1 and
C2.
You can use the structure name as shorthand notation for a list of scalars. You can
qualify a host variable with a structure name (for example, STRUCTURE.FIELD).
Host structures are limited to two levels. You can think of a host structure for DB2
data as a named group of host variables.
You must terminate the host structure variable by ending the declaration with a
semicolon. For example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
The following figure shows the syntax for valid host structures.
[Syntax diagram: each member of a host structure is a variable name, or a parenthesized list of names (var-2), declared as one of: BINARY (or BIN) or DECIMAL (or DEC) with FIXED ( precision [,scale] ) or FLOAT ( precision ); CHARACTER (or CHAR) ( integer ), optionally VARYING (or VARY); GRAPHIC ( integer ), optionally VARYING (or VARY); SQL TYPE IS ROWID; or a LOB data type.]
Table 17 on page 184 helps you define host variables that receive output from the
database. You can use the table to determine the PL/I data type that is equivalent
to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can
use the table to define a suitable host variable in the program that receives the data
value.
| Table 17 on page 184 shows direct conversions between DB2 data types and host
| data types. However, a number of DB2 data types are compatible. When you do
| assignments or comparisons of data that have compatible data types, DB2 does
| conversions between those compatible data types. See Table 1 on page 5 for
| information on compatible data types.
PL/I Data Types with No SQL Equivalent: PL/I supports some data types with no
SQL equivalent (COMPLEX and BIT variables, for example). In most cases, you
can use PL/I statements to convert between the unsupported PL/I data types and
the data types that SQL supports.
# SQL data types with no PL/I equivalent: If the PL/I compiler you are using does
# not support a decimal data type with a precision greater than 15, use the following
# types of variables for decimal data:
v Decimal variables with precision less than or equal to 15, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the source column in the database, then the fractional part of the
value could be truncated.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer or you want to preserve a fractional
value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a floating
point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
# Floating point host variables: All floating point data is stored in DB2 in
# System/390 floating point format. However, your host variable data can be in
# System/390 floating point format or IEEE floating point format. DB2 uses the
# FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
# variables are in IEEE floating point format or System/390 floating point format. If
# you use this option for a PL/I program, you must compile the program using IBM
# Enterprise PL/I for z/OS and OS/390 Version 3 Release 1 or later. DB2 does no
# checking to determine whether the host variable declarations or format of the host
# variable contents match the precompiler option. Therefore, you need to ensure that
# your floating point host variable types and contents match the precompiler option.
Special Purpose PL/I Data Types: The locator data types are PL/I data types as
well as SQL data types. You cannot use locators as column types. For information
on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing” on page 527
PL/I scoping rules: The precompiler does not support PL/I scoping rules.
# Similarly, retrieving a column value with a DECIMAL data type into a PL/I decimal
# variable with a lower precision could truncate the value.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
Example:
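(The original example is not reproduced in this copy; the following sketch illustrates the technique just described, with illustrative table, column, and variable names.)
DCL PHONEIND BIN FIXED(15);
DCL NEWPHONE CHAR(4);
DCL EMP_NUM  CHAR(6);
PHONEIND = -1;                /* a negative indicator assigns NULL */
EXEC SQL UPDATE DSN8710.EMP
  SET PHONENO = :NEWPHONE:PHONEIND
  WHERE EMPNO = :EMP_NUM;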
The following figure shows the syntax for a valid indicator variable.
[Syntax diagram: DECLARE (or DCL) variable-name BINARY (or BIN) FIXED(15), with optional alignment, scope, and storage attributes.]
The following figure shows the syntax for a valid indicator array.
[Syntax diagram: DECLARE (or DCL) variable-name ( dimension ), or a parenthesized list of such names, BINARY (or BIN) FIXED(15), with optional alignment, scope, and storage attributes.]
DSNTIAR syntax
CALL DSNTIAR ( sqlca, message, lrecl );
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC (eib, commarea, sqlca, message, lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful.
See Appendix C of DB2 SQL Reference for information on the fields in the REXX
SQLCA.
A REXX procedure can contain more than one SQLDA. Each SQLDA consists of a
set of REXX variables with a common stem. The stem must be a REXX variable
name that contains no periods and is the same as the value of descriptor-name that
you specify when you use the SQLDA in an SQL statement. DB2 does not support
the INCLUDE SQLDA statement in REXX.
See Appendix C of DB2 SQL Reference for information on the fields in a REXX
SQLDA.
CONNECT
Connects the REXX procedure to a DB2 subsystem. The syntax of CONNECT is:
ADDRESS DSNREXX 'CONNECT' 'subsystem-ID'|REXX-variable
Note: CALL SQLDBS 'ATTACH TO' ssid is equivalent to ADDRESS DSNREXX 'CONNECT' ssid.
EXECSQL
Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
ADDRESS DSNREXX "EXECSQL" "SQL-statement"|REXX-variable
Note: CALL SQLEXEC is equivalent to EXECSQL.
See “Embedding SQL statements in a REXX procedure” on page 192 for more
information.
DISCONNECT
Disconnects the REXX procedure from a DB2 subsystem. You should execute
DISCONNECT to release resources that are held by DB2. The syntax of
DISCONNECT is:
ADDRESS DSNREXX 'DISCONNECT'
Note: CALL SQLDBS 'DETACH' is equivalent to DISCONNECT.
These application programming interfaces are available through the DSNREXX host
command environment. To make DSNREXX available to the application, invoke the
RXSUBCOM function. The syntax is:
RXSUBCOM('ADD'|'DELETE','DSNREXX','DSNREXX')
The ADD function adds DSNREXX to the REXX host command environment table.
The DELETE function deletes DSNREXX from the REXX host command
environment table.
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
/* WHEN DONE WITH */
/* DSNREXX, REMOVE IT. */
Each SQL statement in a REXX procedure must begin with EXECSQL, in either
upper, lower, or mixed case. One of the following items must follow EXECSQL:
v An SQL statement enclosed in single or double quotation marks.
v A REXX variable that contains an SQL statement. The REXX variable must not
be preceded by a colon.
For example, you can use either of the following methods to execute the COMMIT
statement in a REXX procedure:
EXECSQL "COMMIT"
rexxvar="COMMIT"
EXECSQL rexxvar
An SQL statement follows rules that apply to REXX commands. The SQL statement
can optionally end with a semicolon and can be enclosed in single or double
quotation marks, as in the following example:
'EXECSQL COMMIT';
Continuation for SQL statements: SQL statements that span lines follow REXX
rules for statement continuation. You can break the statement into several strings,
each of which fits on a line, and separate the strings with commas or with
concatenation operators followed by commas. For example, either of the following
statements is valid:
EXECSQL ,
"UPDATE DSN8710.DEPT" ,
"SET MGRNO = '000010'" ,
"WHERE DEPTNO = 'D11'"
"EXECSQL " || ,
" UPDATE DSN8710.DEPT " || ,
" SET MGRNO = '000010'" || ,
" WHERE DEPTNO = 'D11'"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You
therefore cannot include externally defined SQL statements in a procedure.
Margins: Like REXX commands, SQL statements can begin and end anywhere on
a line.
Names: You can use any valid REXX name that does not end with a period as a
host variable. However, host variable names should not begin with 'SQL', 'RDI',
'DSN', 'RXSQL', or 'QRW'. Variable names can be at most 64 bytes.
Nulls: A REXX null value and an SQL null value are different. The REXX language
has a null string (a string of length 0) and a null clause (a clause that contains only
blanks and comments). The SQL null value is a special value that is distinct from all
nonnull values and denotes the absence of a value. Assigning a REXX null value to
a DB2 column does not make the column value null.
Statement labels: You can precede an SQL statement with a label, in the same
way that you label REXX commands.
Handling errors and warnings: DB2 does not support the SQL WHENEVER
statement in a REXX procedure. To handle SQL errors and warnings, use the
following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and
the SQLWARN. values after each EXECSQL call. This method does not detect
errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX interface
to DB2, test the REXX RC variable after each EXECSQL call. Table 18 lists the
values of the RC variable.
You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE
keyword instructions to detect negative values of the RC variable and transfer
control to an error routine.
Table 18. REXX return codes after SQL statements
Return code Meaning
0 No SQL warning or error occurred.
+1 An SQL warning occurred.
-1 An SQL error occurred.
Use only the predefined names for cursors and statements. When you associate a
cursor name with a statement name in a DECLARE CURSOR statement, the cursor
name and the statement must have the same number. For example, if you declare
cursor c1, you need to declare it for statement s1:
EXECSQL 'DECLARE C1 CURSOR FOR S1'
A REXX host variable can be a simple or compound variable. DB2 REXX Language
Support evaluates compound variables before DB2 processes SQL statements that
contain the variables. In the following example, the host variable that is passed to
DB2 is :x.1.2:
a=1
b=2
EXECSQL 'OPEN C1 USING :x.a.b'
When you assign input data to a DB2 table column, you can either let DB2
determine the type that your input data represents, or you can use an SQLDA to tell
DB2 the intended type of the input data.
If you do not assign a value to a host variable before you assign the host variable
to a column, DB2 returns an error code.
Table 19. SQL input data types and REXX data formats
SQL data type      SQLTYPE for data type assigned by DB2      REXX input data format
INTEGER 496/497 A string of numerics that does not contain a decimal point or
exponent identifier. The first character can be a plus (+) or minus (−)
sign. The number that is represented must be between -2147483647
and 2147483647, inclusive.
DECIMAL(p,s) 484/485 One of the following formats:
v A string of numerics that contains a decimal point but no exponent
identifier. p represents the precision and s represents the scale of
the decimal number that the string represents. The first character
can be a plus (+) or minus (−) sign.
v A string of numerics that does not contain a decimal point or an
exponent identifier. The first character can be a plus (+) or minus
(−) sign. The number that is represented is less than -2147483647
or greater than 2147483647.
FLOAT 480/481 A string that represents a number in scientific notation. The string
consists of a series of numerics followed by an exponent identifier
(an E or e followed by an optional plus (+) or minus (−) sign and a
series of numerics). The string can begin with a plus (+) or minus (−)
sign.
VARCHAR(n) 448/449 One of the following formats:
v A string of length n, enclosed in single or double quotation marks.
v The character X or x, followed by a string enclosed in single or
double quotation marks. The string within the quotation marks has
a length of 2*n bytes and is the hexadecimal representation of a
string of n characters.
v A string of length n that does not have a numeric or graphic
format, and does not satisfy either of the previous conditions.
VARGRAPHIC(n) 464/465 One of the following formats:
v The character G, g, N, or n, followed by a string enclosed in single
or double quotation marks. The string within the quotation marks
begins with a shift-out character (X'0E') and ends with a shift-in
character (X'0F'). Between the shift-out character and shift-in
character are n double-byte characters.
v The characters GX, Gx, gX, or gx, followed by a string enclosed in
single or double quotation marks. The string within the quotation
marks has a length of 4*n bytes and is the hexadecimal
representation of a string of n double-byte characters.
For example, when DB2 executes the following statements to update the MIDINIT
column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = '000200'"
Suppose that you assign a one-character string to HVMIDINIT and then prepare SQLSTMT
and execute it with :HVMIDINIT as the input host variable. Because the data that is
assigned to HVMIDINIT has a format that fits a character data type, DB2 REXX Language
Support assigns a VARCHAR type to the input data.
Enclosing the string in apostrophes is not adequate because REXX removes the
apostrophes when it assigns a literal to a variable. For example, suppose that you
want to pass the value in host variable stringvar to DB2. The value that you want to
pass is the string '100'. The first thing that you need to do is to assign the string to
the host variable. You might write a REXX command like this:
stringvar = '100'
After the command executes, stringvar contains the characters 100 (without the
apostrophes). DB2 REXX Language Support then passes the numeric value 100 to
DB2, which is not what you intended.
If you instead enclose the value in double quotation marks, as in stringvar = "'100'",
REXX assigns the string '100' to stringvar, including the single quotation marks. DB2
REXX Language Support then passes the string '100' to DB2, which is the desired result.
To indicate the data type of input data to DB2, use an SQLDA. For example,
suppose you want to tell DB2 that the data with which you update the MIDINIT
column of the EMP table is of type CHAR, rather than VARCHAR. You need to set
up an SQLDA that contains a description of a CHAR column, and then prepare and
execute the UPDATE statement using that SQLDA:
INSQLDA.SQLD = 1 /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 453 /* Type of the variable is CHAR, */
/* and the value can be null */
INSQLDA.1.SQLLEN = 1 /* Length of the variable is 1 */
INSQLDA.1.SQLDATA = 'H' /* Value in variable is H */
INSQLDA.1.SQLIND = 0 /* Input variable is not null */
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING" ,
"DESCRIPTOR :INSQLDA"
Because you cannot use the SELECT INTO statement in a REXX procedure, to
retrieve data from a DB2 table you must prepare a SELECT statement, open a
cursor for the prepared statement, and then fetch rows into host variables or an
SQLDA using the cursor. The following example demonstrates how you can retrieve
data from a DB2 table using an SQLDA:
SQLSTMT= ,
'SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,' ,
' WORKDEPT, PHONENO, HIREDATE, JOB,' ,
' EDLEVEL, SEX, BIRTHDATE, SALARY,' ,
' BONUS, COMM' ,
' FROM EMP'
EXECSQL DECLARE C1 CURSOR FOR S1
EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT
EXECSQL OPEN C1
Do Until(SQLCODE ¬= 0)
EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA
If SQLCODE = 0 Then Do
Line = ''
Do I = 1 To OUTSQLDA.SQLD
Line = Line OUTSQLDA.I.SQLDATA
End I
Say Line
End
End
The way that you use indicator variables for input host variables in REXX
procedures is slightly different from the way that you use indicator variables in other
languages. When you want to pass a null value to a DB2 column, in addition to
putting a negative value in an indicator variable, you also need to put a valid value
in the corresponding host variable. For example, to set a value of WORKDEPT in
table EMP to null, use statements like these:
SQLSTMT="UPDATE EMP" ,
"SET WORKDEPT = ?"
HVWORKDEPT='000'
INDWORKDEPT=-1
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING :HVWORKDEPT :INDWORKDEPT"
After you retrieve data from a column that can contain null values, you should
always check the indicator variable that corresponds to the output host variable for
that column. If the indicator variable value is negative, the retrieved value is null, so
you can disregard the value in the host variable.
In the following program, the phone number for employee Haas is selected into
variable HVPhone. After the SELECT statement executes, if no phone number for
employee Haas is found, indicator variable INDPhone contains -1.
'SUBCOM DSNREXX'
IF RC THEN ,
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX
'CONNECT' 'DSN'
SQLSTMT = ,
"SELECT PHONENO FROM DSN8710.EMP WHERE LASTNAME='HAAS'"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
Say 'Phone number for Haas is null.'
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
Constraints are rules that limit the values that you can insert, delete, or update in a
table. There are two types of constraints:
v Table check constraints determine the values that a column can contain. Table
check constraints are discussed in “Using table check constraints”.
v Referential constraints preserve relationships between tables. Referential
constraints are discussed in “Using referential constraints” on page 203.
Triggers are a series of actions that are invoked when a table is updated. Triggers
are discussed in “Chapter 11. Using triggers for active data” on page 209.
For example, you might want to make sure that no salary can be below 15000
dollars:
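A check constraint like the following enforces that rule; the table and column names
here are illustrative:
CREATE TABLE EMPSAL
  (ID     INTEGER NOT NULL,
   SALARY INTEGER CHECK (SALARY >= 15000));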
Using table check constraints makes your programming task easier, because you
do not need to enforce those constraints within application programs or with a
validation routine. Define table check constraints on one or more columns in a table
when that table is created or altered.
Constraint considerations
The syntax of a table check constraint is checked when the constraint is defined,
but the meaning of the constraint is not checked. The following examples show
mistakes that are not caught. Column C1 is defined as INTEGER NOT NULL.
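For instance, constraints like the following are accepted even though the first can never
be satisfied and the second merely repeats the column definition (both constraints are
illustrative of the kinds of mistakes meant):
CHECK (C1 > 5 AND C1 < 2)
CHECK (C1 IS NOT NULL)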
A table check constraint is not checked for consistency with other types of
constraints. For example, a column in a dependent table can have a referential
constraint with a delete rule of SET NULL. You can also define a check constraint
that prohibits nulls in the column. As a result, an attempt to delete a parent row
fails, because setting the dependent row to null violates the check constraint.
Similarly, a table check constraint is not checked for consistency with a validation
routine, which is applied to a table before a check constraint. If the routine requires
a column to be greater than or equal to 10 and a check constraint requires the
same column to be less than 10, table inserts are not possible. Plans and packages
do not need to be rebound after table check constraints are defined on or removed
from a table.
Any constraint defined on columns of a base table applies to the views defined on
that base table.
When you use ALTER TABLE to add a table check constraint to already populated
tables, the enforcement of the check constraint is determined by the value of the
CURRENT RULES special register, as follows (an example appears after this list):
v If the value is STD, the check constraint is enforced immediately when it is
defined. If a row does not conform, the table check constraint is not added to the
table and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its
enforcement is deferred. Because there might be rows in the table that violate
the check constraint, the table is placed in check pending status.
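For example, if you issue the statements dynamically (through SPUFI, for instance), you
can set the register to STD so that DB2 enforces a new check constraint immediately when
you add it; the constraint shown here is illustrative:
SET CURRENT RULES = 'STD';
ALTER TABLE DSN8710.EMP
  ADD CHECK (SALARY >= 15000);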
Table check violations place a table space or partition in check pending status when
any of several conditions exist.
Figure 89. Relationships among tables in the sample application. Arrows point from parent
tables to dependent tables. (The figure shows the DEPT, EMP, ACT, PROJ, PROJACT, and
EMPPROJACT tables connected by referential constraints with delete rules of CASCADE,
SET NULL, and RESTRICT.)
When a table refers to an entity for which there is a master list, it should identify an
occurrence of the entity that actually appears in the master list; otherwise, either the
reference is invalid or the master list is incomplete. Referential constraints enforce
the relationship between a table and a master list.
Figure 90 shows part of the project table with the primary key column indicated.
Figure 91 shows a primary key containing more than one column; the primary key is
a composite key.
Figure 91. A composite primary key. The PROJNO, ACTNO, and ACSTDATE columns are all
parts of the key.
The primary key of a table, if one exists, uniquely identifies each occurrence of an
entity about which the table contains information. The PRIMARY KEY clause of the
CREATE TABLE or ALTER TABLE statements identifies the column or columns of
the primary key. Each identified column must be defined as NOT NULL.
Another way to allow only unique values in a column is to create a table using the
UNIQUE clause of the CREATE TABLE or ALTER TABLE statement. Like the
PRIMARY KEY clause, specifying a UNIQUE clause prevents use of the table until
you create an index to enforce the uniqueness of the key. And if you use the
UNIQUE clause in an ALTER TABLE statement, a unique index must already exist.
For more information about the UNIQUE clause, see Chapter 5 of DB2 SQL
Reference.
A table can have no more than one primary key. A primary key obeys the same
restrictions as do index keys:
v The key can include no more than 64 columns.
v No column can be named twice.
v The sum of the column length attributes cannot be greater than 255.
You define a list of columns as the primary key of a table with the PRIMARY KEY
clause in the CREATE TABLE statement.
To add a primary key to an existing table, use the PRIMARY KEY clause in an
ALTER TABLE statement. In this case, a unique index must already exist.
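For example, statements like the following define a primary key when a table is created
or add one to an existing table; the column list is abbreviated for illustration:
CREATE TABLE DSN8710.PROJ
  (PROJNO   CHAR(6)     NOT NULL,
   PROJNAME VARCHAR(24) NOT NULL,
   DEPTNO   CHAR(3)     NOT NULL,
   PRIMARY KEY (PROJNO));
ALTER TABLE DSN8710.PROJ
  ADD PRIMARY KEY (PROJNO);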
Incomplete definition
If a table is created with a primary key, its primary index is the first unique index
created on its primary key columns, with the same order of columns as the primary
key columns. The columns of the primary index can be in either ascending or
descending order. The table has an incomplete definition until you create an index
on the parent key. This incomplete definition status is recorded as a P in the
TABLESTATUS column of SYSIBM.SYSTABLES. Use of a table with an incomplete
definition is severely restricted: you can drop the table, create the primary index,
and drop or create other indexes; you cannot load the table, insert data, retrieve
data, update data, delete data, or create foreign keys that reference the primary
key.
Because of these restrictions, plan to create the primary index soon after creating
the table. For example, to create the primary index for the project activity table,
issue:
CREATE UNIQUE INDEX XPROJAC1
ON DSN8710.PROJACT (PROJNO, ACTNO, ACSTDATE);
Creating the primary index resets the incomplete definition status and its associated
restrictions. But if you drop the primary index, the table reverts to incomplete
definition status; to reset the status, you must create the primary index or alter the
table to drop the primary key.
If the primary key is added later with ALTER TABLE, a unique index on the key
columns must already exist. If more than one unique index is on those columns,
DB2 chooses one arbitrarily to be the primary index.
You define a list of columns as a foreign key of a table with the FOREIGN KEY
clause in the CREATE TABLE statement.
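For example, a foreign key from the employee table to the department table might be
defined as follows; the column definitions are abbreviated for illustration:
CREATE TABLE DSN8710.EMP
  (EMPNO    CHAR(6) NOT NULL,
   WORKDEPT CHAR(3),
   PRIMARY KEY (EMPNO),
   FOREIGN KEY (WORKDEPT) REFERENCES DSN8710.DEPT
     ON DELETE SET NULL);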
A foreign key can refer to either a unique or a primary key of the parent table. If the
foreign key refers to a non-primary unique key, you must specify the column names
of the key explicitly. If the column names of the key are not specified explicitly, the
default is to refer to the column names of the primary key of the parent table.
The column names you specify identify the columns of the parent key. The privilege
set must include the ALTER or the REFERENCES privilege on the columns of the
parent key. A unique index must exist on the parent key columns of the parent
table.
The constraint name is used in error messages, queries to the catalog, and DROP FOREIGN
KEY statements. Hence, you might want to choose a name yourself if you are experimenting
with your database design and have more than one foreign key that begins with the same
column (otherwise, DB2 generates the name).
You can create an index on the columns of a foreign key in the same way you
create one on any other set of columns. Most often it is not a unique index. If you
do create a unique index on a foreign key, it introduces an additional constraint on
the values of the columns.
To let an index on the foreign key be used on the dependent table for a delete
operation on a parent table, the leading columns of the index on the foreign key
must be identical to and in the same order as the columns in the foreign key.
A foreign key can also be the primary key; then the primary index is also a unique
index on the foreign key. In that case, every row of the parent table has at most
one dependent row. The dependent table might be used to hold information that
pertains to only a few of the occurrences of the entity described by the parent table.
For example, a dependent of the employee table might contain information that
applies only to employees working in a different country.
The primary key can share columns of the foreign key if the first n columns of the
foreign key are the same as the primary key’s columns. Again, the primary index
serves as an index on the foreign key. In the sample project activity table, the
primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the
foreign key on PROJNO. It does not serve as an index on the foreign key on
ACTNO, because ACTNO is not the first column of the index.
When a foreign key is added to a populated table, the table space is put into check
pending status.
DB2 does not allow you to create a cycle in which a delete operation on a table
involves that same table. Enforcing that principle creates rules about adding a
foreign key to a table:
v In a cycle of two tables, neither delete rule can be CASCADE.
v In a cycle of more than two tables, two or more delete rules must not be
CASCADE. For example, in a cycle with three tables, two of the delete rules
must be other than CASCADE. This concept is illustrated in Figure 93 on
page 208.
However, a delete operation on a self-referencing table necessarily involves the same
table, and in that case the delete rule must be CASCADE or NO ACTION.
Figure 93. Valid and invalid delete cycles. The left cycle is valid because two or more delete
rules are not CASCADE. The cycle on the right is invalid because of the two cascading
deletes.
Triggers also move application logic into DB2, which can result in faster application
development and easier maintenance. For example, you can write applications to
control salary changes in the employee table, but each application program that
changes the salary column must include logic to check those changes. A better
method is to define a trigger that controls changes to the salary column. Then DB2
does the checking for any application that modifies salaries.
You create triggers using the CREATE TRIGGER statement. Figure 94 on page 210
shows an example of a CREATE TRIGGER statement.
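A trigger with the characteristics that the rest of this discussion assumes might look
like the following sketch; the column names and the stored procedure name are
illustrative:
CREATE TRIGGER REORDER
  AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
  REFERENCING NEW AS N_ROW
  FOR EACH ROW MODE DB2SQL
  WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
  BEGIN ATOMIC
    CALL ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND, N_ROW.PARTNO);
  END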
When you execute this CREATE TRIGGER statement, DB2 creates a trigger
package called REORDER and associates the trigger package with table PARTS.
DB2 records the timestamp when it creates the trigger. If you define other triggers
on the PARTS table, DB2 uses this timestamp to determine which trigger to activate
first. The trigger is now ready to use.
When you no longer want to use trigger REORDER, you can delete the trigger by
executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger
package named REORDER.
If you drop table PARTS, DB2 also drops trigger REORDER and its trigger
package.
Trigger name: Use a short, ordinary identifier to name your trigger. You can use a
qualifier or let DB2 determine the qualifier. When DB2 creates a trigger package for
the trigger, it uses the qualifier for the collection ID of the trigger package. DB2
uses these rules to determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in the bind option QUALIFIER for the plan or package that
contains the CREATE TRIGGER statement. If the bind command does not
include the QUALIFIER option, DB2 uses the owner of the package or plan.
v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in special register CURRENT SQLID.
Subject table: When you perform an insert, update, or delete operation on this
table, the trigger is activated. You must name a local table in the CREATE
TRIGGER statement. You cannot define a trigger on a catalog table or on a view.
Trigger activation time: The two choices for trigger activation time are NO
CASCADE BEFORE and AFTER. NO CASCADE BEFORE means that the trigger
is activated before DB2 makes any changes to the subject table, and that the
triggered action does not activate any other triggers. AFTER means that the trigger
is activated after DB2 makes changes to the subject table and can activate other
triggers. Triggers with an activation time of NO CASCADE BEFORE are known as
before triggers. Triggers with an activation time of AFTER are known as after
triggers.
A triggering event can also be an update or delete operation that occurs as the
result of a referential constraint with ON DELETE SET NULL or ON DELETE
CASCADE.
Triggers are not activated as the result of updates made to tables by DB2 utilities.
When the triggering event for a trigger is an update operation, the trigger is called
an update trigger. Similarly, triggers for insert operations are called insert triggers,
and triggers for delete operations are called delete triggers.
The SQL statement that performs the triggering SQL operation is called the
triggering SQL statement.
Each triggering event is associated with one subject table and one SQL operation. If
the triggering SQL operation is an update operation, the event can be associated
with specific columns of the subject table. In this case, the trigger is activated only if
the update operation updates any of the specified columns.
For example, the following trigger, PAYROLL1, which invokes user-defined function
named PAYROLL_LOG, is activated only if an update operation is performed on
columns SALARY or BONUS of table PAYROLL:
CREATE TRIGGER PAYROLL1
AFTER UPDATE OF SALARY, BONUS ON PAYROLL
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
END
Granularity: The triggering SQL statement might modify multiple rows in the table.
The granularity of the trigger determines whether the trigger is activated only once
for the triggering SQL statement or once for every row that the SQL statement
modifies. The granularity values are:
v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table.
If the triggering SQL statement modifies no rows, the trigger is not activated.
However, if the triggering SQL statement updates a value in a row to the same
value, the trigger is activated. For example, if an UPDATE trigger is defined on
table COMPANY_STATS, the following SQL statement will activate the trigger:
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT
The trigger is activated once when the triggering SQL statement executes. The
trigger is activated even if the triggering SQL statement modifies no rows.
Triggers with a granularity of FOR EACH ROW are known as row triggers. Triggers
with a granularity of FOR EACH STATEMENT are known as statement triggers.
Statement triggers can only be after triggers.
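For example, a row trigger like the following sketch increments an employee count each
time a row is inserted into the employee table; the statistics table and column are
illustrative:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END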
Trigger NEW_HIRE is activated once for every row inserted into the employee
table.
Transition variables: When you code a row trigger, you might need to refer to the
values of columns in each updated row of the subject table. To do this, specify
transition variables in the REFERENCING clause of your CREATE TRIGGER statement.
Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
(ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
C2 SMALLINT,
C3 SMALLINT,
C4 SMALLINT);
CREATE TABLE S
(ID SMALLINT GENERATED ALWAYS AS IDENTITY,
C1 SMALLINT);
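Suppose that you first execute an INSERT statement against table S, something like the
following sketch, which is reconstructed from the description that follows:
INSERT INTO S (C1) VALUES (5);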
This statement inserts a row into S with a value of 5 for column C1 and a value of 1
for identity column ID. Next, suppose that you execute the following SQL statement,
which activates trigger TR1:
INSERT INTO T (C2)
VALUES (IDENTITY_VAL_LOCAL());
This insert statement, and the subsequent activation of trigger TR1, have the
following results:
v The INSERT statement obtains the most recent value that was assigned to an
identity column (1), and inserts that value into column C2 of table T. 1 is the
value that DB2 inserted into identity column ID of table S.
v When the INSERT statement executes, DB2 inserts the value 100 into identity
column ID of table T.
v The first statement in the body of trigger TR1 inserts the value of transition
variable N.ID (100) into column C3. N.ID is the value that identity column ID
contains after the INSERT statement executes.
Transition tables: If you want to refer to the entire set of rows that a triggering
SQL statement modifies, rather than to individual rows, use a transition table. Like
transition variables, transition tables can appear in the REFERENCING clause of a
CREATE TRIGGER statement. Transition tables are valid for both row triggers and
statement triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause,
capture the values of columns before the triggering SQL statement updates them.
You can define old transition tables for update and delete triggers.
v New transition tables, specified with the NEW TABLE transition-table-name
clause, capture the values of columns after the triggering SQL statement updates
them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another table
exists that has the same name as a transition table, any unqualified reference to
that name in the trigger body points to the transition table. To reference the other
table in the trigger body, you must use the fully qualified table name.
The following example uses a new transition table to capture the set of rows that
are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
AFTER INSERT ON INVOICE
REFERENCING NEW TABLE AS N_TABLE
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
SELECT LARGE_ORDER_ALERT(CUST_NO,
TOTAL_PRICE, DELIVERY_DATE)
FROM N_TABLE WHERE TOTAL_PRICE > 10000;
END
Trigger condition: If you want the triggered action to occur only when certain
conditions are true, code a trigger condition. A trigger condition is similar to a
predicate in a SELECT, except that the trigger condition begins with WHEN, rather
than WHERE. If you do not include a trigger condition in your triggered action, the
trigger body executes every time the trigger is activated.
For a row trigger, DB2 evaluates the trigger condition once for each modified row of
the subject table. For a statement trigger, DB2 evaluates the trigger condition once
for each execution of the triggering SQL statement.
The following example shows a trigger condition that causes the trigger body to
execute only when the number of ordered items is greater than the number of
available items:
CREATE TRIGGER CK_AVAIL
NO CASCADE BEFORE INSERT ON ORDERS
REFERENCING NEW AS NEW_ORDER
FOR EACH ROW MODE DB2SQL
WHEN (NEW_ORDER.QUANTITY >
(SELECT ON_HAND FROM PARTS
WHERE NEW_ORDER.PARTNO=PARTS.PARTNO))
BEGIN ATOMIC
VALUES(ORDER_ERROR(NEW_ORDER.PARTNO,
NEW_ORDER.QUANTITY));
END
Trigger body: In the trigger body, you code the SQL statements that you want to
execute whenever the trigger condition is true. The trigger body begins with BEGIN
ATOMIC and ends with END. You cannot include host variables or parameter
markers in your trigger body. If the trigger body contains a WHERE clause that
references transition variables, the comparison operator cannot be LIKE.
The statements you can use in a trigger body depend on the activation time of the
trigger. Table 21 summarizes which SQL statements you can use in which types of
triggers.
Table 21. Valid SQL statements for triggers and trigger activation times
SQL Statement Valid for Activation Time
Before After
SELECT Yes Yes
VALUES Yes Yes
CALL Yes Yes
SIGNAL SQLSTATE Yes Yes
SET transition-variable Yes No
INSERT No Yes
UPDATE No Yes
DELETE No Yes
The following list provides more detailed information about SQL statements that are
valid in triggers:
v SELECT, VALUES, and CALL
Use the SELECT or VALUES statement in a trigger body to conditionally or
unconditionally invoke a user-defined function. Use the CALL statement to invoke
a stored procedure. See “Invoking stored procedures and user-defined functions
from triggers” on page 217 for more information on invoking user-defined
functions and stored procedures from triggers.
A SELECT statement in the trigger body of a before trigger cannot reference the
subject table.
v SET transition-variable
Because a before trigger executes before DB2 applies the changes of the triggering
SQL statement to the subject table, you can use the SET transition-variable statement
in a before trigger to modify the values that are being inserted or updated. As
Table 21 shows, SET transition-variable is not valid in an after trigger.
If any SQL statement in the trigger body fails during trigger execution, DB2 rolls
back all changes that are made by the triggering SQL statement and the triggered
SQL statements. However, if the trigger body executes actions that are outside of
DB2's control or are not under the same commit coordination as the DB2
subsystem in which the trigger executes, DB2 cannot undo those actions. Examples
of external actions that are not under DB2's control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message
If the trigger executes external actions that are under the same commit coordination
as the DB2 subsystem under which the trigger executes, and an error occurs during
trigger execution, DB2 places the application process that issued the triggering
statement in a must-rollback state. The application must then execute a rollback
operation to roll back those external actions. Examples of external actions that are
under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
Because a before trigger must not modify any table, functions and procedures that
you invoke from a trigger cannot include INSERT, UPDATE, or DELETE statements
that modify the subject table.
Use the VALUES statement to execute a function unconditionally; that is, once for
each execution of a statement trigger or once for each row in a row trigger. In this
example, user-defined function PAYROLL_LOG executes every time an update
operation occurs that activates trigger PAYROLL1:
CREATE TRIGGER PAYROLL1
AFTER UPDATE ON PAYROLL
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(PAYROLL_LOG(USER, 'UPDATE',
CURRENT TIME, CURRENT DATE));
END
Trigger cascading
An SQL operation that a trigger performs might modify the subject table or other
tables with triggers, so DB2 also activates those triggers. A trigger that is activated
as the result of another trigger can be activated at the same level as the original
trigger or at a different level. Two triggers, A and B, are activated at different levels
if trigger B is activated after trigger A is activated and completes before trigger A
completes. If trigger B is activated after trigger A is activated and completes after
trigger A completes, then the triggers are at the same level.
For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and
B is an after trigger. An update to table X causes both trigger A and trigger B to
activate.
v Trigger A updates table X, which has a referential constraint with table Y, which
has trigger B defined on it. The referential constraint causes table Y to be
updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
v Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B is
an update trigger. An update to table X activates trigger A, which contains an
UPDATE statement on table Y in its trigger body. This UPDATE statement
activates trigger B.
v Trigger A calls a stored procedure. The stored procedure contains an INSERT
statement for table X, which has insert trigger B defined on it. When the INSERT
statement on table X executes, trigger B is activated.
When triggers are activated at different levels, it is called trigger cascading. Trigger
cascading can occur only for after triggers because DB2 does not support
cascading of before triggers.
To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels
of cascading of triggers, stored procedures, and user-defined functions. If a trigger,
user-defined function, or stored procedure at the 17th level is activated, DB2 returns
SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading.
However, as with any other SQL error that occurs during trigger execution, if any
action occurs that is outside the control of DB2, that action is not backed out.
You can write a monitor program that issues IFI READS requests to collect DB2
trace information about the levels of cascading of triggers, user-defined functions,
and stored procedures in your programs.
DB2 always activates all before triggers that are defined on a table before the after
triggers that are defined on that table. Within the set of before triggers, and likewise
within the set of after triggers, the activation order is determined by the time at
which each trigger was created.
In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering
event (INSERT), the same subject table (EMP), and the same activation time
(AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run
before the CREATE TRIGGER statement for NEWHIRE2:
CREATE TRIGGER NEWHIRE1
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END
When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first
because NEWHIRE1 was created first. Now suppose that someone drops and
recreates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2, so
the next time an insert operation occurs on EMP, NEWHIRE2 is activated before
NEWHIRE1.
If two row triggers are defined for the same action, the trigger that was created
earlier is activated first for all affected rows. Then the second trigger is activated for
all affected rows. In the previous example, suppose that an INSERT statement with
a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10 rows,
then NEWHIRE2 is activated for all 10 rows.
In general, the following steps occur when triggering SQL statement S1 performs an
insert, update, or delete operation on table T1:
1. DB2 determines the rows of T1 to modify. Call that set of rows M1. The
contents of M1 depend on the SQL operation:
If any constraint is violated, DB2 rolls back all changes that are made by
constraint actions or by statement S1.
5. DB2 processes all after triggers that are defined on T1, and all after triggers on
tables that are modified as the result of referential constraint actions, in order of
creation.
Each after row trigger executes the triggered action once for each row in M1. If
M1 is empty, the triggered action does not execute.
Each after statement trigger executes the triggered action once for each
execution of S1, even if M1 is empty.
If any triggered actions contain SQL insert, update, or delete operations, DB2
repeats steps 1 through 5 for each operation.
For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
Also suppose that an SQL statement deletes the row with department number E21
from DEPT. Because of the constraint, DB2 finds the rows in EMP with a
WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is
equivalent to an update operation on EMP, which has update trigger EMPRAISE.
Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the
constraint action sets WORKDEPT values to null.
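The following discussion assumes a cursor that is declared with a non-correlated
subquery; based on the correlated version shown later, the declaration might look like
this sketch:
DCL C1 CURSOR FOR
  SELECT A1 FROM T1
  WHERE A1 IN (SELECT B1 FROM T2)
  FOR UPDATE OF A1;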
EXEC SQL OPEN C1;
.
.
.
When DB2 executes the FETCH statement that positions cursor C1 for the first
time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table
that contains the two values in column B1 of table T2:
1
2
When DB2 executes the positioned UPDATE statement for the first time, trigger
TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is
deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once,
when the FETCH statement is executed again, DB2 finds the second row of T1,
even though the second row of T2 was deleted. The FETCH statement positions
the cursor to the second row of T1, and the second row of T1 is updated. The
update operation causes the trigger to be activated again, which causes DB2 to
attempt to delete the second row of T2, even though that row was already deleted.
To avoid processing of the second row after it should have been deleted, use a
correlated subquery in the cursor declaration:
DCL C1 CURSOR FOR
SELECT A1 FROM T1 X
WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated
for each FETCH statement. The first time that the FETCH statement executes, it
positions the cursor to the first row of T1. The positioned UPDATE operation
activates the trigger, which deletes the second row of T2. Therefore, when the
FETCH statement executes again, no row is selected, so no update operation or
triggered action occurs.
If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger
execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 2 2
2
After the second row of T1 is updated, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 2 2
3 3 2
3
However, if DB2 updates the second row of T1 first, after the UPDATE statement
and the trigger execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
1 3 3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 3 3
3 2 3
2
Introduction to LOBs
Working with LOBs involves defining the LOBs to DB2, moving the LOB data into
DB2 tables, then using SQL operations to manipulate the data. This chapter
concentrates on manipulating LOB data using SQL statements. For information on
defining LOBs to DB2, see Chapter 5 of DB2 SQL Reference. For information on
how DB2 utilities manipulate LOB data, see Part 2 of DB2 Utility Guide and
Reference.
These are the basic steps for defining LOBs and moving the data into DB2:
1. Define a column of the appropriate LOB type and a row identifier (ROWID)
column in a DB2 table. Define only one ROWID column, even if there are
multiple LOB columns in the table.
The LOB column holds information about the LOB, not the LOB data itself. The
table that contains the LOB information is called the base table. DB2 uses the
ROWID column to locate your LOB data. You need only one ROWID column in
a table that contains one or more LOB columns. You can define the LOB
column and the ROWID column in a CREATE TABLE or ALTER TABLE
statement. If you are adding a LOB column and a ROWID column to an existing
table, you must use two ALTER TABLE statements. Add the ROWID with the
first ALTER TABLE statement and the LOB column with the second.
2. Create a table space and table to hold the LOB data.
The table space and table are called a LOB table space and an auxiliary table.
If your base table is nonpartitioned, you must create one LOB table space and
one auxiliary table for each LOB column. If your base table is partitioned, for
each LOB column, you must create one LOB table space and one auxiliary
table for each partition. For example, if your base table has three partitions, you
must create three LOB table spaces and three auxiliary tables for each LOB
column. Create these objects using the CREATE LOB TABLESPACE and
CREATE AUXILIARY TABLE statements.
3. Create an index on the auxiliary table.
For example, suppose you want to add a resume for each employee to the
employee table. Employee resumes are no more than 5 MB in size. The employee
resumes contain single-byte characters, so you can define the resumes to DB2 as
CLOBs. You therefore need to add a column of data type CLOB with a length of 5
MB to the employee table. If a ROWID column has not been defined in the table,
you need to add the ROWID column before you add the CLOB column. Execute an
ALTER TABLE statement to add the ROWID column, and then execute another
ALTER TABLE statement to add the CLOB column. You might use statements like
this:
ALTER TABLE EMP
ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
ADD EMP_RESUME CLOB(1M);
COMMIT;
Next, you need to define a LOB table space and an auxiliary table to hold the
employee resumes. You also need to define an index on the auxiliary table. You
must define the LOB table space in the same database as the associated base
table. You can use statements like this:
CREATE LOB TABLESPACE RESUMETS
IN DSN8D71A
LOG NO;
COMMIT;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
IN DSN8D71A.RESUMETS
STORES DSN8710.EMP
COLUMN EMP_RESUME;
CREATE UNIQUE INDEX XEMP_RESUME
ON EMP_RESUME_TAB;
COMMIT;
Now that your DB2 objects for the LOB data are defined, you can load your
employee resumes into DB2. To do this in an SQL application, you can define a
host variable to hold the resume, copy the resume data from a file into the host
variable, and then execute an UPDATE statement to copy the data into DB2.
Although the data goes into the auxiliary table, your UPDATE statement specifies
the name of the base table. The C language declaration of the host variable might
be:
SQL TYPE is CLOB (5K) resumedata;
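The UPDATE statement that copies the data might look like this sketch, which uses the
host variables that are described here:
EXEC SQL UPDATE EMP
  SET EMP_RESUME = :resumedata
  WHERE EMPNO = :employeenum;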
In this example, employeenum is a host variable that identifies the employee who is
associated with a resume.
After your LOB data is in DB2, you can write SQL applications to manipulate the
data. You can use most SQL statements with LOBs. For example, you can use
statements like these to extract information about an employee's department from
the resume:
EXEC SQL BEGIN DECLARE SECTION;
long deptInfoBeginLoc;
long deptInfoEndLoc;
SQL TYPE IS CLOB_LOCATOR resume;
SQL TYPE IS CLOB_LOCATOR deptBuffer;
EXEC SQL END DECLARE SECTION;
.
.
.
EXEC SQL FETCH C1 INTO :employeenum, :resume;
.
.
.
These statements use host variables of data type large object locator (LOB locator).
LOB locators let you manipulate LOB data without moving the LOB data into host
variables. By using LOB locators, you need much smaller amounts of memory for
your programs. LOB locators are discussed in “Using LOB locators to save storage”
on page 236.
Sample LOB applications: Table 22 lists the sample programs that DB2 provides
to assist you in writing applications to manipulate LOB data. All programs reside in
data set DSN710.SDSNSAMP.
Table 22. LOB samples shipped with DB2
Member that contains source code     Language     Function
DSNTEJ7 JCL Demonstrates how to create a table with LOB columns, an
auxiliary table, and an auxiliary index. Also demonstrates
how to load LOB data that is 32KB or less into a LOB table
space.
DSN8DLPL C Demonstrates the use of LOB locators and UPDATE
statements to move binary data into a column of type
BLOB.
DSN8DLRV C Demonstrates how to use a locator to manipulate data of
type CLOB.
For instructions on how to prepare and run the sample LOB applications, see Part 2
of DB2 Installation Guide.
You can declare LOB host variables and LOB locators in assembler, C, C++,
COBOL, FORTRAN, and PL/I. For each host variable or locator of SQL type BLOB,
CLOB, or DBCLOB that you declare, DB2 generates an equivalent declaration that
uses host language data types. When you refer to a LOB host variable or locator in
an SQL statement, you must use the variable you specified in the SQL type
declaration. When you refer to the host variable in a host language statement, you
must use the variable that DB2 generates. See “Part 2. Coding SQL in your host
application program” on page 61 for the syntax of LOB declarations in each
language and for host language equivalents for each LOB type.
The following examples show you how to declare LOB host variables in each
supported language. In each table, the left column contains the declaration that you
code in your application program. The right column contains the declaration that
DB2 generates.
49 FILLER
PIC X(1048576-32*32767).
49 FILLER
PIC X(40960000-1250*32767).
01 DBCLOB-VAR USAGE IS 01 DBCLOB-VAR.
SQL TYPE IS DBCLOB(4000K). 02 DBCLOB-VAR-LENGTH
PIC 9(9) COMP.
02 DBCLOB-VAR-DATA.
49 FILLER PIC G(32767)
USAGE DISPLAY-1.2
49 FILLER PIC G(32767)
USAGE DISPLAY-1.
Repeat 1248 times
.
.
.
49 FILLER
PIC X(20480000-1250*32767)
USAGE DISPLAY-1.
01 BLOB-LOC USAGE IS SQL 01 BLOB-LOC PIC S9(9) USAGE IS BINARY.
TYPE IS BLOB-LOCATOR.
01 CLOB-LOC USAGE IS SQL 01 CLOB-LOC PIC S9(9) USAGE IS BINARY.
TYPE IS CLOB-LOCATOR.
01 DBCLOB-LOC USAGE IS SQL 01 DBCLOB-LOC PIC S9(9) USAGE IS BINARY.
TYPE IS DBCLOB-LOCATOR.
Notes:
1. Because the COBOL language allows character declarations of no more than 32767
bytes, for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2
creates multiple host language declarations of 32767 or fewer bytes.
2. Because the COBOL language allows graphic declarations of no more than 32767
double-byte characters, for DBCLOB host variables that are greater than 32767
double-byte characters in length, DB2 creates multiple host language declarations of
32767 or fewer double-byte characters.
Declarations of LOB host variables in PL/I: Table 27 shows PL/I declarations for
some typical LOB types.
Table 27. Examples of PL/I variable declarations
You Declare this Variable DB2 Generates this Variable
DCL BLOB_VAR DCL 1 BLOB_VAR,
SQL TYPE IS BLOB (1M); 2 BLOB_VAR_LENGTH FIXED BINARY(31),
2 BLOB_VAR_DATA,1
3 BLOB_VAR_DATA1(32)
CHARACTER(32767),
3 BLOB_VAR_DATA2
CHARACTER(1048576-32*32767);
DCL CLOB_VAR DCL 1 CLOB_VAR,
SQL TYPE IS CLOB (40000K); 2 CLOB_VAR_LENGTH FIXED BINARY(31),
2 CLOB_VAR_DATA,1
3 CLOB_VAR_DATA1(1250)
CHARACTER(32767),
3 CLOB_VAR_DATA2
CHARACTER(40960000-1250*32767);
DCL DBCLOB_VAR DCL 1 DBCLOB_VAR,
SQL TYPE IS DBCLOB (4000K); 2 DBCLOB_VAR_LENGTH FIXED BINARY(31),
2 DBCLOB_VAR_DATA,2
3 DBCLOB_VAR_DATA1(2500)
GRAPHIC(16383),
3 DBCLOB_VAR_DATA2
GRAPHIC(40960000-2500*16383);
DCL blob_loc DCL blob_loc FIXED BINARY(31);
SQL TYPE IS BLOB_LOCATOR;
DCL clob_loc DCL clob_loc FIXED BINARY(31);
SQL TYPE IS CLOB_LOCATOR;
DCL dbclob_loc SQL TYPE IS DCL dbclob_loc FIXED BINARY(31);
DBCLOB_LOCATOR;
LOB materialization
LOB materialization means that DB2 places a LOB value into contiguous storage in
a data space. Because LOB values can be very large, DB2 avoids materializing
LOB data until absolutely necessary. However, DB2 must materialize LOBs when
your application program:
v Calls a user-defined function with a LOB as an argument
v Moves a LOB into or out of a stored procedure
v Assigns a LOB host variable to a LOB locator host variable
v Converts a LOB from one CCSID to another
Data spaces for LOB materialization: The amount of storage that is used in data
spaces for LOB materialization depends on a number of factors including:
v The size of the LOBs
v The number of LOBs that need to be materialized in a statement
DB2 allocates a certain number of data spaces for LOB materialization. If there is
insufficient space available in a data space for LOB materialization, your application
receives SQLCODE -904.
Although you cannot completely avoid LOB materialization, you can minimize it by
using LOB locators, rather than LOB host variables in your application programs.
See “Using LOB locators to save storage” for information on how to use LOB
locators.
A LOB locator is associated with a LOB value or expression, not with a row in a
DB2 table or a physical storage location in a table space. Therefore, after you
select a LOB value using a locator, the value in the locator normally does not
change until the current unit of work ends. However the value of the LOB itself can
change.
If you want to remove the association between a LOB locator and its value before a
unit of work ends, execute the FREE LOCATOR statement. To keep the association
between a LOB locator and its value after the unit of work ends, execute the HOLD
LOCATOR statement. After you execute a HOLD LOCATOR statement, the locator
keeps the association with the corresponding value until you execute a FREE
LOCATOR statement or the program ends.
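For example, assuming a CLOB locator host variable named HV_DOC_LOCATOR1, as in the
program later in this section, you might code statements like these:
EXEC SQL HOLD LOCATOR :HV_DOC_LOCATOR1;
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1;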
If you execute HOLD LOCATOR or FREE LOCATOR dynamically, you cannot use
EXECUTE IMMEDIATE. For more information on HOLD LOCATOR and FREE
LOCATOR, see Chapter 5 of DB2 SQL Reference.
Because the program uses LOB locators, rather than placing the LOB data into host
variables, no LOB data is moved until the INSERT statement executes. In addition,
no LOB data moves between the client and the server.
/**************************/
/* Declare host variables */ 1
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
char userid[9];
char passwd[19];
long HV_START_DEPTINFO;
long HV_START_EDUC;
long HV_RETURN_CODE;
SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
EXEC SQL END DECLARE SECTION;
/*************************************************/
/* Use a single row select to get the document */ 2
/*************************************************/
EXEC SQL SELECT RESUME
INTO :HV_DOC_LOCATOR1
FROM EMP_RESUME
WHERE EMPNO = '000130'
AND RESUME_FORMAT = 'ascii';
/*****************************************************/
/* Use the POSSTR function to locate the start of */
/* sections "Department Information" and "Education" */ 3
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
EXEC SQL SET :HV_START_EDUC =
POSSTR(:HV_DOC_LOCATOR1, 'Education');
/*******************************************************/
/* Replace Department Information section with nothing */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR2 =
SUBSTR(:HV_DOC_LOCATOR1, 1, :HV_START_DEPTINFO -1)
|| SUBSTR (:HV_DOC_LOCATOR1, :HV_START_EDUC);
/*******************************************************/
/* Associate a new locator with the Department */
/* Information section */
/*******************************************************/
EXEC SQL SET :HV_NEW_SECTION_LOCATOR =
SUBSTR(:HV_DOC_LOCATOR1, :HV_START_DEPTINFO,
:HV_START_EDUC -:HV_START_DEPTINFO);
/*******************************************************/
/* Append the Department Information to the end */
/* of the resume */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
:HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
/*******************************************************/
/* Store the modified resume in the table. This is */ 4
/* where the LOB data really moves. */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME VALUES ('A00130', 'ascii',
:HV_DOC_LOCATOR3, DEFAULT);
/*********************/
/* Free the locators */ 5
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
When you use LOB locators to retrieve data from columns that can contain null
values, define indicator variables for the LOB locators, and check the indicator
variables after you fetch data into the LOB locators. If an indicator variable is null
after a fetch operation, you cannot use the value in the LOB locator.
| This chapter contains information that applies to all user-defined functions and
| specific information about user-defined functions in languages other than Java™.
| For information on writing, preparing, and running Java user-defined functions, see
| DB2 Application Programming Guide and Reference for Java.
The user-defined function's definer and invoker determine that this new user-defined
function should have these characteristics (a sketch of a corresponding CREATE FUNCTION
statement follows the list):
v The user-defined function name is CALC_BONUS.
v The two input fields are of type DECIMAL(9,2).
v The output field is of type DECIMAL(9,2).
v The program for the user-defined function is written in COBOL and has a load
module name of CBONUS.
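Given these characteristics, the definer might register the function with a CREATE
FUNCTION statement along the following lines (additional clauses, such as those that
describe SQL access and determinism, are omitted from this sketch):
CREATE FUNCTION CALC_BONUS (DECIMAL(9,2), DECIMAL(9,2))
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'CBONUS'
  LANGUAGE COBOL
  PARAMETER STYLE DB2SQL;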
User-defined function invokers write and prepare application programs that invoke
CALC_BONUS. An invoker might write a statement like this, which uses the
user-defined function to update the BONUS field in the employee table:
UPDATE EMP
SET BONUS = CALC_BONUS(SALARY,COMM);
Member DSN8DUWC contains a client program that shows you how to invoke the
WEATHER user-defined table function.
Member DSNTEJ2U shows you how to define and prepare the sample user-defined
functions and the client program.
The user-defined function takes two integer values as input. The output from the
user-defined function is of type integer. The user-defined function is in the MATH
schema, is written in assembler, and contains no SQL statements. This CREATE
FUNCTION statement defines the user-defined function:
Suppose you want the FINDSTRING user-defined function to work on BLOB data
types, as well as CLOB types. You can define another instance of the user-defined
function that specifies a BLOB type as input:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC FINDSTRINBLOB
EXTERNAL NAME 'FNDBLOB'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED;
The user-defined function is written in COBOL, uses SQL only to perform queries,
always produces the same output for given input, and should not execute as a
parallel task. The program is reentrant, and successive invocations of the
user-defined function share information. You expect an invocation of the
user-defined function to return about 20 rows.
Your user-defined function can also access remote data using the following
methods:
v DB2 private protocol access using three-part names or aliases for three-part
names
v DRDA access using three-part names or aliases for three-part names
v DRDA access using CONNECT or SET CONNECTION statements
The user-defined function and the application that calls it can access the same
remote site if both use the same protocol.
You can write an external user-defined function in assembler, C, C++, COBOL, PL/I,
or Java. User-defined functions that are written in COBOL can include
object-oriented extensions, just as other DB2 COBOL programs can. For
information on writing Java user-defined functions, see DB2 Application
Programming Guide and Reference for Java.
The following sections include additional information that you need when you write
a user-defined function:
v “Restrictions on user-defined function programs” on page 249
v “Coding your user-defined function as a main program or as a subprogram” on
page 249
v “Parallelism considerations” on page 249
v “Passing parameter values to and from a user-defined function” on page 251
v “Examples of passing parameters in a user-defined function” on page 263
v “Using special registers in a user-defined function” on page 276
v “Using a scratchpad in a user-defined function” on page 277
v “Accessing transition tables in a user-defined function or stored procedure” on
page 279
If you code your user-defined function as a subprogram and manage the storage
and files yourself, you can get better performance. The user-defined function should
always free any allocated storage before it exits. To keep data between invocations
of the user-defined function, use a scratchpad.
You must code a user-defined table function that accesses external resources as a
subprogram. Also ensure that the definer specifies the EXTERNAL ACTION
parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program
variables for a subprogram persist between invocations of the user-defined function,
and use of the EXTERNAL ACTION parameter ensures that the user-defined
function stays in the same address space from one invocation to another.
Parallelism considerations
If the definer specifies the parameter ALLOW PARALLEL in the definition of a
user-defined scalar function, and the invoking SQL statement runs in parallel, the
function can run under a parallel task. DB2 executes a separate instance of the
user-defined function for each parallel task. When you write your function program,
you need to understand how the following parameter values interact with ALLOW
PARALLEL so that you can avoid unexpected results:
v SCRATCHPAD
When an SQL statement invokes a user-defined function that is defined with the
ALLOW PARALLEL parameter, DB2 allocates one scratchpad for each parallel
task of each reference to the function. This can lead to unpredictable or incorrect
results.
When the query is executed with no parallelism, DB2 invokes COUNTER once
for each row of table T1, and there is one scratchpad for counter, which DB2
initializes the first time that COUNTER executes. COUNTER returns 1 the first
time it executes, 2 the second time, and so on. The result table for the query is
therefore:
1
2
3
4
5
6
7
8
9
10
Now suppose that the query is run with parallelism, and DB2 creates three
parallel tasks. DB2 executes the predicate WHERE C1 = COUNTER() for each
parallel task. This means that each parallel task invokes its own instance of the
user-defined function and has its own scratchpad. DB2 initializes the scratchpad
to zero on the first call to the user-defined function for each parallel task.
Figure 97 on page 252 shows the structure of the parameter list that DB2 passes to
a user-defined function. An explanation of each parameter follows.
Input parameter values: DB2 obtains the input parameters from the invoker's
parameter list, and your user-defined function receives those parameters according
to the rules of the host language in which the user-defined function is written. The
number of input parameters is the same as the number of parameters in the
user-defined function invocation. If one of the parameters in the function invocation
is an expression, DB2 evaluates the expression and assigns the result of the
expression to the parameter.
Table 31. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
CLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
DBCLOB(n) If m (=2*n) <= 65534:
var DS 0FL4
var_length DS FL4
var_data DS CLm
If m > 65534:
var DS 0FL4
var_length DS FL4
var_data DS CL65534
ORG var_data+(m-65534)
ROWID DS HL2,CL40
Table 32. Compatible C language declarations for LOBs, ROWIDs, and locators
SQL data type in definition C declaration
TABLE LOCATOR unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
Table 33. Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC X(n).
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC X(32767).
49 FILLER
PIC X(32767).
.
.
.
49 FILLER
PIC X(mod(n,32767)).
DBCLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC G(n)
USAGE DISPLAY-1.
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
.
.
.
49 FILLER
PIC G(mod(n,32767))
USAGE DISPLAY-1.
ROWID 01 var.
49 var-LEN PIC 9(4)
USAGE COMP.
49 var-DATA PIC X(40).
Table 34. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
Result parameters: Set these values in your user-defined function before exiting.
For a user-defined scalar function, you return one result parameter. For a
user-defined table function, you return the same number of parameters as columns
in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2
allocates a buffer for each result parameter value and passes the buffer address to
the user-defined function. Your user-defined function places each result parameter
value in its buffer. You must ensure that the length of the value you place in each
buffer does not exceed the length of the corresponding result parameter.
See “Passing parameter values to and from a user-defined function” on page 251 to
determine the host data type to use for each result parameter value. If the CREATE
FUNCTION statement contains a CAST FROM clause, use a data type that
corresponds to the SQL data type in the CAST FROM clause. Otherwise, use a
data type that corresponds to the SQL data type in the RETURNS or RETURNS
TABLE clause.
To improve performance for user-defined table functions that return many columns,
you can pass values for a subset of columns to the invoker. For example, a
user-defined table function might be defined to return 100 columns, but the invoker
needs values for only two columns. Use the DBINFO parameter to indicate to DB2
the columns for which you will return values. Then return values for only those
columns. See the explanation of DBINFO below for information on how to indicate
the columns of interest.
Input parameter indicators: These are SMALLINT values, which DB2 sets before
it passes control to the user-defined function. You use the indicators to determine
whether the corresponding input parameters are null. The number and order of the
indicators are the same as the number and order of the input parameters. On entry
to the user-defined function, each indicator contains one of these values:
0 The input parameter value is not null.
negative The input parameter value is null.
Code the user-defined function to check all indicators for null values unless the
user-defined function is defined with RETURNS NULL ON NULL INPUT. A
user-defined function defined with RETURNS NULL ON NULL INPUT executes only
if all input parameters are not null.
Result indicators: These are SMALLINT values, which you must set before the
user-defined function ends to indicate to the invoking program whether each result
parameter value is null. A user-defined scalar function has one result indicator. A
user-defined table function has the same number of result indicators as the number
of result parameters. The order of the result indicators is the same as the order of
the result parameters. Set each result indicator to one of these values:
0 or positive The result parameter is not null.
negative The result parameter is null.
SQLSTATE value: This is a CHAR(5) value, which you must set before the
user-defined function ends. The user-defined function can return one of these
SQLSTATE values:
00000 Use this value to indicate that the user-defined function executed
without any warnings or errors.
01Hxx Use these values to indicate that the user-defined function detected
a warning condition. xx can be any two single-byte alphanumeric
characters. DB2 returns SQLCODE +462 if the user-defined
function sets the SQLSTATE to 01Hxx.
02000 Use this value to indicate that no more rows are to be
returned from a user-defined table function.
38yxx Use these values to indicate that the user-defined function detected
an error condition.
When your user-defined function returns an SQLSTATE of 38yxx other than one of
the four listed above, DB2 returns SQLCODE -443.
If both the user-defined function and DB2 set an SQLSTATE value, DB2 returns its
SQLSTATE value to the invoker.
User-defined function name: DB2 sets this value in the parameter list before the
user-defined function executes. This value is VARCHAR(137): 8 bytes for the
schema name, 1 byte for a period, and 128 bytes for the user-defined function
name. If you use the same code to implement multiple versions of a user-defined
function, you can use this parameter to determine which version of the function the
invoker wants to execute.
Specific name: DB2 sets this value in the parameter list before the user-defined
function executes. This value is VARCHAR(128) and is either the specific name
from the CREATE FUNCTION statement or a specific name that DB2 generated. If
you use the same code to implement multiple versions of a user-defined function,
you can use this parameter to determine which version of the function the invoker
wants to execute.
You must ensure that your user-defined function does not write more bytes to the
scratchpad than the scratchpad length.
Call type: For a user-defined scalar function, if the definer specified FINAL CALL in
the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined
function. For a user-defined table function, DB2 always passes this parameter to
the user-defined function.
On entry to a user-defined scalar function, the call type parameter has one of the
following values:
-1 This is the first call to the user-defined function for the SQL statement. For
a first call, all input parameters are passed to the user-defined function. In
addition, the scratchpad, if allocated, is set to binary zeros.
0 This is a normal call. For a normal call, all the input parameters are passed
to the user-defined function. If a scratchpad is also passed, DB2 does not
modify it.
1 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application explicitly closes
a cursor. When a value of 1 is passed to a user-defined function, the
user-defined function can execute SQL statements.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any
close cursor statements during this type of final call, the user-defined
function should tolerate SQLCODE -501 because DB2 might have already
closed cursors before the final call.
During the first call, your user-defined scalar function should acquire any system
resources it needs. During the final call, the user-defined scalar function should
release any resources it acquired during the first call. The user-defined scalar
function should return a result value only during normal calls. DB2 ignores any
result value that the function returns during a final call.
If an invoking SQL statement contains more than one user-defined scalar function,
and one of those user-defined functions returns an error SQLSTATE, DB2 invokes
all of the user-defined functions for a final call, and the invoking SQL statement
receives the SQLSTATE of the first user-defined function with an error.
On entry to a user-defined table function, the call type parameter has one of the
following values:
-2 This is the first call to the user-defined function for the SQL statement. A
first call occurs only if the FINAL CALL keyword is specified in the
user-defined function definition. For a first call, all input parameters are
passed to the user-defined function. In addition, the scratchpad, if allocated,
is set to binary zeros.
-1 This is the open call to the user-defined function by an SQL statement. If
FINAL CALL is not specified in the user-defined function definition, all input
parameters are passed to the user-defined function, and the scratchpad, if
allocated, is set to binary zeros during the open call. If FINAL CALL is
specified for the user-defined function, DB2 does not modify the scratchpad.
0 This is a fetch call to the user-defined function by an SQL statement. For a
fetch call, all input parameters are passed to the user-defined function. If a
scratchpad is also passed, DB2 does not modify it.
1 This is a close call. For a close call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
2 This is a final call. This type of final call occurs only if FINAL CALL is
specified in the user-defined function definition. For a final call, no input
parameters are passed to the user-defined function. If a scratchpad is also
passed, DB2 does not modify it.
This type of final call occurs when the invoking application executes a
CLOSE CURSOR statement.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any
close cursor statements during this type of final call, the user-defined
function should tolerate SQLCODE -501 because DB2 might have already
closed cursors before the final call.
During the close call, a user-defined table function can set the SQLSTATE and
diagnostic message area.
CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES *
*******************************************************************
R1 EQU 1 REGISTER 1
R7 EQU 7 REGISTER 7
PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
LTORG , PLACE LITERAL POOL HERE
PROGAREA DSECT
ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
PARM1 DS F PARAMETER 1
PARM2 DS F PARAMETER 2
RESULT DS CL9 RESULT
F_IND1 DS H INDICATOR FOR PARAMETER 1
F_IND2 DS H INDICATOR FOR PARAMETER 2
F_INDR DS H INDICATOR FOR RESULT
C or C++:
For subprograms, you pass the parameters directly. For main programs, you use
the standard argc and argv variables to access the input and output parameters:
v The argv variable contains an array of pointers to the parameters that are passed
to the user-defined function. All string parameters that are passed back to DB2
must be null terminated.
– argv[0] contains the address of the load module name for the user-defined
function.
– argv[1] through argv[n] contain the addresses of parameters 1 through n.
v The argc variable contains the number of parameters that are passed to the
external user-defined function, including argv[0].
Figure 99 on page 266 shows the parameter conventions for a user-defined scalar
function that is written as a main program that receives two parameters and returns
one result.
main(argc,argv)
int argc;
char *argv[];
{
/***************************************************/
/* Assume that the user-defined function invocation*/
/* included 2 input parameters in the parameter */
/* list. Also assume that the definition includes */
/* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
/* so DB2 passes the scratchpad, calltype, and */
/* dbinfo parameters. */
/* The argv vector contains these entries: */
/* argv[0] 1 load module name */
/* argv[1-2] 2 input parms */
/* argv[3] 1 result parm */
/* argv[4-5] 2 null indicators */
/* argv[6] 1 result null indicator */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified func name */
/* argv[9] 1 specific func name */
/* argv[10] 1 diagnostic string */
/* argv[11] 1 scratchpad */
/* argv[12] 1 call type */
/* argv[13] + 1 dbinfo */
/* ------ */
/* 14 for the argc variable */
/***************************************************/
if (argc != 14)
{
.
.
.
/**********************************************************/
/* This section would contain the code executed if the */
/* user-defined function is invoked with the wrong number */
/* of parameters. */
/**********************************************************/
}
Figure 99. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 1 of 2)
/***************************************************/
/* Access the null indicator for the first */
/* parameter on the invoked user-defined function */
/* as follows: */
/***************************************************/
short int ind1;
ind1 = *(short int *) argv[4];
/***************************************************/
/* Use the expression below to assign */
/* 'xxxxx' to the SQLSTATE returned to caller on */
/* the SQL statement that contains the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[7],"xxxxx");
/***************************************************/
/* Obtain the value of the qualified function */
/* name with this expression. */
/***************************************************/
char f_func[28];
strcpy(f_func,argv[8]);
/***************************************************/
/* Obtain the value of the specific function */
/* name with this expression. */
/***************************************************/
char f_spec[19];
strcpy(f_spec,argv[9]);
/***************************************************/
/* Use the expression below to assign */
/* 'yyyyyyyy' to the diagnostic string returned */
/* in the SQLCA associated with the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[10],"yyyyyyyy");
/***************************************************/
/* Use the expression below to assign the */
/* result of the function. */
/***************************************************/
char l_result[11];
strcpy(argv[3],l_result);
.
.
.
Figure 99. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 2 of 2)
Figure 100 on page 268 shows the parameter conventions for a user-defined scalar
function written as a C subprogram that receives two parameters and returns one
result.
Figure 100. How a C language user-defined function that is written as a subprogram receives
parameters (Part 1 of 2)
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *f_ind1;
l_ind2 = *f_ind2;
strcpy(ludf_sqlstate,udf_sqlstate);
strcpy(ludf_fname,udf_fname);
strcpy(ludf_specname,udf_specname);
l_udf_call_type = *udf_call_type;
strcpy(ludf_msgtext,udf_msgtext);
memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
Figure 100. How a C language user-defined function that is written as a subprogram receives
parameters (Part 2 of 2)
Figure 101 on page 270 shows the parameter conventions for a user-defined scalar
function that is written as a C++ subprogram that receives two parameters and
returns one result. This example demonstrates that you must use an extern "C"
modifier to indicate that you want the C++ subprogram to receive parameters
according to the C linkage convention. This modifier is necessary because the
CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function,
passes parameters using the C linkage convention.
Figure 101. How a C++ user-defined function that is written as a subprogram receives
parameters (Part 1 of 2)
Figure 101. How a C++ user-defined function that is written as a subprogram receives
parameters (Part 2 of 2)
COBOL: Figure 102 on page 272 shows the parameter conventions for a
user-defined table function that is written as a main program that receives two
parameters and returns two results. For a COBOL user-defined function that is a
subprogram, the conventions are the same.
DATA DIVISION.
.
.
.
LINKAGE SECTION.
*********************************************************
* Declare each of the parameters *
*********************************************************
01 UDFPARM1 PIC S9(9) USAGE COMP.
01 UDFPARM2 PIC X(10).
.
.
.
*********************************************************
* Declare these variables for result parameters *
*********************************************************
01 UDFRESULT1 PIC X(10).
01 UDFRESULT2 PIC X(10).
.
.
.
*********************************************************
* Declare a null indicator for each parameter *
*********************************************************
01 UDF-IND1 PIC S9(4) USAGE COMP.
01 UDF-IND2 PIC S9(4) USAGE COMP.
.
.
.
*********************************************************
* Declare a null indicator for result parameter *
*********************************************************
01 UDF-RIND1 PIC S9(4) USAGE COMP.
01 UDF-RIND2 PIC S9(4) USAGE COMP.
.
.
.
*********************************************************
* Declare the SQLSTATE that can be set by the *
* user-defined function *
*********************************************************
01 UDF-SQLSTATE PIC X(5).
*********************************************************
* Declare the qualified function name *
*********************************************************
01 UDF-FUNC.
49 UDF-FUNC-LEN PIC 9(4) USAGE BINARY.
49 UDF-FUNC-TEXT PIC X(137).
*********************************************************
* Declare the specific function name *
*********************************************************
01 UDF-SPEC.
49 UDF-SPEC-LEN PIC 9(4) USAGE BINARY.
49 UDF-SPEC-TEXT PIC X(128).
PL/I: Figure 103 on page 275 shows the parameter conventions for a user-defined
scalar function that is written as a main program that receives two parameters and
returns one result. For a PL/I user-defined function that is a subprogram, the
conventions are the same.
Table 35 shows information you need when you use special registers in a
user-defined function.
| Table 35. Characteristics of special registers in a user-defined function. For each register,
| the list shows the initial value when the INHERIT SPECIAL REGISTERS option is specified, the
| initial value when the DEFAULT SPECIAL REGISTERS option is specified, and whether the
| function can use a SET statement to modify the register.
| CURRENT APPLICATION ENCODING SCHEME
|   INHERIT: The value of bind option ENCODING for the user-defined function package (note 1)
|   DEFAULT: The value of bind option ENCODING for the user-defined function package (note 1)
|   Can use SET to modify: Yes
| CURRENT DATE
|   INHERIT: New value for each SQL statement in the user-defined function package (note 2)
|   DEFAULT: New value for each SQL statement in the user-defined function package (note 2)
|   Can use SET to modify: Not applicable (note 5)
| CURRENT DEGREE
|   INHERIT: Inherited from invoker (note 3)
|   DEFAULT: The value of field CURRENT DEGREE on installation panel DSNTIP4
|   Can use SET to modify: Yes
| CURRENT LOCALE LC_CTYPE
|   INHERIT: Inherited from invoker
|   DEFAULT: The value of field CURRENT DEGREE on installation panel DSNTIP4
|   Can use SET to modify: Yes
| CURRENT MEMBER
|   INHERIT: New value for each SET host-variable = CURRENT MEMBER statement
|   DEFAULT: New value for each SET host-variable = CURRENT MEMBER statement
|   Can use SET to modify: No
| CURRENT OPTIMIZATION HINT
|   INHERIT: The value of bind option OPTHINT for the user-defined function package or
|   inherited from invoker (note 6)
|   DEFAULT: The value of bind option OPTHINT for the user-defined function package
|   Can use SET to modify: Yes
| CURRENT PACKAGESET
|   INHERIT: Inherited from invoker (note 4)
|   DEFAULT: Inherited from invoker (note 4)
|   Can use SET to modify: Yes
| CURRENT PATH
|   INHERIT: The value of bind option PATH for the user-defined function package or inherited
|   from invoker (note 6)
|   DEFAULT: The value of bind option PATH for the user-defined function package
|   Can use SET to modify: Yes
| CURRENT PRECISION
|   INHERIT: Inherited from invoker
|   DEFAULT: The value of field DECIMAL ARITHMETIC on installation panel DSNTIP4
|   Can use SET to modify: Yes
| CURRENT RULES
|   INHERIT: Inherited from invoker
|   DEFAULT: The value of bind option SQLRULES for the user-defined function package
|   Can use SET to modify: Yes
| CURRENT SERVER
|   INHERIT: Inherited from invoker
|   DEFAULT: Inherited from invoker
|   Can use SET to modify: Yes
The scratchpad consists of a 4-byte length field, followed by the scratchpad area.
The definer can specify the length of the scratchpad area in the CREATE
FUNCTION statement. The specified length does not include the length field. The
default size is 100 bytes. DB2 initializes the scratchpad for each function to binary
zeros at the beginning of execution for each subquery of an SQL statement and
does not examine or change the content thereafter. On each invocation of the
user-defined function, DB2 passes the scratchpad to the user-defined function. You
can therefore use the scratchpad to preserve information between invocations of a
reentrant user-defined function.
Figure 104 on page 279 demonstrates how to enter information in a scratchpad for
a user-defined function defined like this:
CREATE FUNCTION COUNTER()
RETURNS INT
SCRATCHPAD
FENCED
NOT DETERMINISTIC
NO SQL
NO EXTERNAL ACTION
LANGUAGE C
PARAMETER STYLE DB2SQL
EXTERNAL NAME 'UDFCTR';
The scratchpad length is not specified, so the scratchpad has the default length of
100 bytes, plus 4 bytes for the length field. The user-defined function increments an
integer value and stores it in the scratchpad on each execution.
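A minimal sketch of such a function body in C follows. It assumes the PARAMETER STYLE DB2SQL conventions described in "Passing parameter values to and from a user-defined function" with no input parameters; the parameter names and buffer sizes are illustrative:
#include <string.h>
/* Layout of the scratchpad that DB2 passes: a 4-byte length   */
/* field followed by the scratchpad area (100 bytes by default)*/
struct scratchpad {
  long len;
  char data[100];
};
/* Parameter names and buffer sizes below are illustrative     */
void counter(long *result,            /* result parameter       */
             short *result_ind,       /* result indicator       */
             char  sqlstate[6],       /* SQLSTATE               */
             char  fname[138],        /* qualified function name*/
             char  specname[129],     /* specific function name */
             char  msgtext[71],       /* diagnostic message     */
             struct scratchpad *spad) /* scratchpad             */
{
  long *count = (long *) spad->data;  /* value preserved between*/
                                      /* invocations            */
  *count = *count + 1;                /* scratchpad starts as   */
                                      /* binary zeros           */
  *result = *count;
  *result_ind = 0;                    /* result is not null     */
  strcpy(sqlstate,"00000");           /* no warning or error    */
}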
To access transition tables in a user-defined function, use table locators, which are
pointers to the transition tables. You declare table locators as input parameters in
the CREATE FUNCTION statement using the TABLE LIKE table-name AS
LOCATOR clause. See Chapter 5 of DB2 SQL Reference for more information.
The five basic steps to accessing transition tables in a user-defined function are:
1. Declare input parameters to receive table locators. You must define each
parameter that receives a table locator as an unsigned 4-byte integer.
2. Declare table locators. You can declare table locators in assembler, C, C++,
COBOL, PL/I, and in an SQL procedure compound statement. The syntax for
declaring table locators in C, C++, COBOL, and PL/I is described in “Chapter 9.
Embedding SQL statements in host languages” on page 107. The syntax for
declaring table locators in an SQL procedure is described in Chapter 6 of DB2
SQL Reference.
3. Declare a cursor to access the rows in each transition table.
4. Assign the input parameter values to the table locators.
5. Use the cursors to fetch rows from the transition tables.
The following examples show how a user-defined function that is written in C, C++,
COBOL, or PL/I accesses a transition table for a trigger. The transition table,
NEWEMP, contains modified rows of the employee sample table. The trigger is
defined like this:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES (CHECKEMP(TABLE NEWEMPS));
END;
Assembler: Figure 105 on page 281 shows how an assembler program accesses
rows of transition table NEWEMPS.
************************************************************
* Declare table locator host variable TRIGTBL *
************************************************************
TRIGTBL SQL TYPE IS TABLE LIKE EMP AS LOCATOR
************************************************************
* Declare a cursor to retrieve rows from the transition *
* table *
************************************************************
EXEC SQL DECLARE C1 CURSOR FOR X
SELECT LASTNAME FROM TABLE(:TRIGTBL LIKE EMP) X
WHERE SALARY > 100000
************************************************************
* Copy table locator for trigger transition table *
************************************************************
L R2,TABLOC GET ADDRESS OF LOCATOR
L R2,0(0,R2) GET LOCATOR VALUE
ST R2,TRIGTBL
EXEC SQL OPEN C1
EXEC SQL FETCH C1 INTO :NAME
.
.
.
Figure 105. How an assembler user-defined function accesses a transition table (Part 1 of 2)
NAME DS CL24
.
.
.
DS 0D
PROGSIZE EQU *-PROGAREA DYNAMIC WORKAREA SIZE
PARMAREA DSECT
TABLOC DS A INPUT PARAMETER FOR TABLE LOCATOR
.
.
.
END CHECKEMP
Figure 105. How an assembler user-defined function accesses a transition table (Part 2 of 2)
C or C++: Figure 106 shows how a C or C++ program accesses rows of transition
table NEWEMPS.
/**********************************************************/
/* Declare table locator host variable trig_tbl_id */
/**********************************************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS TABLE LIKE EMP AS LOCATOR trig_tbl_id;
char name[25];
EXEC SQL END DECLARE SECTION;
.
.
.
/**********************************************************/
/* Declare a cursor to retrieve rows from the transition */
/* table */
/**********************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:trig_tbl_id LIKE EMP)
WHERE SALARY > 100000;
/**********************************************************/
/* Fetch a row from transition table */
/**********************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :name;
.
.
.
COBOL: Figure 107 on page 283 shows how a COBOL program accesses rows of
transition table NEWEMPS.
LINKAGE SECTION.
*********************************************************
* Declare table locator host variable TRIG-TBL-ID *
*********************************************************
01 TRIG-TBL-ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR.
.
.
.
*********************************************************
* Declare cursor to retrieve rows from transition table *
*********************************************************
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:TRIG-TBL-ID LIKE EMP)
WHERE SALARY > 100000 END-EXEC.
*********************************************************
* Fetch a row from transition table *
*********************************************************
EXEC SQL OPEN C1 END-EXEC.
EXEC SQL FETCH C1 INTO :NAME END-EXEC.
.
.
.
PROG-END.
GOBACK.
PL/I: Figure 108 on page 284 shows how a PL/I program accesses rows of
transition table NEWEMPS.
/****************************************************/
/* Declare a cursor to retrieve rows from the */
/* transition table */
/****************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:TRIG_TBL_ID LIKE EMP)
WHERE SALARY > 100000;
/****************************************************/
/* Retrieve rows from the transition table */
/****************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :NAME;
.
.
.
END CHECK_EMP;
When the primary program of a user-defined function calls another program, DB2
uses the CURRENT PACKAGESET special register to determine the collection to
search for the called program's package. The primary program can change this
collection ID by executing the statement SET CURRENT PACKAGESET. If the
value of CURRENT PACKAGESET is blank, DB2 uses the method described in
“The order of search” on page 416 to search for the package.
To maximize the number of user-defined functions and stored procedures that can
run concurrently, follow these preparation recommendations:
v Ask the system administrator to set the region size parameter in the startup
procedures for the WLM-established stored procedures address spaces to
REGION=0. This lets an address space obtain the largest possible amount of
storage below the 16-MB line.
The definer can list these options as values of the RUN OPTIONS parameter of
CREATE FUNCTION, or the system administrator can establish these options as
defaults during Language Environment installation.
This should be the first command that you enter from the terminal or include in
your commands file.
This command directs output from your debugging session to the log data set
you defined in step 2. For example, if you defined a log data set with DD name
INSPLOG in the start-up procedure for the stored procedures address space,
the first command should be:
SET LOG ON FILE INSPLOG;
You can combine the Language Environment run-time TEST option with
CEETEST calls. For example, you might want to use TEST to name the
commands data set but use CEETEST calls to control when the Debug Tool
takes control.
Driver applications: You can write a small driver application that calls the
user-defined function as a subprogram and passes the parameter list for the
user-defined function. You can then test and debug the user-defined function as a
normal DB2 application under TSO, using TSO TEST and other commonly used
debugging tools.
SQL INSERT: You can use SQL to insert debugging information into a DB2 table.
This allows other machines in the network (such as a workstation) to easily access
the data in the table using DRDA access.
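For example, the user-defined function might insert a row into a debugging table each time it executes. The table and host variable names in this sketch are hypothetical:
/* MYID.UDF_DEBUG_LOG and debug_msg are hypothetical names */
EXEC SQL INSERT INTO MYID.UDF_DEBUG_LOG
  (LOG_TIME, UDF_NAME, MESSAGE)
  VALUES (CURRENT TIMESTAMP, 'CALC_BONUS', :debug_msg);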
DB2 discards the debugging information if the application executes the ROLLBACK
statement. To prevent the loss of the debugging data, code the calling application
so that it retrieves the diagnostic data before executing the ROLLBACK statement.
| See “Defining a user-defined function” on page 244 and Chapter 5 of DB2 SQL
| Reference for a description of the parameters that you can specify in the CREATE
| FUNCTION statement for an SQL scalar function.
| To prepare an SQL scalar function for execution, you execute the CREATE
| FUNCTION statement, either statically or dynamically.
See the following sections for details you should know before you invoke a
user-defined function:
v “Syntax for user-defined function invocation”
v “Ensuring that DB2 executes the intended user-defined function” on page 290
v “Casting of user-defined function arguments” on page 296
v “What happens when a user-defined function abnormally terminates” on page 297
| Figure 109. Syntax for user-defined scalar function invocation. The diagram shows
| function-name ( ), function-name ( expression, ... ) with the expression list optionally
| preceded by ALL or DISTINCT, or function-name ( TABLE transition-table-name ).
Use the syntax shown in Figure 110 on page 290 when you invoke a table function:
correlation-clause: an optional AS keyword, followed by correlation-name, optionally followed
by a parenthesized, comma-separated list of column-name values.
See Chapter 2 of DB2 SQL Reference for more information about the syntax of
user-defined function invocation.
| If two or more candidates fit the unqualified function invocation equally well
| because the function invocation contains parameter markers, DB2 issues an
| error.
The remainder of this section discusses details of the function resolution process
and gives suggestions on how you can ensure that DB2 picks the right function.
To determine whether a data type is promotable to another data type, see Table 36.
The first column lists data types in function invocations. The second column lists
data types to which the types in the first column can be promoted, in order from
best fit to worst fit. For example, suppose that in this statement, the data type of A
is SMALLINT:
SELECT USER1.ADDTWO(A) FROM TABLEA;
If the data types of all parameters in a function instance are the same as those in
the function invocation, that function instance is a best fit. If no exact match exists,
DB2 compares data types in the parameter lists from left to right, using this method:
1. DB2 compares the data types of the first parameter in the function invocation to
the data type of the first parameter in each function instance.
| If the first parameter in the invocation is an untyped parameter marker, DB2
| does not do the comparison.
2. For the first parameter, if one function instance has a data type that fits the
function invocation better than the data types in the other instances, that
function is a best fit. Table 36 on page 292 shows the possible fits for each data
type, in best-to-worst order.
| 3. If the data types of the first parameter are the same for all function instances, or
| if the first parameter in the function invocation is an untyped parameter marker,
| DB2 repeats this process for the next parameter. DB2 continues this process for
| each parameter until it finds a best fit.
Candidate 2:
CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
RETURNS DECIMAL(9,2)
EXTERNAL NAME 'FUNC2'
PARAMETER STYLE DB2SQL
LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function
invocation to the data types of the first parameters in the candidate functions.
Because the first parameter in the invocation has data type VARCHAR, and both
candidate functions have a first parameter of type VARCHAR, DB2 cannot choose
between the candidates on the basis of the first parameter, so it compares the
second parameters.
The data type of the second parameter in the invocation is SMALLINT. INTEGER,
which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which
is the data type of candidate 2. Therefore, candidate 1 is DB2's choice for
execution.
Before you use EXPLAIN to obtain information about function resolution, create
DSN_FUNCTION_TABLE. The table definition looks like this:
CREATE TABLE DSN_FUNCTION_TABLE
(QUERYNO INTEGER NOT NULL WITH DEFAULT,
QBLOCKNO INTEGER NOT NULL WITH DEFAULT,
APPLNAME CHAR(8) NOT NULL WITH DEFAULT,
PROGNAME CHAR(8) NOT NULL WITH DEFAULT,
COLLID CHAR(18) NOT NULL WITH DEFAULT,
GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT,
EXPLAIN_TIME TIMESTAMP NOT NULL WITH DEFAULT,
SCHEMA_NAME CHAR(8) NOT NULL WITH DEFAULT,
FUNCTION_NAME CHAR(18) NOT NULL WITH DEFAULT,
SPEC_FUNC_NAME CHAR(18) NOT NULL WITH DEFAULT,
FUNCTION_TYPE CHAR(2) NOT NULL WITH DEFAULT,
VIEW_CREATOR CHAR(8) NOT NULL WITH DEFAULT,
VIEW_NAME CHAR(18) NOT NULL WITH DEFAULT,
PATH VARCHAR(254) NOT NULL WITH DEFAULT,
FUNCTION_TEXT VARCHAR(254) NOT NULL WITH DEFAULT);
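After the table exists, you can run EXPLAIN on a statement that invokes the function and then query the rows that EXPLAIN inserts, as in this sketch. The statement and QUERYNO value are illustrative; CALC_BONUS is the sample function described earlier:
-- The query and QUERYNO value are illustrative
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT EMPNO, CALC_BONUS(SALARY, COMM)
  FROM EMP;

SELECT QUERYNO, SCHEMA_NAME, FUNCTION_NAME, SPEC_FUNC_NAME, FUNCTION_TEXT
  FROM DSN_FUNCTION_TABLE
  WHERE QUERYNO = 101;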
When you invoke a user-defined function that is sourced on another function, DB2
casts your parameters to the data types and lengths of the sourced function.
The following example demonstrates what happens when the parameter definitions
of a sourced function differ from those of the function on which it is sourced.
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must
first assign this value to the data type of the input parameter in the definition of
TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes
001234.56. Next, DB2 casts the parameter value to a source function parameter,
which is DECIMAL(6,0). The parameter value then becomes 001234. (When you
cast a value, that value is truncated, rather than rounded.)
Although trigger activations count in the levels of SQL statement nesting, the
previous restrictions on SQL statements do not apply to SQL statements that are
executed in the trigger body. For example, suppose that trigger TR1 is defined on
table T1:
CREATE TRIGGER TR1
AFTER INSERT ON T1
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
UPDATE T1 SET C1=1;
END
Now suppose that you execute this SQL statement at level 1 of nesting:
Although the UPDATE statement in the trigger body is at level 2 of nesting and
modifies the same table that the triggering statement updates, DB2 can execute the
INSERT statement successfully.
The access path that DB2 chooses for a predicate determines whether a
user-defined function in that predicate is executed. To ensure that DB2 executes the
external action for each row of the result set, put the user-defined function
invocation in the SELECT list.
The results can differ even more, depending on the order in which DB2 retrieves
the rows from the table. Suppose that an ascending index is defined on column C2.
Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that row
1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in the
select list is again 1, so the result of the query in this case is:
COUNTER() C1 C2
--------- -- --
1 1 b
| A similar situation occurs with scrollable cursors and nondeterministic functions. The
| result of a nondeterministic user-defined function can be different each time you
| execute the user-defined function. If the select list of a scrollable cursor contains a
| nondeterministic user-defined function, and you use that cursor to retrieve the same
| row multiple times, the results can differ each time you retrieve the row.
You must define a column of type ROWID in the table because tables with any type
of LOB columns require a ROWID column, and internally, the VIDEO_CATALOG
table contains two LOB columns. For more information on LOB data, see
“Chapter 13. Programming for large objects (LOBs)” on page 229.
After you define distinct types and columns of those types, you can use those data
types in the same way you use built-in types. You can use the data types in
assignments, comparisons, function invocations, and stored procedure calls.
However, when you assign one column value to another or compare two column
values, those values must be of the same distinct type. For example, you must
assign a column value of type VIDEO to a column of type VIDEO, and you can
compare a column value of type AUDIO only to a column of type AUDIO. When you
assign a host variable value to a column with a distinct type, you can use any host
data type that is compatible with the source data type of the distinct type. For
example, to receive an AUDIO or VIDEO value, you can define a host variable like
this:
SQL TYPE IS BLOB (1M) HVAV;
For example, if you have defined a user-defined function to convert U.S. dollars to
euro currency, you do not want anyone to use this same user-defined function to
convert Japanese Yen to euros because the U.S. dollars to euros function returns
the wrong amount. Suppose you define three distinct types:
CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2) WITH COMPARISONS;
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2) WITH COMPARISONS;
CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2) WITH COMPARISONS;
DB2 does not let you compare data of a distinct type directly to data of its source
type. However, you can compare a distinct type to its source type by using a cast
function.
For example, suppose you want to know which products sold more than US
$100,000.00 in the US in the month of July in 1992 (7/92). Because you cannot
compare data of type US_DOLLAR with instances of data of the source type of
US_DOLLAR (DECIMAL) directly, you must use a cast function to cast data from
DECIMAL to US_DOLLAR or from US_DOLLAR to DECIMAL. Whenever you
create a distinct type, DB2 creates two cast functions, one to cast from the source
type to the distinct type and the other to cast from the distinct type to the source
type. For distinct type US_DOLLAR, DB2 creates a cast function called DECIMAL
and a cast function called US_DOLLAR. When you compare an object of type
US_DOLLAR to an object of type DECIMAL, you can use one of those cast
functions to make the data types identical for the comparison. Suppose table
US_SALES is defined like this:
CREATE TABLE US_SALES
(PRODUCT_ITEM INTEGER,
MONTH INTEGER CHECK (MONTH BETWEEN 1 AND 12),
YEAR INTEGER CHECK (YEAR > 1985),
TOTAL US_DOLLAR);
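For example, a query of the following form casts the DECIMAL constant to US_DOLLAR so that both operands of the comparison have the same distinct type; the query itself is illustrative, and only the cast is essential:
-- Illustrative query; the cast to US_DOLLAR is the essential part
SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > US_DOLLAR(100000.00)
    AND MONTH = 7
    AND YEAR = 1992;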
The casting satisfies the requirement that the compared data types are identical.
You cannot use host variables in statements that you prepare for dynamic
execution. As explained in “Using parameter markers” on page 508, you can
substitute parameter markers for host variables when you prepare a statement, and
then use host variables when you execute the statement.
If you use a parameter marker in a predicate of a query, and the column to which
you compare the value represented by the parameter marker is of a distinct type,
you must cast the parameter marker to the distinct type, or cast the column to its
source type.
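For example, assuming that the distinct type CNUM was created with a source type of INTEGER, you could cast the column to its source type:
-- Assumes CNUM was created with source type INTEGER
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
  WHERE CAST(CUST_NUM AS INTEGER) = ?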
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
WHERE CUST_NUM = CAST (? AS CNUM)
If you need to assign a value of one distinct type to a column of another distinct
type, a function must exist that converts the value from one type to another.
Because DB2 provides cast functions only between distinct types and their source
types, you must write the function to convert from one distinct type to another.
You need to insert values from the TOTAL column in JAPAN_SALES into the
TOTAL column of JAPAN_SALES_98. Because INSERT statements follow
assignment rules, DB2 does not let you insert the values directly from one column
to the other because the columns are of different distinct types. Suppose that a
user-defined function called US_DOLLAR has been written that accepts values of
type JAPANESE_YEN as input and returns values of type US_DOLLAR. You can
then use this function to insert values into the JAPAN_SALES_98 table:
INSERT INTO JAPAN_SALES_98
SELECT PRODUCT_ITEM, US_DOLLAR(TOTAL)
FROM JAPAN_SALES
WHERE YEAR = 1998;
You can assign a column value of a distinct type to a host variable if you can assign
a column value of the distinct type's source type to the host variable. In the
following example, you can assign SIZECOL1 and SIZECOL2, which have distinct
type SIZE, to host variables of type double and short because the source type of
SIZE, which is INTEGER, can be assigned to host variables of type double or short.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE
. TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
In this example, values of host variable hv2 can be assigned to columns SIZECOL1
and SIZECOL2, because C data type short is equivalent to DB2 data type
SMALLINT, and SMALLINT is promotable to data type INTEGER. However, values
of hv1 cannot be assigned to SIZECOL1 and SIZECOL2, because C data type
double, which is equivalent to DB2 data type DOUBLE, is not promotable to data
type INTEGER.
Because the result type of both US_DOLLAR functions is US_DOLLAR, you have
satisfied the requirement that the distinct types of the combined columns are the
same.
The HOUR function takes only the TIME or TIMESTAMP data type as an argument,
so you need a sourced function that is based on the HOUR function that accepts
the FLIGHT_TIME data type. You might declare a function like this:
CREATE FUNCTION HOUR(FLIGHT_TIME)
RETURNS INTEGER
SOURCE SYSIBM.HOUR(TIME);
Example: Using an infix operator with distinct type arguments: Suppose you
want to add two values of type US_DOLLAR. Before you can do this, you must
define a version of the + function that accepts values of type US_DOLLAR as
operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
RETURNS US_DOLLAR
SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source
function must be the version of + with arguments of type DECIMAL(9,2).
This means that EURO_TO_US accepts only the EURO type as input. Therefore, if
you want to call EURO_TO_US with a constant or host variable argument, you must
cast that argument to distinct type EURO:
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(:H1));
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(10000));
Suppose you keep electronic mail documents that are sent to your company in a
DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you
define it as a distinct type so that you can control the types of operations that are
performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
You have also defined and written user-defined functions to search for and return
the following information about an electronic mail document:
v Subject
v Sender
v Date sent
v Message content
v Indicator of whether the document contains a user-specified string
The user-defined function definitions look like this:
CREATE FUNCTION SUBJECT(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SUBJECT'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION SENDER(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SENDER'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION SENDING_DATE(E_MAIL)
RETURNS DATE
EXTERNAL NAME 'SENDDATE'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION CONTENTS(E_MAIL)
RETURNS CLOB(1M)
EXTERNAL NAME 'CONTENTS'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION CONTAINS(E_MAIL, VARCHAR (200))
RETURNS INTEGER
EXTERNAL NAME 'CONTAINS'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
Because the table contains a column with a source data type of CLOB, the table
requires a ROWID column and an associated LOB table space, auxiliary table, and
index on the auxiliary table. Use statements like this to define the LOB table space,
the auxiliary table, and the index:
CREATE LOB TABLESPACE DOCTSLOB
LOG YES
GBPCACHE SYSTEM;
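The auxiliary table and auxiliary index statements might look like the following sketch, assuming that the document table is named DOCUMENTS with LOB column A_DOCUMENT, as in the queries later in this section; the auxiliary table and index names are illustrative:
-- Auxiliary table and index names are illustrative
CREATE AUX TABLE EMAIL_AUX_TABLE
  IN DOCTSLOB
  STORES DOCUMENTS
  COLUMN A_DOCUMENT;

CREATE UNIQUE INDEX A_IX_DOC
  ON EMAIL_AUX_TABLE;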
To populate the document table, you write code that executes an INSERT statement
to put the first part of a document in the table, and then executes multiple UPDATE
statements to concatenate the remaining parts of the document. For example:
EXEC SQL BEGIN DECLARE SECTION;
char hv_current_time[26];
SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;
/* Determine the current time and put this value */
/* into host variable hv_current_time. */
/* Read up to 1 MB of document data from a file */
/* into host variable hv_doc. */
.
.
.
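A sketch of this INSERT-then-UPDATE pattern follows. It assumes that the document table is DOCUMENTS with columns LAST_UPDATE_TIME and A_DOCUMENT, and that a concatenation operator has been defined for the E_MAIL distinct type:
/* Insert the first piece of the document                        */
EXEC SQL INSERT INTO DOCUMENTS (LAST_UPDATE_TIME, A_DOCUMENT)
  VALUES (:hv_current_time, E_MAIL(:hv_doc));
/* Read the next piece of the document into hv_doc, then append  */
/* it to the value already stored in the table. This assumes a   */
/* concatenation operator exists for the E_MAIL distinct type.   */
EXEC SQL UPDATE DOCUMENTS
  SET A_DOCUMENT = A_DOCUMENT || E_MAIL(:hv_doc)
  WHERE LAST_UPDATE_TIME = :hv_current_time;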
Now that the data is in the table, you can execute queries to learn more about the
documents. For example, you can execute this query to determine which
documents contain the word 'performance':
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT),
SUBJECT(A_DOCUMENT)
FROM DOCUMENTS
WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
Because the electronic mail documents can be very large, you might want to use
LOB locators to manipulate the document data instead of fetching all of a document
into a host variable. You can use a LOB locator on any distinct type that is defined
on one of the LOB types. The following example shows how you can cast a LOB
locator as a distinct type, and then use the result in a user-defined function that
takes a distinct type as an argument:
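A minimal sketch of the technique, using the DOCUMENTS table and the CONTAINS function from this section, follows; the host variable names are illustrative:
/* Host variable names are illustrative */
EXEC SQL BEGIN DECLARE SECTION;
  long hv_contains_word;
  SQL TYPE IS CLOB_LOCATOR hv_email_locator;
EXEC SQL END DECLARE SECTION;
/* Retrieve a locator for the document rather than the data      */
EXEC SQL SELECT A_DOCUMENT INTO :hv_email_locator
  FROM DOCUMENTS
  WHERE LAST_UPDATE_TIME = :hv_current_time;
/* Cast the locator to the E_MAIL distinct type before passing   */
/* it to a user-defined function that takes an E_MAIL argument   */
EXEC SQL SET :hv_contains_word =
  CONTAINS(CAST(:hv_email_locator AS E_MAIL), 'performance');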
Figure 112 illustrates the program preparation process when you use the DB2
precompiler. Figure 113 on page 316 illustrates the program preparation process
when you use an SQL statement coprocessor. “Chapter 20. Preparing an
application program to run” on page 397 supplies specific details about
accomplishing these steps.
After you have processed SQL statements in your source program, you create a
load module, possibly one or more packages, and an application plan. Creating a
load module involves compiling and link-editing the modified source code that is
produced by the precompiler. Creating a package or an application plan, a process
unique to DB2, involves binding one or more DBRMs.
A few options, however, can affect the way that you write your program. For
example, you need to know if you are using NOFOR or STDSQL(YES) before you
begin coding.
| Before you begin writing your program, review the list of options in Table 48 on
| page 403. You can specify any of those options whether you use the DB2
| precompiler or an SQL statement coprocessor. However, the SQL statement
| coprocessor might ignore certain options because there are compiler options that
| provide the same information.
Binding or rebinding a package or plan in use: Packages and plans are locked
when you bind or run them. Packages that run under a plan are not locked until the
plan uses them. If you run a plan and some packages in the package list never run,
those packages are never locked.
You cannot bind or rebind a package or a plan while it is running. However, you can
bind a different version of a package that is running.
Options for binding and rebinding: Several of the options of BIND PACKAGE
and BIND PLAN can affect your program design. For example, you can use a bind
option to ensure that a package or plan can run only from a particular CICS
connection or a particular IMS region—you do not have to enforce this in your code.
Several other options are discussed at length in later chapters, particularly the ones
that affect your program’s use of locks, such as the option ISOLATION. Before you
finish reading this chapter, you might want to review those options in Chapter 2 of
DB2 Command Reference.
Input to binding the plan can include DBRMs only, a package list only, or a
combination of the two. When choosing one of those alternatives for your
application, consider the impact of rebinding and see “Planning for changes to your
application” on page 319.
Binding all DBRMs to a plan is suitable for small applications that are unlikely to
change or that require all resources to be acquired when the plan is allocated rather
than when your program first uses them.
Advantages of packages
You must decide how to use packages based on your application design and your
operational objectives. Keep in mind the following:
Ease of maintenance: When you use packages, you do not need to bind the entire
plan again when you change one SQL statement. You need to bind only the
package associated with the changed SQL statement.
Flexibility in using bind options: The options of BIND PLAN apply to all DBRMs
bound directly to the plan. The options of BIND PACKAGE apply only to the single
DBRM bound to that package. The package options need not all be the same as
the plan options, and they need not be the same as the options for other packages
used by the same plan.
Flexibility in using name qualifiers: You can use a bind option to name a qualifier
for the unqualified object names in SQL statements in a plan or package. By using
packages, you can use different qualifiers for SQL statements in different parts of
your application. By rebinding, you can redirect your SQL statements, for example,
from a test table to a production table.
With packages, you probably do not need dynamic plan selection and its
accompanying exit routine. A package listed within a plan is not accessed until
it is executed. However, it is possible to use dynamic plan selection and
packages together. Doing so can reduce the number of plans in an
application, and hence reduce the effort needed to maintain the dynamic plan exit routine.
See “Using packages with dynamic plan selection” on page 423 for information
on using packages with dynamic plan selection.
A change to your program probably invalidates one or more of your packages and
perhaps your entire plan. For some changes, you must bind a new object; for
others, rebinding is sufficient.
v To bind a new plan or package, other than a trigger package, use the
subcommand BIND PLAN or BIND PACKAGE with the option
ACTION(REPLACE).
To bind a new trigger package, recreate the trigger associated with the trigger
package.
v To rebind an existing plan or package, other than a trigger package, use the
REBIND subcommand.
To rebind a trigger package, use the REBIND TRIGGER PACKAGE subcommand.
Table 37 tells which action particular types of change require. For more information
on trigger packages, see “Working with trigger packages” on page 322.
If you want to change the bind options in effect when the plan or package runs,
review the descriptions of those options in Chapter 2 of DB2 Command Reference.
Not all options of BIND are also available on REBIND.
A plan or package can also become invalid for reasons that do not depend on
operations in your program: for example, if an index is dropped that is used as an
access path by one of your queries. In those cases, DB2 might rebind the plan or
package automatically, the next time it is used. (For details about that operation,
see “Automatic rebinding” on page 322.)
Table 37. Changes requiring BIND or REBIND
Change made:                            Minimum action necessary:
Drop a table, index, or other object,   If a table with a trigger is dropped, recreate the trigger
and recreate the object                 if you recreate the table. Otherwise, no change is
                                        required; automatic rebind is attempted at the next run.
Revoke an authorization to use an       None required; automatic rebind is attempted at the
object                                  next run. Automatic rebind fails if authorization is still
                                        not available; then you must issue REBIND for the
                                        package or plan.
Run RUNSTATS to update catalog          Issue REBIND for the package or plan to possibly
statistics                              change the access path chosen.
Dropping objects
If you drop an object that a package depends on, the following occurs:
v If the package is not appended to any running plan, the package becomes
invalid.
v If the package is appended to a running plan, and the drop occurs outside of that
plan, the object is not dropped, and the package does not become invalid.
v If the package is appended to a running plan, and the drop occurs within that
plan, the package becomes invalid.
In all cases, the plan does not become invalid unless it has a DBRM referencing
the dropped object. If the package or plan becomes invalid, automatic rebind occurs
the next time the package or plan is allocated.
Rebinding a package
Table 38 clarifies which packages are bound, depending on how you specify
collection-id (coll-id) package-id (pkg-id), and version-id (ver-id) on the REBIND
PACKAGE subcommand. For syntax and descriptions of this subcommand, see
Chapter 2 of DB2 Command Reference.
REBIND PACKAGE does not apply to packages for which you do not have the
BIND privilege. An asterisk (*) used as an identifier for collections, packages, or
versions does not apply to packages at remote sites.
Table 38. Behavior of REBIND PACKAGE specification. “All” means all collections, packages,
or versions at the local DB2 server for which the authorization ID that issues the command
has the BIND privilege.
INPUT                    Collections affected   Packages affected   Versions affected
*                        all                    all                 all
*.*.(*)                  all                    all                 all
*.*                      all                    all                 all
*.*.(ver-id)             all                    all                 ver-id
*.*.()                   all                    all                 empty string
coll-id.*                coll-id                all                 all
coll-id.*.(*)            coll-id                all                 all
coll-id.*.(ver-id)       coll-id                all                 ver-id
coll-id.*.()             coll-id                all                 empty string
coll-id.pkg-id.(*)       coll-id                pkg-id              all
The following example shows the options for rebinding a package at the remote
location, SNTERSA. The collection is GROUP1, the package ID is PROGA, and the
version ID is V1. The connection types shown in the REBIND subcommand replace
connection types specified on the original BIND subcommand. For information on
the REBIND subcommand options, see DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not
for packages at remote sites. Any of the following commands rebinds all versions of
all packages in all collections, at the local DB2 system, for which you have the
BIND privilege.
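For example, following the wildcard conventions shown in Table 38:
REBIND PACKAGE(*)
REBIND PACKAGE(*.*)
REBIND PACKAGE(*.*.(*))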
Either of the following commands rebinds all versions of all packages in the local
collection LEDGER for which you have the BIND privilege.
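Following the same conventions, for example:
REBIND PACKAGE(LEDGER.*)
REBIND PACKAGE(LEDGER.*.(*))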
Either of the following commands rebinds the empty string version of the package
DEBIT in all collections, at the local DB2 system, for which you have the BIND
privilege.
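One such command, following the conventions in Table 38, is:
REBIND PACKAGE(*.DEBIT.())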
Rebinding a plan
Using the PKLIST keyword replaces any previously specified package list. Omitting
the PKLIST keyword allows the use of the previous package list for rebinding. Using
the NOPKLIST keyword deletes any package list specified when the plan was
previously bound.
The following example rebinds PLANA and changes the package list.
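A sketch of such a subcommand, with a hypothetical collection name in the new package list, is:
REBIND PLAN(PLANA) PKLIST(GROUP1.*)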
For a description of the technique and several examples of its use, see
“Appendix E. REBIND subcommands for lists of plans or packages” on page 915.
As with any other package, DB2 marks a trigger package invalid when you drop a
table, index, or view on which the trigger package depends. DB2 executes an
automatic rebind the next time the trigger activates. However, if the automatic
rebind fails, DB2 does not mark the trigger package inoperative.
Unlike other packages, a trigger package is freed if you drop the table on which the
trigger is defined, so you can recreate the trigger package only by recreating the
table and the trigger.
You can use the subcommand REBIND TRIGGER PACKAGE to rebind a trigger
package that DB2 has marked inoperative. You can also use REBIND TRIGGER
PACKAGE to change the option values with which DB2 originally bound the trigger
package. The default values for the options that you can change are:
v CURRENTDATA(YES)
v EXPLAIN(YES)
v FLAG(I)
v ISOLATION(RR)
v IMMEDWRITE(NO)
v RELEASE(COMMIT)
When you run REBIND TRIGGER PACKAGE, you can change only the values of
options CURRENTDATA, EXPLAIN, FLAG, IMMEDWRITE, ISOLATION, and
RELEASE.
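For example, a subcommand of roughly this form (the collection and trigger package names are hypothetical) changes the isolation level of a trigger package:
REBIND TRIGGER PACKAGE(MYSCHEMA.TRIG1) ISOLATION(CS)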
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package when
the attributes of the data on which the plan or package depends change, or if the
environment in which the package executes changes. Whether the automatic rebind
occurs depends on the value of the field AUTO BIND on installation panel
DSNTIPO. The options used for an automatic rebind are the options used during
the most recent bind process.
In the following cases, DB2 might automatically rebind a plan or package that has
not been marked as invalid:
v A plan or package is bound in a different release of DB2 from the release in
which it was first used.
v A plan or package has a location dependency and runs at a location other than
the one at which it was bound. This can happen when members of a data
sharing group are defined with location names, and a package runs on a different
member from the one on which it was bound.
Whether EXPLAIN runs during automatic rebind depends on the value of the field
EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you
specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except
“PLAN_TABLE not found.”
The SQLCA is not available during automatic rebind. Therefore, if you encounter
lock contention during an automatic rebind, DSNT501I messages cannot
accompany any DSNT376I messages that you receive. To see the matching
DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND
PACKAGE.
After the basic recommendations, the chapter tells what you can do about a major
technique that DB2 uses to control concurrency.
v Transaction locks mainly control access by SQL statements. Those locks are
the ones over which you have the most control.
– “Aspects of transaction locks” on page 333 describes the various types of
transaction locks that DB2 uses and how they interact.
– “Lock tuning” on page 339 describes what you can change to control locking.
Your choices include:
- “Bind options” on page 339
- “Isolation overriding with SQL statements” on page 351
- “The statement LOCK TABLE” on page 352
To prevent those situations from occurring unless they are specifically allowed, DB2
might use locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process
in a way that affects how other processes can access the same resource. The
process associated with the resource is said to “hold” or “own” the lock. DB2 uses
locks to ensure that no process accesses data that has been changed, but not yet
committed, by another process.
What do you do about locks? To preserve data integrity, your application process
acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. You can make better use of
your resources and improve concurrency by understanding the effects of those
parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued.
Requests for lock promotion, and requests for a lock by an application process that
already holds a lock on the same object, precede requests for locks by new
applications. Within those groups, the request order is “first in, first out”.
Example: Using an application for inventory control, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.
Timeout
Definition: An application process is said to time out when it is terminated because
it has been suspended for longer than a preset interval.
IMS
If you are using IMS, and a timeout occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
COMMIT and ROLLBACK operations do not time out. The command STOP
DATABASE, however, may time out and send messages to the console, but it will
retry up to 15 times.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold
locks on resources that the others need and without which they cannot proceed.
Example: Figure 114 on page 328 illustrates a deadlock between two transactions.
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG
accesses table M, and acquires an exclusive lock for page B, which contains
record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A,
which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock
on page B of table M. The job is suspended, because job PROJNCHG is
holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the
lock on page A of table N. The job is suspended, because job EMPLJCHG is
holding an exclusive lock on page B. The situation is a deadlock.
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll
back the current unit of work for one of the processes or request a process to
terminate. That frees the locks and allows the remaining processes to continue. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason
code 00C90088 is returned in the SQLERRD(3) field of the SQLCA.
If you are using IMS, and a deadlock occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
CICS
If you are using CICS and a deadlock occurs, the CICS attachment facility
decides whether or not to roll back one of the application processes, based on
the value of the ROLBE or ROLBI parameter. If your application process is
chosen for rollback, it receives one of two SQLCODEs in the SQLCA:
-911 A SYNCPOINT command with the ROLLBACK option was
issued on behalf of your application process. All updates
(CICS commands and DL/I calls, as well as SQL statements)
that occurred during the current unit of work have been
undone. (SQLSTATE '40001')
-913 A SYNCPOINT command with the ROLLBACK option was not
issued. DB2 rolls back only the incomplete SQL statement that
encountered the deadlock or timed out. CICS does not roll
back any resources. Your application process should either
issue a SYNCPOINT command with the ROLLBACK option
itself or terminate. (SQLSTATE '57033')
Consider using the DSNTIAC subroutine to check the SQLCODE and display
the SQLCA. Your application must take appropriate actions before resuming.
Keep unlike things apart: Give users different authorization IDs for work with
different databases; for example, one ID for work with a shared database and
another for work with a private database. This effectively adds to the number of
possible (but not concurrent) application processes while minimizing the number of
databases each application process can access.
Plan for batch inserts: If your application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space map
means the added overhead of page P-lock negotiation. For these types of
applications, consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or implicit
clustering index) when assigning space for the SQL INSERT statement. For more
information about using this option in data sharing, see Chapter 6 of DB2 Data
Sharing: Planning and Administration. For the syntax, see Chapter 5 of DB2 SQL
Reference.
Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the default
for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2 usually
chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table spaces. For
| LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX SYSTEM. You
| should use LOCKSIZE TABLESPACE or LOCKSIZE TABLE only for read-only table
| spaces or tables, or when concurrent access to the object is not needed. Before
you choose LOCKSIZE ROW, you should estimate whether there will be an
increase in overhead for locking and weigh that against the increase in concurrency.
Examine small tables: For small tables with high concurrency requirements,
estimate the number of pages in the data and in the index. If the index entries are
short or they have many duplicates, then the entire index can be one root page and
a few leaf pages. In this case, spread out your data to improve concurrency, or
consider it a reason to use row locks.
Partition the data: Online queries typically make few data changes, but they occur
often. Batch jobs are just the opposite; they run for a long time and change many
rows, but occur infrequently. The two do not run well together. You might be able to
separate online applications from batch, or two batch jobs from each other. To
separate online and batch applications, provide separate partitions. Partitioning can
also effectively separate batch jobs from each other.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, you can specify the maximum number of rows that can be
on a page. For example, if you use MAXROWS 1, each row occupies a whole
page, and you confine a page lock to a single row. Consider this option if you have
a reason to avoid using row locking, such as in a data sharing environment where
row locking overhead can be excessive.
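A sketch, with hypothetical database and table space names:
ALTER TABLESPACE MYDB.MYTS MAXROWS 1;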
Taking commit points frequently in a long running unit of recovery (UR) has the
following benefits:
v Reduces lock contention
v Improves the effectiveness of lock avoidance, especially in a data sharing
environment
v Reduces the elapsed time for DB2 system restart following a system failure
v Reduces the elapsed time for a unit of recovery to rollback following an
application failure or an explicit rollback request by the application
v Provides more opportunity for utilities, such as online REORG, to break in
| Consider using the UR CHECK FREQ field or the UR LOG WRITE CHECK field of
| installation panel DSNTIPN to help you identify those applications that are not
| committing frequently. UR CHECK FREQ, which identifies when too many
| checkpoints have occurred without a UR issuing a commit, is helpful in monitoring
| overall system activity. UR LOG WRITE CHECK enables you to detect applications
| that might write too many log records between commit points, potentially creating a
| lengthy recovery situation for critical tables.
Even though an application might conform to the commit frequency standards of the
installation under normal operational conditions, variation can occur based on
system workload fluctuations. For example, a low-priority application might issue a
commit frequently on a system that is lightly loaded. However, under a heavy
system load, the use of the CPU by the application may be pre-empted, and, as a
result, the application may violate the rule set by the UR CHECK FREQ parameter.
For this reason, add logic to your application to commit based on time elapsed
since last commit, and not solely based on the amount of SQL processing
performed. In addition, take frequent commit points in a long running unit of work
that is read-only to reduce lock contention and to provide opportunities for utilities,
such as online REORG, to access the data.
Close cursors: If you define a cursor using the WITH HOLD option, the locks it
needs can be held past a commit point. Use the CLOSE CURSOR statement as
soon as possible in your program to cause those locks to be released and the
resources they hold to be freed at the first commit point that follows the CLOSE
CURSOR statement. Whether page or row locks are held for WITH HOLD cursors
is controlled by the RELEASE LOCKS parameter on panel DSNTIP4.
Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 will
acquire table and table space locks when the objects are first used and not when
the plan is allocated, is the best choice for concurrency. Packages are always
| bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better
| protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that
| need gross locks instead of intent locks or that run with other applications that may
| request gross locks instead of intent locks. Acquiring the locks at plan allocation
| also prevents any one transaction in the application from incurring the cost of
| acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you
| might want to bind all DBRMs directly to the plan.
For information on how to make an agent part of a global transaction for RRSAF
applications, see “Chapter 30. Programming for the Recoverable Resource Manager
Services attachment facility (RRSAF)” on page 767.
Knowing the aspects helps you understand why a process suspends or times out or
why two processes deadlock.
As Figure 115 on page 334 suggests, row locks and page locks occupy an equal
place in the hierarchy of lock sizes.
Figure 115. The hierarchy of lock sizes: a simple or partitioned table space lock, a segmented
table space lock (with table locks beneath it), and a LOB table space lock, each above the
row locks, page locks, or LOB locks within it.
No matter how LOCKPART is defined, utility jobs can control separate partitions
of a table space or index space and can run concurrently with operations on
other partitions.
v A simple table space can contain more than one table. A lock on the table
space locks all the data in every table. A single page of the table space can
contain rows from every table. A lock on a page locks every row in the page, no
matter what tables the data belongs to. Thus, a lock needed to access data from
one table can make data from other tables temporarily unavailable. That effect
can be partly undone by using row locks instead of page locks. But that step
does not relieve the sweeping effect of a table space lock.
v In a segmented table space, rows from different tables are contained in different
pages. Locking a page does not lock data from more than one table. Also, DB2
can acquire a table lock, which locks only the data from one specific table.
Because a single row, of course, contains data from only one table, the effect of
a row lock is the same as for a simple or partitioned table space: it locks one row
of data from one table.
v In a LOB table space, pages are not locked. Because there is no concept of a
row in a LOB table space, rows are not locked. Instead, LOBs are locked. See
“LOB locks” on page 355 for more information.
Effects
For maximum concurrency, locks on a small amount of data held for a short
duration are better than locks on a large amount of data held for a long duration.
However, acquiring a lock requires processor time, and holding a lock requires
storage; thus, acquiring and holding one table space lock is more economical than
acquiring and holding many page locks. Consider that trade-off to meet your
performance and concurrency objectives.
Duration of partition, table, and table space locks: Partition, table, and table
space locks can be acquired when a plan is first allocated, or you can delay
acquiring them until the resource they lock is first used. They can be released at
the next commit point or be held until the program terminates.
Duration of page and row locks: If a page or row is locked, DB2 acquires the lock
only when it is needed. When the lock is released depends on many factors, but it
is rarely held beyond the next commit point.
For information about controlling the duration of locks, see “Bind options” on
page 339.
The possible modes for page and row locks are S, U, and X; the modes for partition, table,
and table space locks are IS, IX, S, U, SIX, and X. See “LOB locks” on page 355 for more
information about modes for LOB locks and locks on LOB table spaces.
When a page or row is locked, the table, partition, or table space containing it is
also locked. In that case, the table, partition, or table space lock has one of the
intent modes: IS, IX, or SIX. The modes S, U, and X of table, partition, and table
space locks are sometimes called gross modes. In the context of reading, SIX is a
gross mode lock because you don’t get page or row locks; in this sense, it is like an
S lock.
Example: An SQL statement locates John Smith in a table of customer data and
changes his address. The statement locks the entire table space in mode IX and
the specific row that it changes in mode X.
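A hypothetical statement of that kind (the table and column names are illustrative, not part of the sample database) might be:
EXEC SQL
  UPDATE CUSTTBL
    SET ADDRESS = :NEWADDR
    WHERE CUSTNAME = 'JOHN SMITH';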
Definition: Locks of some modes do not shut out all other users. Assume that
application process A holds a lock on a table space that process B also wants to
access. DB2 requests, on behalf of B, a lock of some particular mode. If the mode
of A’s lock permits B’s request, the two locks (or modes) are said to be compatible.
Compatible lock modes: Compatibility for page and row locks is easy to define.
Table 39 shows whether page locks of any two modes, or row locks of any two
modes, are compatible (Yes) or not (No). No question of compatibility of a page lock
with a row lock can arise, because a table space cannot use both page and row
locks.
Table 39. Compatibility of page lock and row lock modes
Lock Mode S U X
S Yes Yes No
U Yes No No
X No No No
Compatibility for table space locks is slightly more complex. Table 40 shows
whether or not table space locks of any two modes are compatible.
Table 40. Compatibility of table and table space (or partition) lock modes
Lock Mode   IS    IX    S     U     SIX   X
IS          Yes   Yes   Yes   Yes   Yes   No
IX          Yes   Yes   No    No    No    No
S           Yes   No    Yes   Yes   No    No
U           Yes   No    Yes   No    No    No
SIX         Yes   No    No    No    No    No
X           No    No    No    No    No    No
The underlying data page or row locks are acquired to serialize the reading and
updating of index entries to ensure the data is logically consistent, meaning that the
data is committed and not subject to rollback or abort. The data locks can be held
for a long duration such as until commit. However, the page latches are only held
for a short duration while the transaction is accessing the page. Because the index
pages are not locked, hot spot insert scenarios (which involve several transactions
trying to insert different entries into the same index page at the same time) do not
cause contention problems in the index.
A query that uses index-only access might lock the data page or row, and that lock
can contend with other processes that lock the data. However, using lock avoidance
techniques can reduce the contention. See “Lock avoidance” on page 348 for more
information about lock avoidance.
Lock tuning
This section describes what you can change to affect how a particular application
uses transaction locks, under:
v “Bind options”
v “Isolation overriding with SQL statements” on page 351
v “The statement LOCK TABLE” on page 352
Bind options
These options determine when an application process acquires and releases its
locks and to what extent it isolates its actions from possible effects of other
processes acting concurrently.
Effect of LOCKPART YES: Partition locks follow the same rules as table space
locks, and all partitions are held for the same duration. Thus, if one package is
using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all
partitions use RELEASE(DEALLOCATE).
For table spaces defined as LOCKPART YES, lock demotion occurs as with other
table spaces; that is, the lock is demoted at the table space level, not the partition
level.
Restriction: The combination ACQUIRE(ALLOCATE) and RELEASE(DEALLOCATE) is not allowed
for BIND PACKAGE. Use this combination if processing efficiency is more important than concurrency. It is a good
choice for batch jobs that would release table and table space locks only to
reacquire them almost immediately. It might even improve concurrency, by allowing
batch jobs to finish sooner. Generally, do not use this combination if your
application contains many SQL statements that are often not executed.
IMS
A CHKP or SYNC call (for single-mode transactions), a GU call to the I/O
PCB, or a ROLL or ROLB call is completed
CICS
A SYNCPOINT command is issued.
Exception: If the cursor is defined WITH HOLD, table or table space locks
necessary to maintain cursor position are held past the commit point. (See “The
effect of WITH HOLD for a cursor” on page 350 for more information.)
v The least restrictive lock needed to execute each SQL statement is used except
when a more restrictive lock remains from a previous statement. In that case,
that lock is used without change.
Figure 116. How an application using RR isolation acquires locks. All locks are held until the
application commits.
Applications that use repeatable read can leave rows or pages locked for
longer periods, especially in a distributed environment, and they can claim
more logical partitions than similar applications using cursor stability.
They are also subject to being drained more often by utility operations.
Because so many locks can be taken, lock escalation might take place.
Frequent commits release the locks and can help avoid lock escalation.
With repeatable read, lock promotion occurs for a table space scan to prevent
the insertion of rows that might qualify for the predicate. (If access is via
index, DB2 locks the key range. If access is via table space scans, DB2
locks the table, partition, or table space.)
ISOLATION (RS)
Allows the application to read the same pages or rows more than once
without allowing qualifying rows to be updated or deleted by another application process.
Figure 117. How an application using RS isolation acquires locks when no lock avoidance
techniques are used. Locks L2 and L4 are held until the application commits. The other locks
aren’t held.
Applications using read stability can leave rows or pages locked for long
periods, especially in a distributed environment.
| Figure 118 and Figure 119 show processing of positioned update and delete
| operations without optimistic concurrency control and with optimistic
| concurrency control.
Figure 118. Positioned updates and deletes without optimistic concurrency control
Figure 119. Positioned updates and deletes with optimistic concurrency control
ISOLATION (UR)
Allows the application to read while acquiring few locks, at the risk of
reading uncommitted data. UR isolation applies only to read-only
operations: SELECT, SELECT INTO, or FETCH from a read-only result
table.
There is an element of uncertainty about reading uncommitted data.
Example: An application tracks the movement of work from station to
station along an assembly line. As items move from one station to another,
the application subtracts from the count of items at the first station and
adds to the count of items at the second. Assume you want to query the
count of items at all the stations, while the application is running
concurrently.
What can happen if your query reads data that the application has changed
but has not committed?
v If the application subtracts an amount from one record before adding it
to another, the query could miss the amount entirely.
v If the application adds first and then subtracts, the query could add the
amount twice.
If those situations can occur and are unacceptable, do not use UR isolation.
Restrictions: You cannot use UR isolation for the types of statement listed
below. If you bind with ISOLATION(UR), and the statement does not specify
WITH RR or WITH RS, then DB2 uses CS isolation for:
v INSERT, UPDATE, and DELETE
v Any cursor defined with FOR UPDATE OF
When can you use uncommitted read (UR)? You can probably use UR
isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part
number. It is rarely updated, and reading an uncommitted update is
probably no more damaging than reading the table 5 seconds earlier. Go
ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user.
For security reasons, updates can be made to the table only by members
of a single department. And that department is also the only one that can
query the entire table. It is easy to restrict queries to times when no
updates are being made and then run with UR isolation.
v When an error is acceptable.
Example: Spiffy wants to do some statistical analysis on employee data.
A typical question is, “What is the average salary by sex within education level?”
Local access: Locally, CURRENTDATA(YES) means that the data upon which
the cursor is positioned cannot change while the cursor is positioned on it. If the
cursor is positioned on data in a local base table or index, then the data returned
with the cursor is current with the contents of that table or index. If the cursor is
positioned on data in a work file, the data returned with the cursor is current only
with the contents of the work file; it is not necessarily current with the contents of
the underlying table or index.
Figure 120. How an application using isolation CS with CURRENTDATA(YES) acquires locks.
This figure shows access to the base table. The L2 and L4 locks are released after DB2
moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current
with the contents of the table or index, regardless of whether a work file is used.
Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.
If you are using parallelism but want to maintain currency with the data, you have
the following options:
v Disable parallelism (use SET CURRENT DEGREE = ’1’ or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).
To take the best advantage of this method of avoiding locks, make sure all
applications that are accessing data concurrently issue COMMITs frequently.
Figure 121 on page 349 shows how DB2 can avoid taking locks and Table 41 on
page 349 summarizes the factors that influence lock avoidance.
Figure 121. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This
figure shows access to the base table. If DB2 must take a lock, then locks are released when
DB2 moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).
Table 41. Lock avoidance factors. “Returned data” means data that satisfies the predicate.
“Rejected data” is that which does not satisfy the predicate.
                                               Avoid locks on   Avoid locks on
Isolation   CURRENTDATA   Cursor type          returned data?   rejected data?
UR          N/A           Read-only            N/A              N/A
CS          YES           Read-only            No               Yes
                          Updatable            No               Yes
                          Ambiguous            No               Yes
            NO            Read-only            Yes              Yes
                          Updatable            No               Yes
                          Ambiguous            Yes              Yes
RS          N/A           Read-only            No               Yes
                          Updatable            No               Yes
                          Ambiguous            No               Yes
RR          N/A           Read-only            No               No
                          Updatable            No               No
                          Ambiguous            No               No
For example, the plan value for CURRENTDATA has no effect on the packages
executing under that plan. If you do not specify a CURRENTDATA option explicitly
when you bind a package, the default is CURRENTDATA(YES).
The rules are slightly different for the bind options RELEASE and ISOLATION. The
values of those two options are set when the lock on the resource is acquired and
usually stay in effect until the lock is released. But a conflict can occur if a
statement that is bound with one pair of values requests a lock on a resource that
is already locked by a statement that is bound with a different pair of values. DB2
resolves the conflict by resetting each option with the available value that causes
the lock to be held for the greatest duration.
Table 42 shows how conflicts between isolation levels are resolved. The first column
is the existing isolation level, and the remaining columns show what happens when
another isolation level is requested by a new application process.
Table 42. Resolving isolation conflicts
UR CS RS RR
UR n/a CS RS RR
CS CS n/a RS RR
RS RS RS n/a RR
RR RR RR RR n/a
For locks and claims needed for cursor position, the rules described above differ as
follows:
Page and row locks: If your installation specifies NO on the RELEASE LOCKS
field of installation panel DSNTIP4, as described in Part 5 (Volume 2) of DB2
Administration Guide, a page or row lock is held past the commit point. This page
or row lock is not necessary for cursor position, but the NO option is provided for
compatibility with applications that might rely on this lock. However, an X or U lock is demoted to an
S lock at that time. (Because changes have been committed, exclusive control is no
longer needed.) After the commit point, the lock is released at the next commit
point, provided that no cursor is still positioned on that page or row.
A YES for RELEASE LOCKS means that no data page or row locks are held past
commit.
Claims: All claims, for any claim class, are held past the commit point. They are
released at the next commit point after all held cursors have moved off the object or
have been closed.
Example: The following statement finds the maximum, minimum, and average bonus in the sample employee table.
The statement is executed with uncommitted read isolation, regardless of the value
of ISOLATION with which the plan or package containing the statement is bound.
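A sketch of such a statement (the host variable names are assumptions):
EXEC SQL
  SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS)
    INTO :MAXBONUS, :MINBONUS, :AVGBONUS
    FROM DSN8710.EMP
    WITH UR;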
Using KEEP UPDATE LOCKS on the WITH clause: You can use the
KEEP UPDATE LOCKS clause when you specify a SELECT with FOR UPDATE
OF. This option is valid only when you use WITH RR or WITH RS. By using this
clause, you tell DB2 to acquire an X lock instead of a U or S lock on all the
qualified pages or rows.
Here is an example:
SELECT ...
  FOR UPDATE OF ...
  WITH RS KEEP UPDATE LOCKS;
With read stability (RS) isolation, a row or page rejected during stage 2 processing
still has the X lock held on it, even though it is not returned to the application.
With repeatable read (RR) isolation, DB2 acquires the X locks on all pages or rows
that fall within the range of the selection expression.
Executing the statement requests a lock immediately, unless a suitable lock exists
already, as described below. The bind option RELEASE determines when locks
acquired by LOCK TABLE or LOCK TABLE with the PART option are released.
You can use LOCK TABLE on any table, including auxiliary tables of LOB table
spaces. See “The LOCK TABLE statement” on page 358 for information about
locking auxiliary tables.
Caution when using LOCK TABLE with simple table spaces: The statement
locks all tables in a simple table space, even though you name only one table. No
other process can update the table space for the duration of the lock. If the lock is
in exclusive mode, no other process can read the table space, unless that process
is running with UR isolation.
Additional examples of LOCK TABLE: You might want to lock a table or partition
that is normally shared for any of the following reasons:
Taking a “snapshot”
If you want to access an entire table throughout a unit of work as it
was at a particular moment, you must lock out concurrent changes.
If other processes can access the table, use LOCK TABLE IN
SHARE MODE. (RR isolation is not enough; it locks out changes
only from rows or pages you have already accessed.)
Avoiding overhead
If you want to update a large part of a table, it can be more efficient
to prevent concurrent access than to lock each page as it is
updated and unlock it when it is committed. Use LOCK TABLE IN
EXCLUSIVE MODE.
Preventing timeouts
Your application has a high priority and must not risk timeouts from
contention with other application processes. Depending on whether
your application updates or not, use either LOCK TABLE IN EXCLUSIVE
MODE or LOCK TABLE IN SHARE MODE.
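For example, to take any of the actions just described, you might issue statements like the following (the table name is hypothetical; the PART form applies to a table in a partitioned table space):
LOCK TABLE PARTSTAB IN SHARE MODE;
LOCK TABLE PARTSTAB PART 1 IN EXCLUSIVE MODE;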
Access paths
The access path used can affect the mode, size, and even the object of a lock. For
example, an UPDATE statement using a table space scan might need an X lock on
the entire table space. If rows to be updated are located through an index, the
same statement might need only an IX lock on the table space and X locks on
individual pages or rows.
If you use the EXPLAIN statement to investigate the access path chosen for an
SQL statement, then check the lock mode in column TSLOCKMODE of the
resulting PLAN_TABLE. If the table resides in a nonsegmented table space, or is
defined with LOCKSIZE TABLESPACE, the mode shown is that of the table space
lock. Otherwise, the mode is that of the table lock.
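A minimal sketch of that check (the query number and table are hypothetical, and a PLAN_TABLE is assumed to exist under your authorization ID):
EXPLAIN PLAN SET QUERYNO = 13 FOR
  UPDATE MYTABLE SET STATUS = 'Y' WHERE DEPTNO = 'D11';

SELECT TSLOCKMODE FROM PLAN_TABLE WHERE QUERYNO = 13;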
IMS
A CHKP or SYNC call, or (for single-mode transactions) a GU call to the I/O
PCB
CICS
A SYNCPOINT command.
LOB locks
The locking activity for LOBs is described separately from transaction locks
because the purpose of LOB locks is different than that of regular transaction locks.
Terminology: A lock that is taken on a LOB value in a LOB table space is called a
LOB lock.
DB2 also obtains locks on the LOB table space and the LOB values stored in that
LOB table space, but those locks have the following primary purposes:
v To determine whether space from a deleted LOB can be reused by an inserted or
updated LOB
Storage for a deleted LOB is not reused until no more readers (including held
locators) are on the LOB and the delete operation has been committed.
v To prevent deallocating space for a LOB that is currently being read
A LOB can be deleted from one application’s point-of-view while a reader from
another application is reading the LOB. The reader continues reading the LOB
because all readers, including those readers that are using uncommitted read
isolation, acquire S-locks on LOBs to prevent the storage for the LOB they are
reading from being deallocated. That lock is held until commit. A held LOB
| locator or a held cursor causes the LOB lock and LOB table space lock to be held
past commit.
Table 44 shows the relationship between the action that is occurring on the LOB
value and the associated LOB table space and LOB locks that are acquired.
Table 44. Locks that are acquired for operations on LOBs. This table does not account for
gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE
statement, or lock escalation.
Action on LOB value        LOB table space lock   LOB lock    Comment
Read (including UR)        IS                     S           Prevents storage from being reused
                                                              while the LOB is being read or while
                                                              locators are referencing the LOB
Insert                     IX                     X           Prevents other processes from seeing
                                                              a partial LOB
Delete                     IS                     S           To hold space in case the delete is
                                                              rolled back. (The X is on the base
                                                              table row or page.) Storage is not
                                                              reusable until the delete is committed
                                                              and no other readers of the LOB exist.
Update                     IS->IX                 Two LOB     Operation is a delete followed by an
                                                  locks: an   insert.
                                                  S-lock for
                                                  the delete
                                                  and an
                                                  X-lock for
                                                  the insert.
Update the LOB to null     IS                     S           No insert, just a delete.
or zero-length
Update a null or           IX                     X           No delete, just an insert.
zero-length LOB to a
value
Duration of locks
Duration of locks on LOB table spaces
Locks on LOB table spaces are acquired when they are needed; that is, the
ACQUIRE option of BIND has no effect on when the table space lock on the LOB
table space is taken. The table space lock is released according to the value
specified on the RELEASE option of BIND (except when a cursor is defined WITH
HOLD or if a held LOB locator exists).
If the application uses HOLD LOCATOR, the LOB lock is not freed until the first
commit operation after a FREE LOCATOR statement is issued, or until the thread is
deallocated.
| A note about INSERT with fullselect: Because LOB locks are held until commit
and because locks are put on each LOB column in both a source table and a target
table, it is possible that a statement such as an INSERT with a fullselect that
involves LOB columns can accumulate many more locks than a similar statement
that does not involve LOB columns. To prevent system problems caused by too
many locks, you can:
v Ensure that you have lock escalation enabled for the LOB table spaces that are
involved in the INSERT. In other words, make sure that LOCKMAX is non-zero
for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before
executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the
user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces. (Locking the
auxiliary table that is contained in the LOB table space locks the LOB table
space.)
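For example, the first two of these approaches might be implemented with statements like the following (the database and table space names are hypothetical):
ALTER TABLESPACE MYDB.MYLOBTS LOCKMAX SYSTEM;
ALTER TABLESPACE MYDB.MYLOBTS LOCKSIZE TABLESPACE;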
If your application intercepts abends, DB2 commits work because it is unaware that
an abend has occurred. If you want DB2 to roll back work automatically when an
abend occurs in your program, do not let the program or runtime environment
intercept the abend. For example, if your program uses Language Environment, and
you want DB2 to roll back work automatically when an abend occurs in the
program, specify the runtime options ABTERMENC(ABEND) and TRAP(ON).
A unit of work is a logically distinct procedure containing steps that change the data.
If all the steps complete successfully, you want the data changes to become
permanent. But, if any of the steps fail, you want all modified data to return to the
original value before the procedure began.
For example, suppose two employees in the sample table DSN8710.EMP exchange
offices. You need to exchange their office phone numbers in the PHONENO
column. You would use two UPDATE statements to make each phone number
current. Both statements, taken together, are a unit of work. You want both
statements to complete successfully. For example, if only one statement is
successful, you want both phone numbers rolled back to their original values before
attempting another update.
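A sketch of that unit of work (the employee numbers and telephone values are illustrative), as it might appear in a TSO or batch program:
EXEC SQL
  UPDATE DSN8710.EMP SET PHONENO = '4502' WHERE EMPNO = '000030';
EXEC SQL
  UPDATE DSN8710.EMP SET PHONENO = '2167' WHERE EMPNO = '000130';
EXEC SQL COMMIT;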
When a unit of work completes, all locks implicitly acquired by that unit of work after
it begins are released, allowing a new unit of work to begin.
The amount of processing time used by a unit of work in your program determines
the length of time DB2 prevents other users from accessing that locked data. When
several programs try to use the same data concurrently, each program’s unit of
work must be as short as possible to minimize the interference between the
programs. The remainder of this chapter describes the way a unit of work functions
in various environments. For more information on unit of work, see Chapter 1 of
DB2 SQL Reference or Part 4 (Volume 1) of DB2 Administration Guide.
A commit point occurs when you issue a COMMIT statement or your program
terminates normally. You should issue a COMMIT statement only when you are sure
the data is in a consistent state.
Before you can connect to another DBMS, you must issue a COMMIT statement. If
the system fails at this point, DB2 cannot know that your transaction is complete. In
this case, as in the case of a failure during a one-phase commit operation for a
single subsystem, you must make your own provision for maintaining data integrity.
If your program abends or the system fails, DB2 backs out uncommitted data
changes. Changed data returns to its original condition without interfering with other
system activities.
Consider the inventory example, in which the quantity of items sold is subtracted
from the inventory file and then added to the reorder file. When both transactions
complete (and not before) and the data in the two files is consistent, the program
can then issue a DL/I TERM call or a SYNCPOINT command. If one of the steps
fails, you want the data to return to the value it had before the unit of work began.
That is, you want it rolled back to a previous point of consistency. You can achieve
this by using the SYNCPOINT command with the ROLLBACK option.
By using a SYNCPOINT command with the ROLLBACK option, you can back out
uncommitted data changes. For example, a program that updates a set of related
rows sometimes encounters an error after updating several of them. The program
can use the SYNCPOINT command with the ROLLBACK option to undo all of the
updates without giving up control.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. You can coordinate DB2 with CICS functions used in programs, so
that DB2 and non-DB2 data are consistent.
A commit point can occur in a program as the result of any one of the following four
events:
v The program terminates normally. Normal program termination is always a
commit point.
v The program issues a checkpoint call. Checkpoint calls are a program’s means
of explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. The SYNC call is a Fast Path system service
call to request commit point processing. You can use a SYNC call only in a
nonmessage-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur
when the program retrieves a new message. IMS considers a new message the
start of a new unit of work in the program. Unless you define the transaction as
single- or multiple-mode on the TRANSACT statement of the APPLCTN macro
for the program, retrieving a new message does not signal a commit point. For
more information about the APPLCTN macro, see the IMS Install Volume 2:
System Definition and Tailoring.
– If you specify single-mode, a commit point in DB2 occurs each time the
program issues a call to retrieve a new message. Specifying single-mode can
simplify recovery; you can restart the program from the most recent call for a
new message if the program abends. When IMS restarts the program, the
program starts by processing the next message.
– If you specify multiple-mode, a commit point occurs when the program issues
a checkpoint call or when it terminates normally. Those are the only times
during the program that IMS sends the program’s output messages to their
destinations. Because there are fewer commit points to process in
multiple-mode programs than in single-mode programs, multiple-mode
programs could perform slightly better than single-mode programs. When a
multiple-mode program abends, IMS can restart it only from a checkpoint call.
Instead of having only the most recent message to reprocess, a multiple-mode program might have several messages to reprocess when it is restarted, depending on when it issued its last checkpoint call.
DB2 does some processing with single- and multiple-mode programs that IMS does
not. When a multiple-mode program issues a call to retrieve a new message, DB2
performs an authorization check and closes all open cursors in the program.
If the program processes messages, IMS sends the output messages that the
application program produces to their final destinations. Until the program reaches a
commit point, IMS holds the program’s output messages at a temporary destination.
If the program abends, people at terminals, and other application programs do not
receive inaccurate information from the terminating application program.
The SQL COMMIT and ROLLBACK statements are not valid in an IMS
environment.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original state without interfering with other system activities.
Sometimes DB2 data does not return to a consistent state immediately. DB2 does
not process data in an indoubt state until you restart IMS. To ensure that DB2 and
IMS are synchronized, you must restart both DB2 and IMS.
There are two calls available to IMS programs to simplify program recovery: the
symbolic checkpoint call and the restart call.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program to be restored at restart. Symbolic checkpoint calls do not
support OS/VS files; if your program accesses OS/VS files, you can convert those
files to GSAM and use symbolic checkpoints. DB2 always recovers to the last
checkpoint. You must restart the program from that point.
However, message-driven BMPs must issue checkpoint calls rather than get-unique
calls to establish commit points, because they can restart from a checkpoint only. If
a program abends after issuing a get-unique call, IMS backs out the database
updates to the most recent commit point—the get-unique call.
Checkpoints also close all open cursors, which means you must reopen the cursors
you want and re-establish positioning.
If a batch-oriented BMP does not issue checkpoints frequently enough, IMS can
abend that BMP or another application program for one of these reasons:
v If a BMP retrieves and updates many database records between checkpoint
calls, it can monopolize large portions of the databases and cause long waits for
other programs needing those segments. (The exception to this is a BMP with a
processing option of GO. IMS does not enqueue segments for programs with this
processing option.) Issuing checkpoint calls releases the segments that the BMP
has enqueued and makes them available to other programs.
v If IMS is using program isolation enqueuing, the space needed to enqueue
information about the segments that the program has read and updated must not
exceed the amount defined for the IMS system. If a BMP enqueues too many segments, IMS can abnormally terminate that BMP.
When you issue a DL/I CHKP call from an application program using DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits
all the DB2 database resources. No checkpoint information is recorded for DB2
databases in the IMS log or the DB2 log. The application program must record
relevant information about DB2 databases for a checkpoint, if necessary.
One way to do this is to put such information in a data area included in the DL/I
CHKP call. There can be undesirable performance implications of re-establishing
position within a DB2 database as a result of the commit processing that takes
place because of a DL/I CHKP call. The fastest way to re-establish a position in a
DB2 database is to use an index on the target table, with a key that matches
one-to-one with every column in the SQL predicate.
Another limitation of processing DB2 databases in a BMP program is that you can
restart the program only from the latest checkpoint and not from any checkpoint, as
in IMS.
Using ROLL
Issuing a ROLL call causes IMS to terminate the program with a user abend code
U0778. This terminates the program without a storage dump.
When you issue a ROLL call, the only option you supply is the call function, ROLL.
Using ROLB
The advantage of using ROLB is that IMS returns control to the program after
executing ROLB, thus the program can continue processing. The options for ROLB
are:
v The call function, ROLB
v The name of the I/O PCB.
In batch programs
If your IMS system log is on direct access storage, and if the run option BKO is Y to
specify dynamic back out, you can use the ROLB call in a batch program. The
ROLB call backs out the database updates since the last commit point and returns
control to your program. You cannot specify the address of an I/O area as one of
the options on the call; if you do, your program receives an AD status code. You
must, however, have an I/O PCB for your program. Specify CMPAT=YES in the
PSBGEN statement for your program’s PSB. For more
information on using the CMPAT keyword, see IMS Utilities Reference: System.
Example: Rolling back to the most recently created savepoint: When the
ROLLBACK TO SAVEPOINT statement is executed in the following code, DB2 rolls
back work to savepoint B.
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
   .
   .
   .
EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS;
   .
   .
   .
EXEC SQL ROLLBACK TO SAVEPOINT;
When savepoints are active, you cannot access remote sites using three-part
names or aliases for three-part names. You can, however, use DRDA access with
explicit CONNECT statements when savepoints are active. If you set a savepoint
before you execute a CONNECT statement, the scope of that savepoint is the local
site. If you set a savepoint after you execute the CONNECT statement, the scope
of that savepoint is the site to which you are connected.
You can set a savepoint with the same name multiple times within a unit of work.
Each time that you set the savepoint, the new value of the savepoint replaces the
old value.
Example: Setting a savepoint multiple times: Suppose that the following actions
take place within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
If you do not want a savepoint to have different values within a unit of work, you
can use the UNIQUE option in the SAVEPOINT statement. If an application
executes a SAVEPOINT statement for a savepoint that was previously defined as
unique, an SQL error occurs.
Savepoints are automatically released at the end of a unit of work. However, if you
no longer need a savepoint before the end of a transaction, you should execute the
SQL RELEASE SAVEPOINT statement. Releasing savepoints is essential if you
need to use three-part names to access remote locations.
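Sketches of a unique savepoint and of releasing it (the savepoint name is arbitrary):
EXEC SQL SAVEPOINT S UNIQUE ON ROLLBACK RETAIN CURSORS;
EXEC SQL RELEASE SAVEPOINT S;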
In this chapter, we assume that you are requesting services from a remote DBMS.
That DBMS is a server in that situation, and your local system is a requester or
client.
Your application can be connected to many DBMSs at one time; the one currently
performing work is the current server. When the local system is performing work, it
also is called the current server.
A remote server can be truly remote in the physical sense: thousands of miles
away. But that is not necessary; it could even be another subsystem of the same
operating system your local DBMS runs under. We assume that your local DBMS is
an instance of DB2 for OS/390 and z/OS. A remote server could be an instance of
DB2 for OS/390 and z/OS also, or an instance of one of many other products.
A DBMS, whether local or remote, is known to your DB2 system by its location
name. The location name of a remote DBMS is recorded in the communications
database. (If you need more information about location names or the
communications database, see Part 3 of DB2 Installation Guide.)
Example 1: You can write a query like this to access data at a remote server:
SELECT * FROM CHICAGO.DSN8710.EMP
WHERE EMPNO = '0001000';
The mode of access depends on whether you bind your DBRMs into packages and
on the value of field DATABASE PROTOCOL in installation panel DSNTIP5 or the
value of bind option DBPROTOCOL. Bind option DBPROTOCOL overrides the
installation setting.
Example 2: You can also write statements like these to accomplish the same task:
EXEC SQL
  CONNECT TO CHICAGO;
EXEC SQL
  SELECT * FROM DSN8710.EMP
    WHERE EMPNO = '0001000';
Before you can execute the query at location CHICAGO, you must bind the
application as a remote package at the CHICAGO server. Before you can run the
application, you must also bind a local plan with a package list that includes the
remote package.
Example 3: You can call a stored procedure, which is a subroutine that can contain
many SQL statements. Your program executes this:
EXEC SQL
CONNECT TO ATLANTA;
EXEC SQL
CALL procedure_name (parameter_list);
The parameter list is a list of host variables that is passed to the stored procedure
and into which it returns the results of its execution. The stored procedure must
already exist at location ATLANTA.
Two methods of access: The examples above show two different methods for
accessing distributed data.
v Example 1 shows a statement that can be executed with DB2 private protocol
access or DRDA access.
If you bind the DBRM that contains the statement into a plan at the local DB2
and specify the bind option DBPROTOCOL(PRIVATE), you access the server
using DB2 private protocol access.
If you bind the DBRM that contains the statement using one of these methods,
you access the server using DRDA access.
Method 1:
– Bind the DBRM into a package at the local DB2 using the bind option
DBPROTOCOL(DRDA®).
– Bind the DBRM into a package at the remote location (CHICAGO).
– Bind the packages into a plan using bind option DBPROTOCOL(DRDA).
Method 2:
– Bind the DBRM into a package at the remote location.
– Bind the remote package and the DBRM into a plan at the local site, using the
bind option DBPROTOCOL(DRDA). (Sample bind subcommands for both methods
appear below.)
v Examples 2 and 3 show statements that are executed with DRDA access only.
When you use these methods for DRDA access, your application must include an
explicit CONNECT statement to switch your connection from one system to
another.
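For illustration, the bind steps for the two methods might look like the following
DSN subcommands. This is a minimal sketch: the collection MYCOLL, plan MYPLAN,
and DBRM member MYPROG are hypothetical names, and all other bind options are
omitted.
Method 1:
  BIND PACKAGE(MYCOLL) MEMBER(MYPROG) DBPROTOCOL(DRDA)
  BIND PACKAGE(CHICAGO.MYCOLL) MEMBER(MYPROG)
  BIND PLAN(MYPLAN) PKLIST(*.MYCOLL.MYPROG) DBPROTOCOL(DRDA)
Method 2:
  BIND PACKAGE(CHICAGO.MYCOLL) MEMBER(MYPROG)
  BIND PLAN(MYPLAN) MEMBER(MYPROG) PKLIST(CHICAGO.MYCOLL.MYPROG) DBPROTOCOL(DRDA)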
If you update two or more DBMSs, you must consider how updates can be
coordinated, so that units of work at the two DBMSs are either both committed or
both rolled back. Be sure to read “Coordinating updates to two or more data
sources” on page 379.
You can use the resource limit facility at the server to govern distributed SQL
statements. Governing is by plan for DB2 private protocol access and by package
for DRDA access. See “Considerations for moving from DB2 private protocol access
to DRDA access” on page 393 for information on changes you need to make to
your resource limit facility tables when you move from DB2 private protocol access
to DRDA access.
Because platforms other than DB2 for OS/390 and z/OS might not support the
three-part name syntax, you should not code applications with three-part names if
you plan to port those applications to other platforms.
In a three-part table name, the first part denotes the location. The local DB2 makes
and breaks an implicit connection to a remote server as needed.
The following overview shows how the application uses three-part names:
Read input values
Do for all locations
Read location name
Set up statement to prepare
Prepare statement
Execute statement
End loop
Commit
After the application obtains a location name, for example 'SAN_JOSE', it next
creates the following character string:
INSERT INTO SAN_JOSE.DSN8710.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then
executes these statements:
EXEC SQL
PREPARE STMT1 FROM :INSERTX;
EXEC SQL
EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8710.PROJ)” on page 822.
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
Programming hint: You might find it convenient to use aliases when creating
character strings that become prepared statements, instead of using full three-part
names like SAN_JOSE.DSN8710.PROJ. For information on aliases, see the section
on CREATE ALIAS in DB2 SQL Reference.
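For example, a minimal sketch (the alias name SJPROJ is hypothetical):
CREATE ALIAS SJPROJ FOR SAN_JOSE.DSN8710.PROJ;
The application can then build the shorter character string
INSERT INTO SJPROJ VALUES (?,?,?,?,?,?,?,?)
and prepare and execute it as shown above.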
In the following example, Spiffy’s application executes CONNECT for each server in turn, and
the server executes INSERT. In this case the tables to be updated each have the
same name, though each is defined at a different server. The application executes
the statements in a loop, with one iteration for each server.
The application connects to each new server by means of a host variable in the
CONNECT statement. CONNECT changes the special register CURRENT SERVER
to show the location of the new server. The values to insert in the table are
transmitted to a location as input host variables.
The following overview shows how the application uses explicit CONNECTs:
Read input values
Do for all locations
Read location name
Connect to location
Execute insert statement
End loop
Commit
Release all
The application inserts a new location name into the variable LOCATION_NAME,
and executes the following statements:
EXEC SQL
CONNECT TO :LOCATION_NAME;
EXEC SQL
INSERT INTO DSN8710.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8710.PROJ)” on page 822. LOCATION_NAME is
a character-string variable of length 16.
Releasing connections
When you connect to remote locations explicitly, you must also break those
connections explicitly. You have considerable flexibility in determining how long
connections remain open, so the RELEASE statement differs significantly from
CONNECT.
Examples: Using the RELEASE statement, you can place any of the following in
the release-pending state.
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v The current SQL connection, whatever its location name:
EXEC SQL RELEASE CURRENT;
v All connections except the local connection:
EXEC SQL RELEASE ALL;
v All DB2 private protocol connections. If the first phase of your application
program uses DB2 private protocol access and the second phase uses DRDA
access, then open DB2 private protocol connections from the first phase could
cause a CONNECT operation to fail in the second phase. To prevent that error,
execute the following statement before the commit operation that separates the
two phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between
instances of DB2 for OS/390 and z/OS.
Three-part names and multiple servers: If you use a three-part name, or an alias
that resolves to one, in a statement executed at a remote server by DRDA access,
and if the location name is not that of the server, then the method by which the
remote server accesses data at the named location depends on the value of
DBPROTOCOL. If the package at the first remote server is bound with
DBPROTOCOL(PRIVATE), DB2 uses DB2 private protocol access to access the
second remote server. If the package at the first remote server is bound with
DBPROTOCOL(DRDA), DB2 uses DRDA access to access the second remote
server. We recommend that you follow these steps so that access to the second
remote server is by DRDA access:
v Rebind the package at the first remote server with DBPROTOCOL(DRDA).
v Bind the package that contains the three-part name at the second server, as
shown in the sketch that follows.
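For illustration, assuming that the first remote server is CHICAGO, the second
server is SAN_JOSE, and the collection and package names MYCOLL and MYPROG are
hypothetical, the subcommands might look like this minimal sketch:
  REBIND PACKAGE(CHICAGO.MYCOLL.MYPROG) DBPROTOCOL(DRDA)
  BIND PACKAGE(SAN_JOSE.MYCOLL) MEMBER(MYPROG)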
Accessing declared temporary tables using three-part names: You can access
a remote declared temporary table using a three-part name only if you use DRDA
access. However, if you combine explicit CONNECT statements and three-part
names in your application, a reference to a remote declared temporary table must
be a forward reference. For example, you can perform the following series of
actions, which includes a forward reference to a declared temporary table:
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the remote site */
EXEC SQL CONNECT RESET; /* Connect back to local site */
EXEC SQL INSERT INTO CHICAGO.SESSION.T1
  VALUES ('ABCDEF');                   /* Access the temporary table*/
/* at the remote site (forward reference) */
However, you cannot perform the following series of actions, which includes a
backward reference to the declared temporary table:
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the local site (ATLANTA)*/
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1
  VALUES ('ABCDEF');                   /* Cannot access temp table */
/* from the remote site (backward reference)*/
Savepoints: In a distributed environment, you can set savepoints only if you use
DRDA access with explicit CONNECT statements. If you set a savepoint and then
execute an SQL statement with a three-part