DB2 Version 9.5 for Linux, UNIX, and Windows
Data Movement Utilities Guide and Reference
Version 9 Release 5
SC23-5847-03
Note
Before using this information and the product it supports, read the general information under Appendix F, “Notices,” on
page 473.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/
planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright IBM Corporation 1993, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About this book . . . . . v

Part 1. Data movement utilities and reference . . . . . 1

Chapter 1. Data movement options available in DB2 V9.5 . . . . . 3

Chapter 2. Export utility . . . . . 7
Export utility overview . . . . . 7
Privileges and authorities required to use the export utility . . . . . 8
Exporting data . . . . . 8
  Exporting XML data . . . . . 9
  LBAC-protected data export considerations . . . . . 12
  Table export considerations . . . . . 13
  Typed table export considerations . . . . . 14
  Identity column export considerations . . . . . 16
  LOB export considerations . . . . . 16
Reference - Export . . . . . 18
  EXPORT . . . . . 18
  EXPORT command using the ADMIN_CMD procedure . . . . . 28
  db2Export - Export data from a database . . . . . 38
  Export sessions - CLP examples . . . . . 44

Chapter 3. Import utility . . . . . 47
Import overview . . . . . 47
Privileges and authorities required to use import . . . . . 49
Importing data . . . . . 50
  Importing XML data . . . . . 52
  Imported table re-creation . . . . . 53
  Typed table import considerations . . . . . 54
  LBAC-protected data import considerations . . . . . 57
  Buffered-insert imports . . . . . 59
  Identity column import considerations . . . . . 60
  Generated column import considerations . . . . . 61
  LOB import considerations . . . . . 62
  User-defined distinct types import considerations . . . . . 63
Additional considerations for import . . . . . 63
  Client/server environments and import . . . . . 63
  Table locking during import . . . . . 64
Reference - Import . . . . . 65
  IMPORT . . . . . 65
  IMPORT command using the ADMIN_CMD procedure . . . . . 89
  db2Import - Import data into a table, hierarchy, nickname or view . . . . . 113
  Import sessions - CLP examples . . . . . 126

Chapter 4. Load utility . . . . . 129
Load overview . . . . . 129
Privileges and authorities required to use load . . . . . 132
  LOAD authority . . . . . 133
Loading data . . . . . 133
  Loading XML data . . . . . 135
  Load considerations for partitioned tables . . . . . 136
  LBAC-protected data load considerations . . . . . 139
  Identity column load considerations . . . . . 140
  Generated column load considerations . . . . . 143
  Moving data using the CURSOR file type . . . . . 145
  Propagating dependent immediate staging tables . . . . . 148
  Refreshing dependent immediate materialized query tables . . . . . 149
  Multidimensional clustering considerations . . . . . 150
  Moving data using a customized application (user exit) . . . . . 151
Additional considerations for load . . . . . 159
  Parallelism and loading . . . . . 159
  Index creation during load operations . . . . . 159
  Compression dictionary creation during load operations . . . . . 169
  Options for improving load performance . . . . . 170
Load features for maintaining referential integrity . . . . . 174
  Checking for integrity violations following a load operation . . . . . 174
  Checking for constraint violations using SET INTEGRITY . . . . . 177
  Table locking during load operations . . . . . 180
  Read access load operations . . . . . 181
  Table space states during and after load operations . . . . . 183
  Table states during and after load operations . . . . . 184
  Load exception tables . . . . . 186
Failed or incomplete loads . . . . . 187
  Restarting an interrupted load operation . . . . . 187
  Restarting or terminating an ALLOW READ ACCESS load operation . . . . . 188
  Recovering data with the load copy location file . . . . . 189
  Load dump file . . . . . 191
  Load temporary files . . . . . 191
  Load utility log records . . . . . 192
Load overview–partitioned database environments . . . . . 192
  Loading data in a partitioned database environment . . . . . 194
  Monitoring a load operation in a partitioned database environment using the LOAD QUERY command . . . . . 200
  Resuming, restarting, or terminating load operations in a partitioned database environment . . . . . 202
  Migration and version compatibility . . . . . 204
  Reference - Load in a partitioned environment . . . . . 204
Reference - Load . . . . . 212
  LOAD . . . . . 212
  LOAD command using the ADMIN_CMD procedure . . . . . 244
  db2Load - Load data into a table . . . . . 276
Use the import utility (rather than the load utility) when:
v the target table has constraints and you don't want the target table to be put in the Set Integrity Pending state
v the target table has triggers and you want them fired
References: Importing data

Export utility overview
The export utility is a relatively simple, yet flexible data movement utility. You can
activate it through the Control Center, by issuing the EXPORT command in the
CLP, by calling the ADMIN_CMD stored procedure, or by calling the db2Export
API through a user application.
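For example, a minimal sketch of invoking the export utility through the ADMIN_CMD stored procedure from the CLP (the table name and the server-side output path are illustrative):
db2 "CALL SYSPROC.ADMIN_CMD('EXPORT TO /home/user/staff.ixf OF IXF SELECT * FROM staff')"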
Additional options
There are a number of parameters that allow you to customize an export operation.
File type modifiers offer many options such as allowing you to change the format
of the data, date and time stamps, or code page, or have certain data types written
to separate files. Using the METHOD parameters, you can specify different
column names to be used for the exported data.
You can export from tables that include one or more columns with an XML data
type. Use the XMLFILE, XML TO, and XMLSAVESCHEMA parameters to specify
details about how those exported documents are stored.
There are a few ways to improve the export utility’s performance. As the export
utility is an embedded SQL application and does SQL fetches internally,
optimizations that apply to SQL operations apply to the export utility as well.
Consider taking advantage of large buffer pools, indexing, and sort heaps. In
addition, try to minimize device contention on the output files by placing them
away from the containers and log devices.
Together, privileges and authorities control access to the database manager and its
database objects. You can access only those objects for which you have the
appropriate authorization: that is, the required privilege or authority.
When you are exporting LBAC-protected data, the session authorization ID must
be allowed to read the rows or columns that you are trying to export. Protected
rows that the session authorization ID is not authorized to read are not exported. If
the SELECT statement includes any protected columns that the session
authorization ID is not allowed to read, the export utility fails, and an error
(SQLSTATE 42512) is returned.
Exporting data
Use the export utility to export data from a database to a file. The file can have
one of several external file formats. You can specify the data to be exported by
supplying an SQL SELECT statement or by providing hierarchical information for
typed tables.
You need SYSADM authority, DBADM authority, the CONTROL privilege, or the
SELECT privilege on each participating table or view to export data from a
database.
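For example, a user who only needs to export from a single table could be granted the SELECT privilege on it; a sketch (the table name and authorization ID are illustrative):
db2 grant select on table hr.staff to user exportuser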
Before running the export utility, you must be connected (or be able to implicitly
connect) to the database from which you will export the data. If implicit connect is
enabled, a connection to the default database is established. Utility access to Linux,
UNIX, or Windows database servers from Linux, UNIX, or Windows clients must
be through a direct connection through the engine and not through a DB2 Connect
gateway or loop back environment.
Because the utility issues a COMMIT statement, you should complete all
transactions and release all locks by issuing a COMMIT or a ROLLBACK statement
before running the export utility. There is no requirement for applications accessing
the table and using separate connections to disconnect.
Procedure
1. From the Control Center, expand the object tree until you find the Tables or
Views folder.
2. Click on the folder that you want to work with. Any existing tables or views
are displayed in the pane on the right side of the window (the contents pane).
3. In the contents pane, right-click the table or view that you want, and select
Export from the pop-up menu. The Export Table notebook opens.
Example
Detailed information about the Export Table notebook is provided in the Control
Center online help facility.
A very simple export operation requires you to specify only a target file, a file
format, and a source table for the SELECT statement. For example:
db2 export to filename of ixf select * from table
where filename is the name of the output file that you want to create and export,
ixf is the file format, and table is the name of the table that contains the data you
want to copy.
However, you might also want to specify a messages file to which warning and
error messages will be written. To do that, add the MESSAGES parameter and a
message file name (in this case, msgs.txt) so the command is:
db2 export to filename of ixf messages msgs.txt select * from table
The XML data, however, is represented in the main data file with an XML data
specifier (XDS). The XDS is a string represented as an XML tag named "XDS",
which has attributes that describe information about the actual XML data in the
column; such information includes the name of the file that contains the actual
XML data, and the offset and length of the XML data within that file.
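For example, an XDS entry in the exported data file might look like the following (the file name, offset, and length values are illustrative):
<XDS FIL='xmldocs.001.xml' OFF='100' LEN='300' />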
The destination paths and base names of the exported XML files can be specified
with the XML TO and XMLFILE options. If the XML TO or XMLFILE option is
specified, the format of the exported XML file names, stored in the FIL attribute of
the XDS, is xmlfilespec.xxx.xml, where xmlfilespec is the value specified for the
XMLFILE option, and xxx is a sequence number for xml files produced by the
export utility. Otherwise, the format of the exported XML file names is
exportfilename.xxx.xml, where exportfilename is the name of the exported data file
and xxx is a sequence number.
By default, exported XML files are written to the path of the exported data file.
The default base name for exported XML files is the name of the exported data file,
with an appended 3-digit sequence number and the .xml extension.
Examples
For the following examples, imagine a table USER.T1 containing four columns and
two rows:
C1 INTEGER
C2 XML
C3 VARCHAR(10)
C4 XML
Table 2. USER.T1

Row 1:
  C1: 2
  C2: <?xml version="1.0" encoding="UTF-8" ?><note time="12:00:00"><to>You</to><from>Me</from><heading>note1</heading><body>Hello World!</body></note>
  C3: 'char1'
  C4: <?xml version="1.0" encoding="UTF-8" ?><note time="13:00:00"><to>Him</to><from>Her</from><heading>note2</heading><body>Hello World!</body></note>

Row 2:
  C1: 4
  C2: NULL
  C3: 'char2'
  C4: <?xml version="1.0" encoding="UTF-8" ?><note time="14:00:00"><to>Us</to><from>Them</from><heading>note3</heading><body>Hello World!</body></note>
Example 1
The following command exports the contents of USER.T1 in DEL format to the file
"t1export.del". XML documents contained in columns C2 and C4 are written to the
path "/home/user/xmlpath". The XML files are named with the base name
"xmldocs", with multiple exported XML documents written to the same XML file.
The XMLSAVESCHEMA option indicates that XML schema information is saved
during the export procedure.
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath
XMLFILE xmldocs XMLSAVESCHEMA SELECT * FROM USER.T1
Example 3
The following command is similar to Example 2, except that each exported XML
document is written to a separate XML file.
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath
XMLFILE xmldocs MODIFIED BY XMLINSEPFILES XMLSAVESCHEMA
SELECT * FROM USER.T1
Example 4
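This example exports pieces of the XML documents by using an XQuery rather than exporting whole documents. A sketch of such a command, assuming the USER.T1 table above (the XQuery expression and output path are illustrative):
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath XMLFILE xmldocs
    XQUERY 'db2-fn:xmlcolumn("USER.T1.C4")/note/from/text()'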
Note: The result of this particular XQuery does not produce well-formed XML
documents. Therefore, the file exported above could not be directly imported into
an XML column.
If your LBAC credentials do not allow you to read a row, that row is not exported,
but no error is returned. If your LBAC credentials do not allow you to read a
column, the export utility fails, and an error (SQLSTATE 42512) is returned.
Examples
In the following examples, output is in DEL format and is written to the file
myfile.del. The data is exported from a table named REPS, which was created
with the following statement:
create table reps (row_label db2securitylabel,
id integer,
name char(30))
security policy data_access_policy
This example exports the values of the row_label column in the default format:
db2 export to myfile.del of del select * from reps
The data file is not very readable in most text editors because the values for the
row_label column are likely to contain several ASCII control characters.
The following example exports the values of the row_label column in the security
label string format:
db2 export to myfile.del of del select SECLABEL_TO_CHAR(row_label,'DATA_ACCESS_POLICY'),
    id, name from reps
To export a table, you must specify the PC/IXF file format. You can then re-create
your saved table (including its indexes) using the import utility in CREATE mode.
However, some information is not saved to the exported IXF file if any of the
following conditions exist:
v The index column names contain hexadecimal values of 0x2B or 0x2D.
v The table contains XML columns.
v The table is multidimensional clustered (MDC).
v The table contains a table partitioning key.
v The index name is longer than 128 bytes due to code page conversion.
v The table is protected.
v The EXPORT command contains action strings other than SELECT * FROM
tablename.
v You specify the METHOD N parameter for the export utility.
For a list of table attributes that are lost, see "Table import considerations." If any
information is not saved, warning SQL27984W is returned when the table is
re-created.
Note: Import's CREATE mode is being deprecated. Use the db2look utility to
capture and re-create your tables.
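For example, a sketch of capturing the DDL for a single table with db2look (the database and table names are illustrative):
db2look -d sample -e -t mytable -o mytable.ddl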
Index information
If the column names specified in the index contain either - or + characters, the
index information is not collected, and warning SQL27984W is returned. The
export utility completes its processing, and the data exported is unaffected.
However, the index information is not saved in the IXF file. As a result, you must
create the indexes separately using the db2look utility.
Space limitations
The export operation fails if the data that you are exporting exceeds the space
available on the file system on which the exported file is created. In this case, you
should limit the amount of data selected by specifying conditions on the WHERE
clause so that the exported file fits on the target file system. You can run the export
utility multiple times to export all of the data.
When working with typed tables, the export utility controls what is placed in the
output file; specify only the target table name and, optionally, the WHERE clause.
You can express subselect statements only by specifying the target table name and
the WHERE clause. You cannot specify a fullselect or select-statement when
exporting a hierarchy.
The mapping used for typed tables is known as the traverse order, the order of
proceeding top-to-bottom, left-to-right through all of the supertables and subtables
in the hierarchy. Before each typed row is written out during an export operation,
an identifier is translated into an index value. This index value can be any number
from one to the number of relevant types in the hierarchy. Index values are
generated by numbering each type when moving through the hierarchy in a
specific order—the traverse order. Figure 1 shows a hierarchy with four valid
traverse orders:
v Person, Employee, Manager, Architect, Student
v Person, Student, Employee, Manager, Architect
v Person, Employee, Architect, Manager, Student
v Person, Student, Employee, Architect, Manager
Figure 1. An example typed table hierarchy: PERSON (Person_t) is the root table; EMPLOYEE (Employee_t: SerialNum, Salary, REF(Department_t)) and STUDENT (Student_t: SerialNum, Marks) are subtables of PERSON; MANAGER (Manager_t: Bonus) and ARCHITECT (Architect_t: StockOption) are subtables of EMPLOYEE.
The traverse order is important when moving data between table hierarchies
because it determines where the data is moved in relation to other data. There are
two types of traverse order: default and user specified.
The default traverse order behaves differently when used with different file
formats. Exporting data to the PC/IXF file format creates a record of all relevant
types, their definitions, and relevant tables. The export utility also completes the
mapping of an index value to each table. When working with the PC/IXF file
format, you should use the default traverse order.
With the ASC, DEL, or WSF file format, the order in which the typed rows and the
typed tables are created could be different, even though the source and target
hierarchies might be structurally identical. This results in time differences that the
default traverse order identifies when proceeding through the hierarchies. The
creation time of each type determines the order used to move through the
hierarchy at both the source and the target when using the default traverse order.
Ensure that the creation order of each type in both the source and the target
hierarchies is identical and that there is structural identity between the source and
the target. If these conditions cannot be met, select a user-specified traverse order.
Although you determine the starting point and the path down the hierarchy when
defining the traverse order, remember that the subtables must be traversed in
pre-order fashion: each branch must be traversed to the end before the next branch
in the hierarchy can be started.
If you want to control the traverse order through the hierarchies, ensure that the
same traverse order is used for both the export and the import utilities.
Example 1
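A sketch of the command for this example, assuming a source database Source_db and the hierarchy rooted at the PERSON table shown in Figure 1 (the output file name is illustrative):
DB2 CONNECT TO Source_db
DB2 EXPORT TO entire_hierarchy.ixf OF IXF HIERARCHY STARTING Person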
Note that setting the parameter HIERARCHY STARTING to Person indicates that
the default traverse order is used, starting from the table PERSON.
Example 2
To export the entire hierarchy, but only the data for those people over the age of
20, you would enter the following commands:
DB2 CONNECT TO Source_db
DB2 EXPORT TO entire_hierarchy.del OF DEL HIERARCHY (Person,
Employee, Manager, Architect, Student) WHERE Age>=20
If the SELECT statement that you specify for the export operation is of the form
SELECT * FROM tablename and you do not use the METHOD option,
exporting identity column properties to IXF files is supported. You can then use
the REPLACE_CREATE and the CREATE options of the IMPORT command to
re-create the table, including its identity column properties. If you create the
exported IXF file from a table containing an identity column of type GENERATED
ALWAYS, the only way that you can successfully import the data file is to specify
the identityignore file type modifier during the import operation. Otherwise, all
rows are rejected (SQL3550W is issued).
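For example, a sketch of importing such a file into a table whose identity column is defined as GENERATED ALWAYS (the file and table names are illustrative):
db2 import from table1export.ixf of ixf modified by identityignore insert into table1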
Note: The CREATE and REPLACE_CREATE options of the IMPORT command are
deprecated and might be removed in a future release.
By default, LOBs are written to a single file, but you can also specify that the
individual LOBs are to be stored in separate files. The export utility generates a
LOB Location Specifier (LLS) to enable the storage of multiple LOBs in one file.
The LLS, which is written to the export output file, is a string that indicates where
the LOB data is stored within the file. The format of the LLS is
lobfilename.ext.nnn.mmm/, where lobfilename.ext is the name of the file that
contains the LOB, nnn is the offset of the LOB within the file (measured in bytes),
and mmm is the length of the LOB (measured in bytes). For example, an LLS of
db2exp.001.123.456/ indicates that the LOB is located in the file db2exp.001,
begins at an offset of 123 bytes into the file, and is 456 bytes long. If the indicated
size in the LLS is 0, the LOB is considered to have a length of 0. If the length is -1,
the LOB is considered to be NULL and the offset and file name are ignored.
If you don't want individual LOB data concatenated to the same file, use the
lobsinsepfiles file type modifier to write each LOB to a separate file.
Note: The IXF file format does not store the LOB options of the column, such as
whether or not the LOB column is logged. This means that the import utility
cannot re-create a table containing a LOB column that is defined to be 1 GB or
larger.
Example 1
The following example shows how to export LOBs (where the exported LOB files
have the specified base name lobs1) to a DEL file:
db2 export to myfile.del of del lobs to mylobs/
lobfile lobs1 modified by lobsinfile
select * from emp_photo
Example 2
The following example shows how to export LOBs to a DEL file, where each LOB
value is written to a separate file and lobfiles are written to two directories:
db2 export to myfile.del of del
lobs to /db2exp1/, /db2exp2/ modified by lobsinfile
select * from emp_photo
EXPORT
Exports data from a database to one of several external file formats. The user
specifies the data to be exported by supplying an SQL SELECT statement, or by
providing hierarchical information for typed tables.
Quick link to “File type modifiers for the export utility” on page 22.
Authorization
Required connection
Command syntax

EXPORT TO filename OF filetype
    [LOBS TO lob-path [, lob-path ...]]
    [XMLSAVESCHEMA]
    [METHOD N (column-name [, column-name ...])]
    { select-statement
    | XQUERY xquery-statement
    | HIERARCHY {STARTING sub-table-name | traversal-order-list} [WHERE ...] }

traversal-order-list:
    (sub-table-name [, sub-table-name ...])
Usage notes
v Be sure to complete all table operations and release all locks before starting an
export operation. This can be done by issuing a COMMIT after closing all
cursors opened WITH HOLD, or by issuing a ROLLBACK.
v Table aliases can be used in the SELECT statement.
v The messages placed in the message file include the information returned from
the message retrieval service. Each message begins on a new line.
v The export utility produces a warning message whenever a character column
with a length greater than 254 is selected for export to DEL format files.
But an export command using the same select clause will not:
db2 export to test.del of del select col2 from test
    where char(col2)='05/10/2005';
Number of rows exported: 0
Now, replacing the LOCALE date format with ISO format gives the expected
results:
db2 export to test.del of del select col2 from test
    where char(col2)='2005-05-10';
Number of rows exported: 3
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the
LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length
of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in
the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456
bytes long.
If you specify the “lobsinfile” modifier when using EXPORT, the LOB data is
placed in the locations specified by the LOBS TO clause. Otherwise the LOB data
is sent to the data file directory. The LOBS TO clause specifies one or more paths
to directories in which the LOB files are to be stored. There will be at least one
file per LOB path, and each file will contain at least one LOB. The LOBS TO or
LOBFILE options will implicitly activate the LOBSINFILE behavior.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0 length LOB. For null LOBs with a length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
xmlinsepfiles
    Each XQuery Data Model (XDM) instance is written to a separate file. By default, multiple values are concatenated together in the same file.
lobsinsepfiles
    Each LOB value is written to a separate file. By default, multiple values are concatenated together in the same file.
xmlnodeclaration
    XDM instances are written without an XML declaration tag. By default, XDM instances are exported with an XML declaration tag at the beginning that includes an encoding attribute.
xmlchar
    XDM instances are written in the character codepage. Note that the character codepage is the value specified by the codepage file type modifier, or the application codepage if it is not specified. By default, XDM instances are written out in Unicode.
Table 4. Valid file type modifiers for the export utility: DEL (delimited ASCII) file format

chardelx
    x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string (see note 2). If you want to explicitly specify the double quotation mark as the character string delimiter, it should be specified as follows:
        modified by chardel""
    The single quotation mark (') can also be specified as a character string delimiter as follows:
        modified by chardel''
codepage=x
    x is an ASCII character string. The value is interpreted as the code page of the data in the output data set. Converts character data to this code page from the application code page during the export operation.
    For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the range of x00 to x3F, inclusive. The codepage modifier cannot be used with the lobsinfile modifier.
coldelx
    x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column (see note 2).
    In the following example, coldel; causes the export utility to use the semicolon character (;) as a column delimiter for the exported data:
        db2 "export to temp of del modified by coldel;
            select * from staff where dept = 20"
decplusblank
    Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign.
decptx
    x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character (see note 2).
nochardel
    Column data will not be surrounded by character delimiters. This option should not be specified if the data is intended to be imported or loaded using DB2. It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption.
    This option cannot be specified with chardelx or nodoubledel. These are mutually exclusive options.
nodoubledel
    Suppresses recognition of double character delimiters (see note 2).
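The striplzeros file type modifier removes leading zeros from all exported decimal columns. As a sketch of the two operations referred to below (the table name decimaltable and its DECIMAL(31,2) column are assumptions for illustration):
db2 create table decimaltable ( c1 decimal( 31, 2 ) )
db2 insert into decimaltable values ( 1.10 )
db2 export to decdata.del of del select c1 from decimaltable
db2 export to decdata.del of del modified by striplzeros select c1 from decimaltable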
In the first export operation, the content of the exported file data will be
+00000000000000000000000000001.10. In the second operation, which is identical
to the first except for the striplzeros modifier, the content of the exported file
data will be +1.10.
The MMM element will produce the following values: 'Jan', 'Feb', 'Mar', 'Apr',
'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', and 'Dec'. 'Jan' is equal to month 1, and
'Dec' is equal to month 12.
For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the
range of x00 to x3F, inclusive.
Table 6. Valid file type modifiers for the export utility: WSF file format

1   Creates a WSF file that is compatible with Lotus 1-2-3 Release 1, or Lotus 1-2-3 Release 1a (see note 5). This is the default.
2   Creates a WSF file that is compatible with Lotus Symphony Release 1.0 (see note 5).
3   Creates a WSF file that is compatible with Lotus 1-2-3 Version 2, or Lotus Symphony Release 1.1 (see note 5).
4   Creates a WSF file containing DBCS characters.
Note:
1. The export utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted, the
export operation fails, and an error code is returned.
2. Delimiter considerations for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
3. The export utility normally writes
v date data in YYYYMMDD format
v char(date) data in "YYYY-MM-DD" format
v time data in "HH.MM.SS" format
v time stamp data in "YYYY-MM-DD-HH.MM.SS.uuuuuu" format
Data contained in any datetime columns specified in the SELECT statement
for the export operation will also be in these formats.
4. For time stamp formats, care must be taken to avoid ambiguity between the
month and the minute descriptors, since they both use the letter M. A month
field must be adjacent to other date fields. A minute field must be adjacent to
other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
5. These files can also be directed to a specific product by specifying an L for
Lotus 1-2-3, or an S for Symphony in the filetype-mod parameter string. Only
one value or product designator can be specified.
The LOBSINFILE file type modifier must be specified in order to have LOB
files generated.
11. The export utility appends a numeric identifier to each LOB file or XML file.
The identifier starts as a 3-digit, zero-padded sequence value, starting at .001.
After the 999th LOB file or XML file, the identifier is no longer padded with
zeros (for example, the 1000th LOB file or XML file has an extension of
.1000.lob or .1000.xml). A generated LOB file therefore has a name in the
format myfile.del.001.lob.
12. It is possible to have the export utility export XDM instances that are not
well-formed documents by specifying an XQuery. However, you will not be
able to import or load these exported documents directly into an XML
column, since XML columns can only contain complete documents.
db2Export - Export data from a database
Authorization
Required connection
Usage notes
Before starting an export operation, you must complete all table operations and
release all locks in one of two ways:
v Close all open cursors that were defined with the WITH HOLD clause, and
commit the data changes by executing the COMMIT statement.
v Roll back the data changes by executing the ROLLBACK statement.
The messages placed in the message file include the information returned from the
message retrieval service. Each message begins on a new line.
If the export utility produces warnings, the messages are written to a message file,
or to standard output if a message file is not specified.
If the db2uexpm.bnd module or any other shipped .bnd files are bound manually,
the format option on the binder must not be used.
PC/IXF import should be used to move data between databases. If character data
containing row separators is exported to a delimited ASCII (DEL) file and
processed by a text transfer program, fields containing the row separators will
shrink or expand.
The export utility will not create multiple-part PC/IXF files when invoked from an
AIX system.
Index definitions for a table are included in the PC/IXF file when the contents of a
single database table are exported to a PC/IXF file with a piActionString
parameter beginning with SELECT * FROM tablename, and the piDataDescriptor
parameter specifying default names. Indexes are not saved for views, or if the
SELECT clause of the piActionString includes a join. A WHERE clause, a GROUP
BY clause, or a HAVING clause in the piActionString parameter will not prevent
the saving of indexes. In all of these cases, when exporting from typed tables, the
entire hierarchy must be exported.
The export utility will store the NOT NULL WITH DEFAULT attribute of the table
in an IXF file if the SELECT statement provided is in the form: SELECT * FROM
tablename.
For file formats other than IXF, it is recommended that the traversal order list be
specified, because it tells DB2 how to traverse the hierarchy, and what sub-tables to
export. If this list is not specified, all tables in the hierarchy are exported, and the
default traverse order is used, which is the order given by the OUTER function.
Note: Use the same traverse order during an import operation. The load utility
does not support loading hierarchies or sub-hierarchies.
CONTINUE EXPORT
STOP EXPORT
If this parameter is NULL, or a value for dcoldata has not been specified,
the utility uses the column names from the database table.
msgfile
File, path, or device name where error and warning messages are to be
sent.
number
A host variable that will contain the number of exported rows.
Example 2
The following example shows how to export the information about employees in
Department 20 from the STAFF table in the SAMPLE database (to which the user
must be connected) to awards.ixf, with the output in IXF format:
db2 export to awards.ixf of ixf messages msgs.txt select * from staff
where dept = 20
Example 3
The following example shows how to export LOBs to a DEL file:
db2 export to myfile.del of del lobs to mylobs/
lobfile lobs1, lobs2 modified by lobsinfile
select * from emp_photo
Example 4
The following example shows how to export LOBs to a DEL file, specifying a
second directory for files that might not fit into the first directory:
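A sketch of such a command, assuming the same EMP_PHOTO table as in the preceding examples (the two LOB paths are illustrative):
db2 export to myfile.del of del
    lobs to /db2exp1/, /db2exp2/ modified by lobsinfile
    select * from emp_photo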
Example 5
The following example shows how to export data to a DEL file, using a single
quotation mark as the string delimiter, a semicolon as the column delimiter, and a
comma as the decimal point. The same convention should be used when importing
data back into the database:
db2 export to myfile.del of del
modified by chardel'' coldel; decpt,
select * from staff
Import overview
Like export, import is a relatively simple data movement utility. It can be activated
through the Control Center, by issuing CLP commands, by calling the
ADMIN_CMD stored procedure, or by calling its API, db2Import, through a user
application.
There are a number of data formats that import supports, as well as features that
can be used with import:
v Import supports IXF, WSF, ASC, and DEL data formats.
v Import can be used with file type modifiers to customize the import operation.
v Import can be used to move hierarchical data and typed tables.
v Import logs all activity, updates indexes, verifies constraints, and fires triggers.
v Import allows you to specify the names of the columns within the table or view
into which the data is to be inserted.
v Import can be used with DB2 Connect.
Import modes
Import has five modes which determine the method in which the data is imported.
The first three, INSERT, INSERT_UPDATE, and REPLACE are used when the
target tables already exist. All three support IXF, WSF, ASC, and DEL data formats.
However, only INSERT and INSERT_UPDATE can be used with nicknames.
Table 11. Overview of INSERT, INSERT_UPDATE, and REPLACE import modes

Mode             Best practice usage
INSERT           Inserts input data into the target table without changing existing data.
INSERT_UPDATE    Updates rows with matching primary key values with the values of input rows; where there is no matching row, inserts the imported row into the table.
REPLACE          Deletes all existing data and inserts the imported data, while keeping table and index definitions.
The other two modes, REPLACE_CREATE and CREATE, are used when the target
tables do not exist. They can only be used with input files in the PC/IXF format,
which contains a structured description of the table that is to be created. Imports
cannot be performed in these modes if the object table has any dependents other
than itself.
The number of steps and the amount of time required for an import depend on the
amount of data being moved and the options that you specify. An import
operation follows these steps:
1. Locking tables
Import acquires either an exclusive (X) lock or a nonexclusive (IX) lock on
existing target tables, depending on whether you allow concurrent access to the
table.
2. Locating and retrieving data
Import uses the FROM clause to locate the input data. If your command
indicates that XML or LOB data is present, import will locate this data.
3. Inserting data
Import either replaces existing data or adds new rows of data to the table.
4. Checking constraints and firing triggers
As the data is written, import ensures that each inserted row complies with the
constraints defined on the target table. Information about rejected rows is
written to the messages file. Import also fires existing triggers.
5. Committing the operation
Import saves the changes made and releases the locks on the target table. You
can also specify that periodic commits take place during the import.
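For example, a sketch of an import that commits after every 1000 inserted rows (the file, message file, and table names are illustrative):
db2 import from myfile.ixf of ixf commitcount 1000 messages msgs.txt insert into newstaff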
Additional options
There are a number of options that allow you to customize an import operation.
You can specify file type modifiers in the MODIFIED BY clause to change the
format of the data, tell the import utility what to do with the data, and to improve
performance.
The import utility, by default, does not perform commits until the end of a
successful import, except in the case of some ALLOW WRITE ACCESS imports.
There are a few ways to improve import's performance. As the import utility is an
embedded SQL application and does SQL fetches internally, optimizations that
apply to SQL operations apply to import as well. You can use the compound file
type modifier to insert a specified number of rows at a time, rather
than the default row-by-row insertion. If you anticipate that a large number of
warnings will be generated (and, therefore, slow down the operation) during the
import, you can also specify the norowwarnings file type modifier to suppress
warnings about rejected rows.
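For example, a sketch combining both modifiers (the file and table names are illustrative):
db2 import from myinput.del of del modified by compound=100 norowwarnings insert into mytable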
Messages file
During an import, standard ASCII text message files are written to contain the
error, warning, and informational messages associated with that operation. If the
utility is invoked through the application programming interface (API) db2Import,
you must specify the name of these files in advance with the MESSAGES
parameter, otherwise it is optional. The messages file is a convenient way of
monitoring the progress of an import, as you can access it while the import is in
progress. In the event of a failed import operation, message files can be used to
determine a restarting point by indicating the last row that was successfully
imported.
Users can access only those objects for which they have the appropriate
authorization; that is, the required privilege or authority.
With SYSADM or DBADM authority, you can perform any type of import
operation. The table below lists the other authorities on each participating table,
view or nickname that enable you to perform the corresponding type of import.
Table 13. Authorities required to perform import operations

Mode             Required authority
INSERT           CONTROL, or INSERT and SELECT
INSERT_UPDATE    CONTROL, or INSERT, SELECT, UPDATE, and DELETE
REPLACE          CONTROL, or INSERT, SELECT, and DELETE
If you want to import to a hierarchy, the required authority also depends on the
mode. For existing hierarchies, CONTROL privilege on every subtable in the
hierarchy is sufficient for a REPLACE operation. For hierarchies that don't exist,
CONTROL privilege on every subtable in the hierarchy, along with CREATETAB
and USE, is sufficient for a REPLACE_CREATE operation.
In addition, there are a few considerations for importing into tables with label-based
access control (LBAC) security labels defined on them. To import data into a table
that has protected columns, the session authorization ID must have LBAC
credentials that allow write access to all protected columns in the table. To import
data into a table that has protected rows, the session authorization ID must have
been granted a security label for write access that is part of the security policy
protecting the table.
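For example, a security administrator might grant such a label with a statement like the following sketch (the policy, label, and user names are illustrative):
db2 grant security label data_access_policy.writelabel to user importuser for write access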
Importing data
The import utility inserts data from an external file with a supported file format
into a table, hierarchy, view or nickname. The load utility is a faster alternative, but
the load utility does not support loading data at the hierarchy level.
Before invoking the import utility, you must be connected to (or be able to
implicitly connect to) the database into which the data will be imported. If implicit
connect is enabled, a connection to the default database is established. Utility
access to DB2 for Linux, UNIX, or Windows database servers from DB2 for Linux,
UNIX, or Windows clients must be a direct connection through the engine and not
through a DB2 Connect gateway or loop back environment. Since the utility will
issue a COMMIT or a ROLLBACK statement, you should complete all transactions
and release all locks by issuing a COMMIT statement or a ROLLBACK operation
before invoking import.
Note: The CREATE and REPLACE_CREATE options of the IMPORT command are
deprecated and might be removed in a future release.
The following limitation applies to the import utility: If the volume of output
messages generated by an import operation against a remote database exceeds 60
KB, the utility will keep the first 30 KB and the last 30 KB.
The import utility can be invoked through the command line processor (CLP), the
Import Table notebook in the Control Center, or by calling the application
programming interface (API) db2Import from a client application.
Procedure
1. From the Control Center, expand the object tree until you find the Tables folder.
2. Click on the Tables folder. Any existing tables are displayed in the pane on the
right side of the window (the contents pane).
3. Right-click on the table you want in the contents pane, and select Import from
the pop-up menu. The Import Table notebook opens.
Detailed information about the Import Table notebook is provided through the
Control Center online help facility.
A very simple import operation requires you to specify only an input file, a file
format, an import mode, and a target table (or the name of the table that is to be
created).
For example, to import data from the CLP, enter the IMPORT command:
db2 import from filename of fileformat import_mode into table
where filename is the name of the input file that contains the data you want to
import, fileformat is the file format, import_mode is the mode, and table is the name
of the table that you want to insert the data into.
However, you might also want to specify a messages file to which warning and
error messages will be written. To do that, add the MESSAGES parameter and a
message file name so the command is:
db2 import from filename of fileformat messages messagefile import_mode into table
When importing data into an XML table column, you can use the XML FROM
option to specify the paths of the input XML data file or files. For example, for an
XML file "/home/user/xmlpath/xmldocs.001.xml" that had previously been
exported, the following command could be used to import the data back into the
table.
IMPORT FROM t1export.del OF DEL XML FROM /home/user/xmlpath INSERT INTO USER.T1
You can use the XMLPARSE option to specify whether whitespace in the imported
XML documents is preserved or stripped. In the following example, all imported
XML documents are validated against XML schema information that was saved
when the XML documents were exported, and these documents are parsed with
whitespace preserved.
IMPORT FROM t1export.del OF DEL XML FROM /home/user/xmlpath XMLPARSE PRESERVE
WHITESPACE XMLVALIDATE USING XDS INSERT INTO USER.T1
For import to be able to re-create the table, the export operation must meet some
requirements. The original table must have been exported to an IXF file. If you
export files with DEL or ASC file formats, the output files do not contain
descriptions of the target table, but they contain the record data. To re-create a
table with data stored in these file formats, create the target table, then use the
load or import utility to populate the table from these files. You can use the
db2look utility to capture the original table definitions and to generate the
corresponding data definition language (DDL). As well, the SELECT statement
used during the export can only contain certain action strings. For example, no
column names can be used in the SELECT clause and only SELECT * is permitted.
Note: Import's CREATE mode is being deprecated. Use the db2look utility to
capture and re-create your tables.
Retained attributes
The re-created table will retain the following attributes of the original table:
v The primary key name, and definition
v Column information, including:
– Column name
– Column data type, including user-defined distinct types, which are preserved
as their base type
– Identity properties
– Lengths (except for lob_file types)
– Code page (if applicable)
– Identity options
– Whether the column is defined as nullable or not nullable
– Default values for constants, if any, but not other types of default values
v Index information, including:
– Index name
– Index creator name
– Column names, and whether each column is sorted in ascending or
descending order
– Whether the index is defined as unique
– Whether the index is clustered
– Whether the index allows reverse scans
– PCTFREE values
– MINPCTUSED values
Note: No index information is retained if the column names in the index contain
the characters - or +, in which case SQL27984W is returned.
Lost attributes
The re-created table does not retain several attributes of the original table,
including:
If the import fails and SQL3311N is returned, you can still re-create the table using
the file type modifier forcecreate. This modifier allows you to create the table
with missing or limited information.
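For example, a sketch of retrying such an import with the forcecreate modifier (the file and table names are illustrative):
db2 import from mytable.ixf of ixf modified by forcecreate create into mytable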
Table re-creation
The type of import you are able to perform depends on the file format of the input
file. When working with ASC, DEL, or WSF data, the target table or hierarchy
must exist before the data can be imported. However, data from a PC/IXF file can
be imported even if the table or hierarchy does not already exist if you specify an
import CREATE operation. It must be noted that if the CREATE option is specified,
import cannot alter subtable definitions.
Traverse order
The traverse order contained in the input file enables the hierarchies in the data to
be maintained. Therefore, the same traverse order must be used when invoking the
export utility and the import utility.
For the PC/IXF file format, you need only specify the target subtable name; the
default traverse order stored in the file is used.
When using options other than CREATE with typed tables, the traverse order list
enables you to specify the traverse order. This user-specified traverse order must
match the one used during the export operation. The import utility guarantees the
accurate movement of data to the target database given the following:
v An identical definition of subtables in both the source and the target databases
v An identical hierarchical relationship among the subtables in both the source
and target databases
v An identical traverse order
Although you determine the starting point and the path down the hierarchy when
defining the traverse order, each branch must be traversed to the end before the
next branch in the hierarchy can be started. The import utility looks for violations
of this condition within the specified traverse order.
Examples
Examples in this section are based on the following hierarchical structure with four
valid traverse orders:
v Person, Employee, Manager, Architect, Student
v Person, Student, Employee, Manager, Architect
v Person, Employee, Architect, Manager, Student
v Person, Student, Employee, Architect, Manager
In this hierarchy, Person is the root table. Employee (of type Employee_t, with
attributes SerialNum, Salary, and REF(Department_t)) and Student (of type
Student_t, with attributes SerialNum and Marks) are subtables of Person; Manager
(of type Manager_t, with attribute Bonus) and Architect (of type Architect_t, with
attribute StockOption) are subtables of Employee.
Example 1
To re-create an entire hierarchy (contained in the data file entire_hierarchy.ixf
created by a prior export operation) using import, you would enter the following
commands:
DB2 CONNECT TO Target_db
DB2 IMPORT FROM entire_hierarchy.ixf OF IXF CREATE INTO
HIERARCHY STARTING Person AS ROOT TABLE
Each type in the hierarchy is created if it does not exist. If these types already
exist, they must have the same definition in the target database as in the source
database. An SQL error (SQL20013N) is returned if they are not the same. Since a
new hierarchy is being created, none of the subtables defined in the data file being
moved to the target database (Target_db) can exist. Each of the tables in the source
database hierarchy is created. Data from the source database is imported into the
correct subtables of the target database.
Example 2
To re-create the entire hierarchy of the source database and import it to the target
database, while only keeping selected data, you would enter the following
commands:
DB2 CONNECT TO Target_db
DB2 IMPORT FROM entire_hierarchy.del OF DEL INSERT INTO (Person,
Employee(Salary), Architect) IN HIERARCHY (Person, Employee,
Manager, Architect, Student)
The target tables PERSON, EMPLOYEE, and ARCHITECT must all exist. Data is
imported into the PERSON, EMPLOYEE, and ARCHITECT subtables. That is, the
following will be imported:
v All columns in PERSON into PERSON
v All columns in PERSON plus SALARY in EMPLOYEE into EMPLOYEE
v All columns in PERSON plus SALARY in EMPLOYEE, plus all columns in
ARCHITECT into ARCHITECT
Example 3
This example shows how to export from a regular table, and import as a single
subtable in a hierarchy. The EXPORT command operates on regular (non-typed)
tables, so there is no Type_id column in the data file. The file type modifier
no_type_id is used to indicate this, so that the import utility does not expect the
first column to be the Type_id column.
DB2 CONNECT TO Source_db
DB2 EXPORT TO Student_sub_table.del OF DEL SELECT * FROM
Regular_Student
DB2 CONNECT TO Target_db
DB2 IMPORT FROM Student_sub_table.del OF DEL METHOD P(1,2,3,5,4)
MODIFIED BY NO_TYPE_ID INSERT INTO HIERARCHY (Student)
In this example, the target table STUDENT must exist. Since STUDENT is a
subtable, the modifier no_type_id is used to indicate that there is no Type_id in the
first column. However, you must ensure that there is an existing Object_id column,
in addition to all of the other attributes that exist in the STUDENT table. Object-id
is expected to be the first column in each row imported into the STUDENT table.
The METHOD clause reverses the order of the last two attributes.
If you do not have valid LBAC credentials, the import fails and an error
(SQLSTATE 42512) is returned. In cases where the input data does not contain a
security label or that security label is not in its internal binary format, you can use
several file type modifiers to allow your import to proceed.
When you import data into a table with protected rows, the target table has one
column with a data type of DB2SECURITYLABEL. If the input row of data does
not contain a value for that column, that row is rejected unless the usedefaults file
type modifier is specified in the import command, in which case the security label
you hold for write access from the security policy protecting the table is used. If
you do not hold a security label for write access, the row is rejected and processing
continues on to the next row.
When you import data into a table that has protected rows and the input data does
include a value for the column with a data type of DB2SECURITYLABEL, the
same rules are followed as when you insert data into that table. If the security
label protecting the row being imported (the one in that row of the data file) is one
that you are able to write to, then that security label is used to protect the row. (In
other words, it is written to the column that has a data type of
DB2SECURITYLABEL.)
Delimiter considerations
When importing data into a column with a data type of DB2SECURITYLABEL, the
value in the data file is assumed by default to be the actual bytes that make up the
internal representation of that security label. However, some raw data might
contain newline characters which could be misinterpreted by the IMPORT
command as delimiting the row. If you have this problem, use the delprioritychar
file type modifier to ensure that the character delimiter takes precedence over the
row delimiter. When you use delprioritychar, any record or column delimiters
that are contained within character delimiters are not recognized as being
delimiters. Using the delprioritychar file type modifier is safe to do even if none
of the values contain a newline character, but it does slow the import down
slightly.
If the data being imported is in ASC format, you might want to take an extra step
in order to prevent any trailing white space from being included in the imported
security labels and security label names. The ASC format uses column positions as
delimiters, so this might occur when importing into variable-length fields. Use the
striptblanks file type modifier to truncate any trailing blank spaces.
You can also import a data file in which the values of the security label column are
security label names. To import this sort of file you must use the file type modifier
seclabelname. When you use seclabelname, all values for columns with a data type
of DB2SECURITYLABEL are assumed to be string constants containing the names
of existing security labels. If no security label exists with the indicated name for
the security policy protecting the table, the row is not inserted and a warning
(SQLSTATE 01H53) is returned.
For all examples, the input data file myfile.del is in DEL format. All are importing
data into a table named REPS, which was created with this statement:
create table reps (row_label db2securitylabel,
id integer,
name char(30))
security policy data_access_policy
For this example, the input file is assumed to contain security labels in the default
format:
db2 import from myfile.del of del modified by delprioritychar insert into reps
You can also import a data file in which the values of the security label column
are string representations of security labels rather than the internal binary format;
to import such a file, use the seclabelchar file type modifier. For this example, the
input file is assumed to contain security labels in the security label string format:
db2 import from myfile.del of del modified by seclabelchar insert into reps
For this example, the input file is assumed to contain security labels names for the
security label column:
db2 import from myfile.del of del modified by seclabelname insert into reps
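For fixed-format input, a sketch (the file name and column positions are
hypothetical) shows the striptblanks modifier discussed earlier combined with
seclabelname for an ASC import into REPS:
db2 import from myfile.asc of asc method l(1 20, 21 28, 29 58)
   modified by seclabelname striptblanks insert into reps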
Buffered-insert imports
In a partitioned database environment, the import utility can be enabled to use
buffered inserts. This reduces the messaging that occurs when data is imported,
resulting in better performance.
The buffered inserts option should only be enabled if you are not concerned about
error reporting, since details about a failed buffered insert are not returned.
When buffered inserts are used, import sets a default WARNINGCOUNT value to
1. As a result, the operation will fail if any rows are rejected. If a record is rejected,
the utility will roll back the current transaction. The number of committed records
can be used to determine which records were successfully inserted into the
database. The number of committed records can be non zero only if the
COMMITCOUNT option was specified.
Use the DB2 bind utility to request buffered inserts. The import package,
db2uimpm.bnd, must be rebound against the database using the INSERT BUF
option. For example:
db2 connect to your_database
db2 bind db2uimpm.bnd insert buf
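A complete sketch of the sequence (the data file and table names are illustrative)
might look like the following; rebinding db2uimpm.bnd afterward without the
INSERT BUF option returns the package to its default, unbuffered behavior:
db2 connect to your_database
db2 bind db2uimpm.bnd insert buf
db2 import from largefile.del of del commitcount 10000 insert into sales
db2 bind db2uimpm.bnd
db2 connect reset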
If no identity-related file type modifiers are used, the utility works according to the
following rules:
v If the identity column is GENERATED ALWAYS, an identity value is generated
for a table row whenever the corresponding row in the input file is missing a
value for the identity column, or a NULL value is explicitly given. If a
non-NULL value is specified for the identity column, the row is rejected
(SQL3550W).
v If the identity column is GENERATED BY DEFAULT, the import utility makes
use of user-supplied values, if they are provided; if the data is missing or
explicitly NULL, a value is generated.
The import utility does not perform any extra validation of user-supplied identity
values beyond what is normally done for values of the identity column's data type
(that is, SMALLINT, INT, BIGINT, or DECIMAL). Duplicate values will not be
reported. In addition, the compound=x modifier cannot be used when importing
data into a table with an identity column.
There are two ways you can simplify the import of data into tables that contain an
identity column: the identitymissing and the identityignore file type modifiers.
A user might want to import data from a file (import.del) into TABLE1, and this
data might have been exported from a table that does not have an identity column.
The following is an example of such a file:
Robert, 45.2, J
Mike, 76.9, K
Leo, 23.4, I
One way to import this file would be to explicitly list the columns to be imported
through the IMPORT command as follows:
db2 import from import.del of del replace into table1 (c1, c3, c4)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of importing the file is to use the
identitymissing file type modifier as follows:
db2 import from import.del of del modified by identitymissing
replace into table1
The input file might also contain values for the identity column, for example 1, 2,
and 3. If these user-supplied values are not to be used for the identity column, the
user could issue the following IMPORT command:
db2 import from import.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
Again, this approach might be cumbersome and prone to error if the table has
many columns. The identityignore modifier simplifies the syntax as follows:
db2 import from import.del of del modified by identityignore
replace into table1
Note: The CREATE and REPLACE_CREATE options of the IMPORT command are
deprecated and might be removed in a future release.
If no generated column-related file type modifiers are used, the import utility
works according to the following rules:
v A value is generated for a generated column whenever the corresponding row in
the input file is missing a value for the column, or a NULL value is explicitly
given. If a non-NULL value is supplied for a generated column, the row is
rejected (SQL3550W).
v If the server generates a NULL value for a generated column that is not nullable,
the row of data to which this field belongs is rejected (SQL0407N). This could
happen, for example, if a non-nullable generated column were defined as the
sum of two table columns that have NULL values supplied to them in the input
file.
There are two ways you can simplify the import of data into tables that contain a
generated column: the generatedmissing and the generatedignore file type
modifiers.
A user might want to import data from a file (import.del) into TABLE1, and this
data might have been exported from a table that does not have any generated
columns. The following is an example of such a file:
1, 5, J
2, 6, K
3, 7, I
One way to import this file would be to explicitly list the columns to be imported
through the IMPORT command as follows:
db2 import from import.del of del replace into table1 (c1, c2, c3)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of importing the file is to use the
generatedmissing file type modifier as follows:
db2 import from import.del of del modified by generatedmissing
replace into table1
The input file might also contain values for the generated columns, for example
10, 11, and 12 (for g1), and 15, 16, and 17 (for g2). These user-supplied, non-NULL
values result in the rows being rejected (SQL3550W). To avoid this, the user could
issue the following IMPORT command:
db2 import from import.del of del method P(1, 2, 5)
replace into table1 (c1, c2, c3)
Again, this approach might be cumbersome and prone to error if the table has
many columns. The generatedignore modifier simplifies the syntax as follows:
db2 import from import.del of del modified by generatedignore
replace into table1
For an INSERT_UPDATE, if the generated column is also a primary key and the
generatedignore modifier is specified, the IMPORT command honors the
generatedignore modifier. The IMPORT command does not substitute the
user-supplied value for this column in the WHERE clause of the UPDATE
statement.
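For instance, a minimal sketch of such an operation (assuming the TABLE1 of the
previous examples has a generated column that is also its primary key):
db2 import from import.del of del modified by generatedignore
   insert_update into table1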
The import utility, by default, treats data in the input file as data to load into the
column. However, when large object (LOB) data is stored in separate files rather
than in the main input data file, you must specify the lobsinfile file type modifier.
The LOBS FROM clause implicitly activates lobsinfile. The LOBS FROM clause
conveys to the import utility the list of paths to search for the LOB files while
importing the data. If the LOBS FROM option is not specified, the LOB files to import
are assumed to reside in the same path as the input relational data file.
For an import in CREATE mode, you can specify that the LOB data be created and
stored in a separate table space by using the LONG IN clause.
The following example shows how you would import a DEL file that has its
LOBs stored in separate files:
IMPORT FROM inputfile.del OF DEL
LOBS FROM /tmp/data
MODIFIED BY lobsinfile
INSERT INTO newtable
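For a CREATE-mode import (IXF input), a sketch using the LONG IN clause might
look like the following; the table space names DATA_TS and LOB_TS are
hypothetical:
IMPORT FROM inputfile.ixf OF IXF
LOBS FROM /tmp/data
MODIFIED BY lobsinfile
CREATE INTO newtable IN data_ts LONG IN lob_ts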
When import uses a stored procedure, messages are created in the message file
using the default language installed on the server. The messages are in the
language of the application if the language at the client and the server are the
same.
If you receive an error about writing or opening data on the server, ensure that:
v The directory exists.
v There is sufficient disk space for the files.
v The instance owner has write permission in the directory.
In ALLOW NO ACCESS mode, import acquires an exclusive (X) lock on the target
table before inserting any rows. Holding a lock on a table has two implications:
v First, if there are other applications holding a table lock or row locks on the
import target table, the import utility waits for those applications to commit or
roll back their changes.
v Second, while import is running, any other application requesting locks waits for
the import operation to complete.
Note: You can specify a locktimeout value, which prevents applications (including
the import utility) from waiting indefinitely for a lock.
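For example, a sketch of setting a 60-second lock timeout for a database named
MYDB (the database name is illustrative):
db2 update db cfg for mydb using locktimeout 60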
By requesting an exclusive lock at the beginning of the operation, import prevents
deadlocks from occurring as a result of other applications working and holding
row locks on the same target table.
In ALLOW WRITE ACCESS mode, the import utility acquires a nonexclusive (IX)
lock on the target table. Holding this lock on the table has the following
implications:
v If there are other applications holding an incompatible table lock, the import
utility does not start inserting data until all of these applications commit or roll
back their changes.
v While import is running, any other application requesting an incompatible table
lock waits until the import commits or rolls back the current transaction. Note
that import's table lock does not persist across a transaction boundary. As a
result, online import has to request and potentially wait for a table lock after
every commit.
v If there are other applications holding an incompatible row lock, the import
utility stops inserting data until all of these applications commit or roll back
their changes.
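As an illustrative sketch (the file and table names are hypothetical), an online
import that leaves the target table available to concurrent readers and writers
could be issued as follows:
db2 import from newrows.del of del allow write access commitcount automatic
   insert into appschema.orders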
Reference - Import
IMPORT
Inserts data from an external file with a supported file format into a table,
hierarchy, view or nickname. LOAD is a faster alternative, but the load utility does
not support loading data at the hierarchy level.
Quick link to “File type modifiers for the import utility” on page 79.
Authorization
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT and SELECT privilege on each participating table or view
v IMPORT to an existing table using the INSERT_UPDATE option, requires one of
the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT, SELECT, UPDATE and DELETE privilege on each participating table
or view
v IMPORT to an existing table using the REPLACE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on the table or view
– INSERT, SELECT, and DELETE privilege on the table or view
v IMPORT to a new table using the CREATE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
Required connection
Command syntax
IMPORT FROM filename OF filetype
   [LOBS FROM lob-path [, lob-path ...]] [XML FROM xml-path [, xml-path ...]]
   [MODIFIED BY filetype-mod ...]
   [METHOD { L (column-start column-end [, ...]) |
             N (column-name [, ...]) |
             P (column-position [, ...]) }]
   [XMLPARSE {STRIP | PRESERVE} WHITESPACE]
   [XMLVALIDATE USING { XDS [DEFAULT schema-sqlid] [IGNORE (schema-sqlid [, ...])]
                            [MAP ((schema-sqlid, schema-sqlid) [, ...])] |
                        SCHEMA schema-sqlid |
                        SCHEMALOCATION HINTS }]
   [ALLOW NO ACCESS | ALLOW WRITE ACCESS]
   [COMMITCOUNT {n | AUTOMATIC}] [RESTARTCOUNT n | SKIPCOUNT n] [ROWCOUNT n]
   [WARNINGCOUNT n] [NOTIMEOUT] [MESSAGES message-file]
   { INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE } INTO table-name
         [(insert-column [, ...])] [hierarchy description]
   | CREATE INTO table-name [(insert-column [, ...])]
         [hierarchy description [AS ROOT TABLE | UNDER sub-table-name]]
         [tblspace-specs]
hierarchy description:
   { ALL TABLES | sub-table-list } [IN] HIERARCHY
         { STARTING sub-table-name | traversal-order-list }
sub-table-list:
   ( sub-table-name [(insert-column [, ...])] [, ...] )
traversal-order-list:
   ( sub-table-name [, ...] )
tblspace-specs:
   IN tablespace-name [INDEX IN tablespace-name] [LONG IN tablespace-name]
Command parameters
ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the
default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table
is acquired before any rows are inserted. This prevents concurrent
applications from accessing table data. This is the default import behavior.
ALLOW WRITE ACCESS
Runs import in the online mode. An intent exclusive (IX) lock on the target
table is acquired when the first row is inserted. This allows concurrent
readers and writers to access table data. Online mode is not compatible
with the REPLACE, CREATE, or REPLACE_CREATE import options.
Online mode is not supported in conjunction with buffered inserts. The
import operation will periodically commit inserted data to prevent lock
escalation to a table lock and to avoid running out of active log space.
These commits will be performed even if the COMMITCOUNT option
was not used. During each commit, import will lose its IX table lock, and
will attempt to reacquire it after the commit. This parameter is required
when you import to a nickname and COMMITCOUNT must be specified
with a valid number (AUTOMATIC is not considered a valid option).
AS ROOT TABLE
Creates one or more sub-tables as a stand-alone table hierarchy.
COMMITCOUNT n | AUTOMATIC
Performs a COMMIT after every n records are imported. When a number n
is specified, import performs a COMMIT after every n records are
imported. When compound inserts are used, a user-specified commit
frequency of n is rounded up to the first integer multiple of the compound
count value. When AUTOMATIC is specified, import internally determines
when a commit needs to be performed. The utility will commit for either
one of two reasons:
v to avoid running out of active log space
v to avoid lock escalation from row level to table level
If the ALLOW WRITE ACCESS option is specified, and the
COMMITCOUNT option is not specified, the import utility will perform
commits as if COMMITCOUNT AUTOMATIC had been specified.
The ability of the import operation to avoid running out of active log space
is affected by the DB2 registry variable
DB2_FORCE_APP_ON_MAX_LOG:
v If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the
COMMITCOUNT AUTOMATIC command option is specified, the
import utility will be able to automatically avoid running out of active
log space.
v If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the
COMMITCOUNT n command option is specified, the import utility will
Note: If the data was exported from an MVS™ host database, and it
contains LONGVAR fields whose lengths, calculated on the page size, are
more than 254, CREATE might fail because the rows are too long. See
“Imported table re-creation” for a list of restrictions. In this case, the table
should be created manually, and IMPORT with INSERT should be
invoked, or, alternatively, the LOAD command should be used.
DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is
specified. The schema specified through the DEFAULT clause identifies a
schema to use for validation when the XML Data Specifier (XDS) of an
imported XML document does not contain an SCH attribute identifying an
XML Schema.
The DEFAULT clause takes precedence over the IGNORE and MAP
clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP
specifications will be ignored.
FROM filename
Specifies the name of the file that contains the data to be imported.
HIERARCHY
Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is
specified. The IGNORE clause specifies a list of one or more schemas to
ignore if they are identified by an SCH attribute.
Note: Specifying which table space will contain an index can only be done
when the table is created.
insert-column
Specifies the name of a column in the table or the view into which data is
to be inserted.
INSERT
Adds the imported data to the table without changing the existing table
data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows
(of the target table) with matching primary keys.
INTO table-name
Specifies the database table into which the data is to be imported. This
table cannot be a system table, a declared temporary table or a summary
table.
One can use an alias for INSERT, INSERT_UPDATE, or REPLACE, except
in the case of an earlier server, when the fully qualified or the unqualified
table name should be used. A qualified table name is in the form:
schema.tablename. The schema is the user name under which the table was
created.
LOBS FROM lob-path
The names of the LOB data files are stored in the main data file (ASC,
DEL, or IXF), in the column that will be loaded into the LOB column. The
maximum number of paths that can be specified is 999. This will implicitly
activate the LOBSINFILE behavior.
This parameter is not valid when you import to a nickname.
METHOD
L Specifies the start and end column numbers from which to import
data. A column number is a byte offset from the beginning of a
row of data, numbered starting from 1.
Note: This method can only be used with ASC files, and is the
only valid option for that file type.
N Specifies the names of the columns in the data file to be imported.
The case of these column names must match the case of the
corresponding names in the system catalogs. Each table column
that is not nullable should have a corresponding entry in the
METHOD N list. For example, given data fields F1, F2, F3, F4, F5,
and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT
NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid
request, while method N (F2, F1) is not valid.
P Specifies the field numbers (numbered from 1) of the input data
fields to be imported.
Note: This method can only be used with IXF or DEL files, and is
the only valid option for the DEL file type.
MODIFIED BY filetype-mod
Specifies file type modifier options. See “File type modifiers for the import
utility” on page 79.
NOTIMEOUT
Specifies that the import utility will not time out while waiting for locks.
This option supersedes the locktimeout database configuration parameter.
Other applications are not affected.
Usage notes
Be sure to complete all table operations and release all locks before starting an
import operation. This can be done by issuing a COMMIT after closing all cursors
opened WITH HOLD, or by issuing a ROLLBACK.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
Updates from the IMPORT command will always be committed at the end of an
IMPORT task. The IMPORT command can also perform automatic commits during
its execution to reduce the size of the lock list and the active log space. The
IMPORT command will roll back if the active log becomes full during IMPORT
processing.
v By default, automatic commits are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the
COMMITCOUNT parameter is not zero.
v Offline import does not perform automatic COMMITs if any of the following
conditions are true:
– The target is a view, not a table
– Compound inserts are used
– Buffered inserts are used
v By default, online import performs automatic commit to free both the active log
space and the lock list. Automatic commits are not performed only if a
COMMITCOUNT value of zero is specified.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
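For example, if the message file from a failed run shows that 30000 records were
committed, a sketch of the restart (the file and table names are illustrative) would
be:
db2 import from myfile.del of del commitcount 1000 restartcount 30000
   messages msgs2.txt insert into mytable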
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when
recreating tables from PC/IXF files. (Primary key definitions are preserved if the
data was previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60 KB), the message file returned to the user on the
client might be missing messages from the middle of the import operation. The
first 30 KB of message information and the last 30 KB of message information are
always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on
a hard drive rather than on diskettes.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand. The file copying step is not necessary if the
source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
the FORCEIN option is not specified, and there is no conversion table, the import
operation will fail.
For table objects on an 8 KB page that are close to the limit of 1012 columns,
import of PC/IXF data files might cause DB2 to return an error, because the
maximum size of an SQL statement was exceeded. This situation can occur only if
the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not
apply to import of DEL or ASC files. If PC/IXF files are being used to create a
new table, an alternative is to use db2look to dump the DDL statement that created
the table, and then to issue that statement through the CLP.
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import
(INSERT option) is supported. The RESTARTCOUNT parameter, but not the
COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF
file. The table returns to the state it was in when exported.
Importing a multiple-part PC/IXF file whose individual parts are copied from a
Windows system to an AIX system is supported. Only the name of the first file
must be specified in the IMPORT command. For example, IMPORT FROM data.ixf
OF IXF INSERT INTO TABLE1. The files data.002, and so on, should be available in the same
directory as data.ixf.
Security labels in their internal format might contain newline characters. If you
import the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, use the older default priority for delimiters
by specifying the delprioritychar file type modifier in the IMPORT command.
Federated considerations
If this modifier is specified, and the transaction log is not sufficiently large, the
import operation will fail. The transaction log must be large enough to
accommodate either the number of rows specified by COMMITCOUNT, or the
number of rows in the data file if COMMITCOUNT is not specified. It is
therefore recommended that the COMMITCOUNT option be specified to avoid
transaction log overflow.
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains
the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the
length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is
stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and
is 456 bytes long.
The LOBS FROM clause specifies where the LOB files are located when the
“lobsinfile” modifier is used. The LOBS FROM clause will implicitly activate the
LOBSINFILE behavior. The LOBS FROM clause conveys to the IMPORT utility
the list of paths to search for the LOB files while importing the data.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0-length LOB. For null LOBs with a length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
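As an illustration only (the file, table, and column layout are hypothetical), a DEL
input file for a table whose second column is a CLOB might contain rows with
LLS values such as:
"Anne",db2exp.001.0.100/,42
"Raj",db2exp.001.100.250/,43
"Nadia",db2exp.001.0.-1/,44
A matching import command would name the LOB path and the lobsinfile
modifier:
db2 import from myfile.del of del lobs from /u/lobdir
   modified by lobsinfile insert into mytable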
Table 15. Valid file type modifiers for the import utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the input data set. Converts character data from this code page to the
application code page during the import operation.
dateformat="x" x is the format of the date in the source file. A default value of 1 is assigned for
each element that is not specified. Some examples of date formats are:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
timeformat="x" x is the format of the time in the source file. A default value of 0 is assigned for
each element that is not specified. Some examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
timestampformat="x" x is the format of the time stamp in the source file. The valid values for the MMM
element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov' and
'dec'. These values are case insensitive.
The following example illustrates how to import data containing user-defined
date and time formats into a table called schedule:
db2 import from delfile2 of del
modified by timestampformat="yyyy.mm.dd hh:mm tt"
insert into schedule
usegraphiccodepage This modifier should be used in conjunction with the delimited data file
generated by drop table recovery only if the table being recovered has graphic
data.
Restrictions
The usegraphiccodepage modifier MUST NOT be specified with DEL files created
by the EXPORT utility, as these files contain data encoded in only one code page.
The usegraphiccodepage modifier is also ignored by the double-byte character
large objects (DBCLOBs) in files.
xmlchar Specifies that XML documents are encoded in the character code page.
This option is useful for processing XML documents that are encoded in the
specified character code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the character code page, otherwise the row containing
the document will be rejected. Note that the character codepage is the value
specified by the codepage file type modifier, or the application codepage if it is
not specified. By default, either the documents are encoded in Unicode, or they
contain a declaration tag with an encoding attribute.
xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.
This option is useful for processing XML documents that are encoded in a specific
graphic code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the graphic code page, otherwise the row containing
the document will be rejected. Note that the graphic code page is the graphic
component of the value specified by the codepage file type modifier, or the
graphic component of the application code page if it is not specified. By default,
documents are either encoded in Unicode, or they contain a declaration tag with
an encoding attribute.
Note: If the xmlgraphic modifier is specified with the IMPORT command, the
XML document to be imported must be encoded in the UTF-16 code page.
Otherwise, the XML document may be rejected with a parsing error, or it may be
imported into the table with data corruption.
Table 16. Valid file type modifiers for the import utility: ASC (non-delimited ASCII) file format
Modifier Description
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully imported if code page conversion causes
the source data to shrink; for example, 4-byte EUC data in the source could
shrink to 2-byte DBCS data in the target, and require half the space. This option
is particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
nullindchar=x x is a single character that denotes a null value. The default value of x is Y.
This modifier is case sensitive for EBCDIC data files, except when the character is
an English letter. For example, if the null indicator character is specified to be the
letter N, then n is also recognized as a null indicator.
reclen=x x is an integer with a maximum value of 32 767. x characters are read for each
row, and a new-line character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field.
If this option is not specified, blank spaces are kept.
This option cannot be specified together with striptnulls. These are mutually
exclusive options. This option replaces the obsolete t option, which is supported
for earlier compatibility only.
striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a
variable-length field. If this option is not specified, NULLs are kept.
This option cannot be specified together with striptblanks. These are mutually
exclusive options. This option replaces the obsolete padwithzero option, which is
supported for earlier compatibility only.
Table 17. Valid file type modifiers for the import utility: DEL (delimited ASCII) file format
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation
mark ("). The specified character is used in place of double quotation marks to
enclose a character string (see notes 3 and 4 below). If you want to explicitly specify the double quotation
mark as the character string delimiter, it should be specified as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter.
In the following example, chardel'' causes the import utility to interpret any
single quotation mark (') it encounters as a character string delimiter:
db2 "import from myfile.del of del
modified by chardel''
method p (1, 4) insert into staff (id, years)"
coldelx x is a single character column delimiter. The default value is a comma (,). The
specified character is used in place of a comma to signal the end of a column (see notes 3 and 4 below).
In the following example, coldel; causes the import utility to interpret any
semicolon (;) it encounters as a column delimiter:
db2 import from myfile.del of del
modified by coldel;
messages msgs.txt insert into staff
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank
space instead of a plus sign (+). The default action is to prefix positive decimal
values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The
default value is a period (.).
In the following example, decpt; causes the import utility to interpret any
semicolon (;) it encounters as a decimal point:
db2 "import from myfile.del of del
modified by chardel''
decpt; messages msgs.txt insert into staff"
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter,
column delimiter. This modifier protects existing applications that depend on the
older priority by reverting the delimiter priorities to: character delimiter, record
delimiter, column delimiter. Syntax:
db2 import ... modified by delprioritychar ...
Consider, for example, a data file that contains three <row delimiter> sequences,
the second of which appears between character delimiters within the first data
column of the second row. With the delprioritychar modifier specified, there will
be only two rows in this data file. The second <row delimiter> will be interpreted as part of the first data
column of the second row, while the first and the third <row delimiter> are
interpreted as actual record delimiters. If this modifier is not specified, there will
be three rows in this data file, each delimited by a <row delimiter>.
keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR,
LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks
that are not inside character delimiters are removed, and a NULL is inserted into
the table for all blank fields.
nochardel The import utility will assume all bytes found between the column delimiters to
be part of the column's data. Character delimiters will be parsed as part of
column data. This option should not be specified if the data was exported using
DB2 (unless nochardel was specified at export time). It is provided to support
vendor data files that do not have character delimiters. Improper usage might
result in data loss or corruption.
Table 18. Valid file type modifiers for the import utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress
translation between code pages.
nochecklengths Fixed length target fields are checked to verify that they are large enough for the
data. If nochecklengths is specified, no checking is done, and an attempt is made
to import each row.
indexixf Directs the utility to drop all indexes currently defined on the existing table, and
to create new ones from the index definitions in the PC/IXF file. This option can
only be used when the contents of a table are being replaced. It cannot be used
with a view, or when an insert-column is specified.
Note:
1. The import utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted,
the import operation fails, and an error code is returned.
2. Double quotation marks around the date format string are mandatory. Field
separators cannot contain any of the following: a-z, A-Z, and 0-9. The field
separator should not be the same as the character delimiter or field delimiter
in the DEL file format. A field separator is optional if the start and end
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and back slashes, must be
preceded by an escape character (for example, \).
3. Character values provided for the chardel, coldel, or decpt file type modifiers
must be specified in the code page of the source data.
The character code point (instead of the character symbol), can be specified
using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the
code point. For example, to specify the # character as a column delimiter, use
one of the following:
... modified by coldel# ...
... modified by coldel0x23 ...
... modified by coldelX23 ...
4. Delimiter considerations for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
5. The following file type modifiers are not allowed when importing into a
nickname:
v indexixf
v indexschema
v dldel
v nodefaults
v usedefaults
v no_type_id
v generatedignore
v generatedmissing
v identityignore
v identitymissing
v lobsinfile
6. The WSF file format is not supported for XML columns.
7. The CREATE mode is not supported for XML columns.
8. All XML data must reside in XML files that are separate from the main data
file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML
column in the main data file.
Quick link to “File type modifiers for the import utility” on page 103.
Authorization
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT and SELECT privilege on each participating table or view
v IMPORT to an existing table using the INSERT_UPDATE option, requires one of
the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT, SELECT, UPDATE and DELETE privilege on each participating table
or view
v IMPORT to an existing table using the REPLACE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on the table or view
– INSERT, SELECT, and DELETE privilege on the table or view
v IMPORT to a new table using the CREATE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
Required connection
Command syntax
IMPORT FROM filename OF filetype
,
METHOD L-method_specs
,
N ( column-name )
,
P ( column-position )
XMLPARSE STRIP WHITESPACE
PRESERVE
ALLOW NO ACCESS
XMLVALIDATE USING XDS-specs ALLOW WRITE ACCESS
SCHEMA schema-sqlid
SCHEMALOCATION HINTS
COMMITCOUNT n RESTARTCOUNT n ROWCOUNT n
AUTOMATIC SKIPCOUNT
WARNINGCOUNT n NOTIMEOUT
create-table-specs:
table-name
,
( insert-column )
hierarchy description AS ROOT TABLE
UNDER sub-table-name
table-name
,
( insert-column )
hierarchy description
hierarchy description:
ALL TABLES
sub-table-list HIERARCHY STARTING sub-table-name
IN traversal-order-list
L-method-specs:
L ( column-start column-end )
,
sub-table-list:
( sub-table-name )
,
( insert-column )
traversal-order-list:
( sub-table-name )
tblspace-specs:
IN tablespace-name
INDEX IN tablespace-name LONG IN tablespace-name
XDS-specs:
XDS
DEFAULT schema-sqlid ,
IGNORE ( schema-sqlid )
Command parameters
ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the
default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table
is acquired before any rows are inserted. This prevents concurrent
applications from accessing table data. This is the default import behavior.
ALLOW WRITE ACCESS
Runs import in the online mode. An intent exclusive (IX) lock on the target
table is acquired when the first row is inserted. This allows concurrent
readers and writers to access table data. Online mode is not compatible
with the REPLACE, CREATE, or REPLACE_CREATE import options.
Online mode is not supported in conjunction with buffered inserts. The
import operation will periodically commit inserted data to prevent lock
escalation to a table lock and to avoid running out of active log space.
These commits will be performed even if the COMMITCOUNT option
was not used. During each commit, import will lose its IX table lock, and
will attempt to reacquire it after the commit. This parameter is required
when you import to a nickname and COMMITCOUNT must be specified
with a valid number (AUTOMATIC is not considered a valid option).
AS ROOT TABLE
Creates one or more sub-tables as a stand-alone table hierarchy.
COMMITCOUNT n | AUTOMATIC
Performs a COMMIT after every n records are imported. When a number n
is specified, import performs a COMMIT after every n records are
imported. When compound inserts are used, a user-specified commit
frequency of n is rounded up to the first integer multiple of the compound
count value. When AUTOMATIC is specified, import internally determines
when a commit needs to be performed. The utility will commit for either
one of two reasons:
v to avoid running out of active log space
v to avoid lock escalation from row level to table level
If the ALLOW WRITE ACCESS option is specified, and the
COMMITCOUNT option is not specified, the import utility will perform
commits as if COMMITCOUNT AUTOMATIC had been specified.
The ability of the import operation to avoid running out of active log space
is affected by the DB2 registry variable
DB2_FORCE_APP_ON_MAX_LOG:
v If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the
COMMITCOUNT AUTOMATIC command option is specified, the
import utility will be able to automatically avoid running out of active
log space.
v If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the
COMMITCOUNT n command option is specified, the import utility will
Note: If the data was exported from an MVS host database, and it contains
LONGVAR fields whose lengths, calculated on the page size, are more
than 254, CREATE might fail because the rows are too long. See “Imported
table re-creation” for a list of restrictions. In this case, the table should be
created manually, and IMPORT with INSERT should be invoked, or,
alternatively, the LOAD command should be used.
DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is
specified. The schema specified through the DEFAULT clause identifies a
schema to use for validation when the XML Data Specifier (XDS) of an
imported XML document does not contain an SCH attribute identifying an
XML Schema.
The DEFAULT clause takes precedence over the IGNORE and MAP
clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP
specifications will be ignored.
FROM filename
HIERARCHY
Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is
Note: Specifying which table space will contain an index can only be done
when the table is created.
insert-column
Specifies the name of a column in the table or the view into which data is
to be inserted.
INSERT
Adds the imported data to the table without changing the existing table
data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows
(of the target table) with matching primary keys.
INTO table-name
Specifies the database table into which the data is to be imported. This
table cannot be a system table, a declared temporary table or a summary
table.
One can use an alias for INSERT, INSERT_UPDATE, or REPLACE, except
in the case of an earlier server, when the fully qualified or the unqualified
table name should be used. A qualified table name is in the form:
schema.tablename. The schema is the user name under which the table was
created.
LOBS FROM lob-path
The names of the LOB data files are stored in the main data file (ASC,
DEL, or IXF), in the column that will be loaded into the LOB column. The
maximum number of paths that can be specified is 999. This will implicitly
activate the LOBSINFILE behavior.
This parameter is not valid when you import to a nickname.
Note: This method can only be used with ASC files, and is the
only valid option for that file type.
N Specifies the names of the columns in the data file to be imported.
The case of these column names must match the case of the
corresponding names in the system catalogs. Each table column
that is not nullable should have a corresponding entry in the
METHOD N list. For example, given data fields F1, F2, F3, F4, F5,
and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT
NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid
request, while method N (F2, F1) is not valid.
Note: This method can only be used with IXF or DEL files, and is
the only valid option for the DEL file type.
MODIFIED BY filetype-mod
Specifies file type modifier options. See “File type modifiers for the import
utility” on page 103.
NOTIMEOUT
Specifies that the import utility will not time out while waiting for locks.
This option supersedes the locktimeout database configuration parameter.
Other applications are not affected.
Usage notes
Be sure to complete all table operations and release all locks before starting an
import operation. This can be done by issuing a COMMIT after closing all cursors
opened WITH HOLD, or by issuing a ROLLBACK.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
Updates from the IMPORT command will always be committed at the end of an
IMPORT task. The IMPORT command can also perform automatic commits during
its execution to reduce the size of the lock list and the active log space. The
IMPORT command will rollback if the active log becomes full during IMPORT
processing.
v By default, automatic commits are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the
COMMITCOUNT parameter is not zero.
v Offline import does not perform automatic COMMITs if any of the following
conditions are true:
– The target is a view, not a table
– Compound inserts are used
– Buffered inserts are used
v By default, online import performs automatic commit to free both the active log
space and the lock list. Automatic commits are not performed only if a
COMMITCOUNT value of zero is specified.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when
recreating tables from PC/IXF files. (Primary key definitions are preserved if the
data was previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60 KB), the message file returned to the user on the
client might be missing messages from the middle of the import operation. The
first 30 KB of message information and the last 30 KB of message information are
always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on
a hard drive rather than on diskettes.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand. The file copying step is not necessary if the
source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
For table objects on an 8 KB page that are close to the limit of 1012 columns,
import of PC/IXF data files might cause DB2 to return an error, because the
maximum size of an SQL statement was exceeded. This situation can occur only if
the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not
apply to import of DEL or ASC files. If PC/IXF files are being used to create a
new table, an alternative is use db2look to dump the DDL statement that created
the table, and then to issue that statement through the CLP.
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import
(INSERT option) is supported. The RESTARTCOUNT parameter, but not the
COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF
file. The table returns to the state it was in when exported.
Importing a multiple-part PC/IXF file whose individual parts are copied from a
Windows system to an AIX system is supported. Only the name of the first file
must be specified in the IMPORT command. For example, IMPORT FROM data.ixf
OF IXF INSERT INTO TABLE1. The files data.002, and so on, should be available in the
same directory as data.ixf.
Security labels in their internal format might contain newline characters. If you
import the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, revert to the older priority for delimiters
by specifying the delprioritychar file type modifier in the IMPORT command.
Federated considerations
When you use the IMPORT command with the INSERT, UPDATE, or INSERT_UPDATE
parameter, you must ensure that you have CONTROL privilege on the participating
nickname, and that the nickname you want to use for the import operation already
exists.
If this modifier is specified, and the transaction log is not sufficiently large, the
import operation will fail. The transaction log must be large enough to
accommodate either the number of rows specified by COMMITCOUNT, or the
number of rows in the data file if COMMITCOUNT is not specified. It is
therefore recommended that the COMMITCOUNT option be specified to avoid
transaction log overflow.
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains
the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the
length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is
stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and
is 456 bytes long.
The LOBS FROM clause specifies where the LOB files are located when the
“lobsinfile” modifier is used. The LOBS FROM clause will implicitly activate the
LOBSINFILE behavior. The LOBS FROM clause conveys to the IMPORT utility
the list of paths to search for the LOB files while importing the data.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0 length LOB. For null LOBS with length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
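For illustration only (the table name mytable and the LOB path /u/user/lobs are
assumptions, not taken from this manual), an import that reads LOB data through LLS
entries might look like this:
db2 import from myfile.del of del
   lobs from /u/user/lobs
   modified by lobsinfile
   insert into mytable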
Table 21. Valid file type modifiers for the import utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the input data set. Converts character data from this code page to the
application code page during the import operation.
A default value of 1 is assigned for each element that is not specified. Some
examples of date formats are:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
A default value of 0 is assigned for each element that is not specified. Some
examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
The valid values for the MMM element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun',
'jul', 'aug', 'sep', 'oct', 'nov' and 'dec'. These values are case insensitive.
The following example illustrates how to import data containing user defined
date and time formats into a table called schedule:
db2 import from delfile2 of del
modified by timestampformat="yyyy.mm.dd hh:mm tt"
insert into schedule
This modifier should be used in conjunction with the delimited data file
generated by drop table recovery only if the table being recovered has graphic
data.
Restrictions
The usegraphiccodepage modifier MUST NOT be specified with DEL files created
by the EXPORT utility, as these files contain data encoded in only one code page.
The usegraphiccodepage modifier is also ignored for double-byte character large
objects (DBCLOBs) in files.
xmlchar Specifies that XML documents are encoded in the character code page.
This option is useful for processing XML documents that are encoded in the
specified character code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the character code page, otherwise the row containing
the document will be rejected. Note that the character codepage is the value
specified by the codepage file type modifier, or the application codepage if it is
not specified. By default, either the documents are encoded in Unicode, or they
contain a declaration tag with an encoding attribute.
xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.
This option is useful for processing XML documents that are encoded in a specific
graphic code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the graphic code page, otherwise the row containing
the document will be rejected. Note that the graphic code page is the graphic
component of the value specified by the codepage file type modifier, or the
graphic component of the application code page if it is not specified. By default,
documents are either encoded in Unicode, or they contain a declaration tag with
an encoding attribute.
Note: If the xmlgraphic modifier is specified with the IMPORT command, the
XML document to be imported must be encoded in the UTF-16 code page.
Otherwise, the XML document may be rejected with a parsing error, or it may be
imported into the table with data corruption.
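As a sketch only (the table name, file name, and path below are assumptions chosen
for illustration), an import of XML documents that are encoded in the character code
page and do not contain an encoding declaration might look like this:
db2 import from data.del of del
   xml from /home/user/xmlpath
   modified by xmlchar
   insert into t1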
Table 22. Valid file type modifiers for the import utility: ASC (non-delimited ASCII) file format
Modifier Description
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully imported if code page conversion causes
the source data to shrink; for example, 4-byte EUC data in the source could
shrink to 2-byte DBCS data in the target, and require half the space. This option
is particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a null value to x. The default
value of x is Y. This modifier is case sensitive for EBCDIC data files, except when the
character is an English letter. For example, if the null indicator character is specified
to be the letter N, then n is also recognized as a null indicator.
reclen=x x is an integer with a maximum value of 32 767. x characters are read for each
row, and a new-line character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field.
If this option is not specified, blank spaces are kept.
This option cannot be specified together with striptnulls. These are mutually
exclusive options. This option replaces the obsolete t option, which is supported
for backward compatibility only.
striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a
variable-length field. If this option is not specified, NULLs are kept.
This option cannot be specified together with striptblanks. These are mutually
exclusive options. This option replaces the obsolete padwithzero option, which is
supported for backward compatibility only.
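The following is a minimal sketch of an ASC import that combines several of these
modifiers; the file name, record length, column positions, and target columns are
assumptions chosen for illustration:
db2 import from datafile.asc of asc
   modified by reclen=80 striptblanks nochecklengths
   method l (1 10, 11 30)
   insert into staff (id, name)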
Table 23. Valid file type modifiers for the import utility: DEL (delimited ASCII) file format
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation
mark ("). The specified character is used in place of double quotation marks to
enclose a character string. If you want to explicitly specify the double quotation
mark as the character string delimiter, it should be specified as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter.
In the following example, chardel'' causes the import utility to interpret any
single quotation mark (') it encounters as a character string delimiter:
db2 "import from myfile.del of del
modified by chardel''
method p (1, 4) insert into staff (id, years)"
coldelx x is a single character column delimiter. The default value is a comma (,). The
specified character is used in place of a comma to signal the end of a column.
In the following example, coldel; causes the import utility to interpret any
semicolon (;) it encounters as a column delimiter:
db2 import from myfile.del of del
modified by coldel;
messages msgs.txt insert into staff
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank
space instead of a plus sign (+). The default action is to prefix positive decimal
values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character.
The default value is a period (.). The specified character is used in place of a
period as a decimal point character.
In the following example, decpt; causes the import utility to interpret any
semicolon (;) it encounters as a decimal point:
db2 "import from myfile.del of del
modified by chardel''
decpt; messages msgs.txt insert into staff"
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter,
column delimiter. This modifier protects existing applications that depend on the
older priority by reverting the delimiter priorities to: character delimiter, record
delimiter, column delimiter. Syntax:
db2 import ... modified by delprioritychar ...
For example, consider a data file that contains three <row delimiter> characters, where
the second <row delimiter> occurs within character delimiters in the first data column
of the second row. With the delprioritychar modifier specified, there will be only two
rows in this data file. The second <row delimiter> will be interpreted as part of the
first data column of the second row, while the first and the third <row delimiter> are
interpreted as actual record delimiters. If this modifier is not specified, there will
be three rows in this data file, each delimited by a <row delimiter>.
keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR,
LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks
that are not inside character delimiters are removed, and a NULL is inserted into
the table for all blank fields.
nochardel The import utility will assume all bytes found between the column delimiters to
be part of the column's data. Character delimiters will be parsed as part of
column data. This option should not be specified if the data was exported using
DB2 (unless nochardel was specified at export time). It is provided to support
vendor data files that do not have character delimiters. Improper usage might
result in data loss or corruption.
Table 24. Valid file type modifiers for the import utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress
translation between code pages.
Fixed length target fields are checked to verify that they are large enough for the
data. If nochecklengths is specified, no checking is done, and an attempt is made
to import each row.
indexixf Directs the utility to drop all indexes currently defined on the existing table, and
to create new ones from the index definitions in the PC/IXF file. This option can
only be used when the contents of a table are being replaced. It cannot be used
with a view, or when an insert-column is specified.
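As an illustrative sketch (the file and table names are assumptions), a PC/IXF import
that suppresses code page translation and rebuilds the indexes from the definitions
stored in the file might look like this:
db2 import from data.ixf of ixf
   modified by forcein indexixf
   replace into staff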
Note:
1. The import utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted,
the import operation fails, and an error code is returned.
2. Double quotation marks around the date format string are mandatory. Field
separators cannot contain any of the following: a-z, A-Z, and 0-9. The field
separator should not be the same as the character delimiter or field delimiter
in the DEL file format. A field separator is optional if the start and end positions
of an element are unambiguous. Ambiguity can exist if (depending on the modifier)
elements such as D, H, M, or S are used, because of the variable length of the entries.
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and backslashes, must be
preceded by an escape character (for example, \).
3. Character values provided for the chardel, coldel, or decpt file type modifiers
must be specified in the code page of the source data.
The character code point (instead of the character symbol) can be specified
using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the
code point. For example, to specify the # character as a column delimiter, use
one of the following:
... modified by coldel# ...
... modified by coldel0x23 ...
... modified by coldelX23 ...
4. Delimiter considerations for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
5. The following file type modifiers are not allowed when importing into a
nickname:
v indexixf
v indexschema
v dldel
v nodefaults
v usedefaults
v no_type_id
v generatedignore
v generatedmissing
v identityignore
v identitymissing
v lobsinfile
6. The WSF file format is not supported for XML columns.
7. The CREATE mode is not supported for XML columns.
8. All XML data must reside in XML files that are separate from the main data
file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML
column in the main data file.
Authorization
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view or nickname
– INSERT and SELECT privilege on each participating table or view
v IMPORT to an existing table using the INSERT_UPDATE option, requires one of
the following:
– sysadm
– dbadm
– CONTROL privilege on the table, view or nickname
– INSERT, SELECT, UPDATE and DELETE privilege on each participating table
or view
v IMPORT to an existing table using the REPLACE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on the table or view
– INSERT, SELECT, and DELETE privilege on the table or view
v IMPORT to a new table using the CREATE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
Required connection
SQL_API_RC SQL_API_FN
db2gImport (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
Note: If an error occurs after the existing data is deleted, that data
is lost.
This parameter is not valid when you import to a nickname.
tname The name of the table, typed table, view, or object view into which
the data is to be inserted. An alias for REPLACE,
INSERT_UPDATE, or INSERT can be specified, except in the case
of an earlier server, when a qualified or unqualified name should
be specified. If it is a view, it cannot be a read-only view.
tcolumn-list
A list of table or view column names into which the data is to be
inserted. The column names must be separated by commas. If
column names are not specified, column names as defined in the
CREATE TABLE or the ALTER TABLE statement are used. If no
column list is specified for typed tables, data is inserted into all
columns within each sub-table.
sub-table-name
Specifies a parent table when creating one or more sub-tables
under the CREATE option.
ALL TABLES
An implicit keyword for hierarchy only. When importing a
hierarchy, the default is to import all tables specified in the
traversal-order-list.
HIERARCHY
Specifies that hierarchical data is to be imported.
Usage notes
Before starting an import operation, you must complete all table operations and
release all locks in one of two ways:
v Close all open cursors that were defined with the WITH HOLD clause, and
commit the data changes by executing the COMMIT statement.
v Roll back the data changes by executing the ROLLBACK statement.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
By default, automatic COMMITs are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the *piCommitcount
parameter is not zero. A full log results in a ROLLBACK.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
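For illustration, the equivalent CLP options can be used to commit periodically and to
restart after a failure; the file name, table name, and counts below are assumptions:
db2 import from myfile.ixf of ixf
   commitcount 1000 restartcount 30000
   messages msgs.txt insert into mytable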
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when creating
tables from PC/IXF files. (Primary key definitions are preserved if the data was
previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60 KB), the message file returned to the user on the
client may be missing messages from the middle of the import operation. The first
30 KB of message information and the last 30 KB of message information are
always retained.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
the FORCEIN option is not specified, and there is no conversion table, the import
operation will fail. This applies only to PC/IXF files on DB2 for AIX clients.
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import
(INSERT option) is supported. The restartcnt parameter, but not the commitcnt
parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file. The
import utility can be used to recover a table previously exported to a PC/IXF file.
The table returns to the state it was in when exported.
Federated considerations
When using the db2Import API and the INSERT, UPDATE, or INSERT_UPDATE
parameters, you must ensure that you have CONTROL privilege on the
participating nickname. You must ensure that the nickname you wish to use when
doing an import operation already exists.
SQL3150N The H record in the PC/IXF file has product "DB2 01.00", date
"19970220", and time "140848".
SQL3110N The utility has completed processing. "58" rows were read from the
input file.
Example 2
The following example shows how to import into a table that has identity
columns:
The following command generates identity values for rows 1 and 2, since no
identity values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however,
are assigned the user-supplied identity values of 100 and 101, respectively.
db2 import from datafile1.del of del replace into table1
To import DATAFILE1 into TABLE1 so that identity values are generated for all
rows, issue one of the following commands:
db2 import from datafile1.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
db2 import from datafile1.del of del modified by identityignore
replace into table1
To import DATAFILE2 into TABLE1 so that identity values are generated for each
row, issue one of the following commands:
db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
db2 import from datafile2.del of del modified by identitymissing
replace into table1
Example 3
The following example shows how to import into a table that has null indicators:
Data Records:
1...5....10...15...20...25...30...35...40
Test data 1 XXN 123abcdN
Test data 2 and 3 QQY wxyzN
Test data 4,5 and 6 WWN6789 Y
Note:
1. Since COL4 is not provided in the input file, it will be inserted into TABLE1
with its default value (it is defined NOT NULL WITH DEFAULT).
2. Positions 23 and 32 are used to indicate whether COL2 and COL3 of TABLE1
will be loaded NULL for a given row. If there is a Y in the column's null
indicator position for a given record, the column will be NULL. If there is an N,
the data values in the column's data positions of the input record (as defined in
L(........)) are used as the source of column data for the row. In this example,
neither column in row 1 is NULL; COL2 in row 2 is NULL; and COL3 in row 3
is NULL.
3. In this example, the NULL INDICATORS for COL1 and COL5 are specified as
0 (zero), indicating that the data is not nullable.
4. The NULL INDICATOR for a given column can be anywhere in the input
record, but the position must be specified, and the Y or N values must be
supplied.
The load process consists of four distinct phases (see Figure 3):
1. Load
During the load phase, data is loaded into the table, and index keys and table
statistics are collected, if necessary. Save points, or points of consistency, are
established at intervals specified through the SAVECOUNT parameter in the
LOAD command. Messages are generated, indicating how many input rows
were successfully loaded at the time of the save point.
2. Build
During the build phase, indexes are produced based on the index keys
collected during the load phase. The index keys are sorted during the load
phase, and index statistics are collected (if the STATISTICS USE PROFILE
option was specified, and profile indicates collecting index statistics). The
statistics are similar to those collected through the RUNSTATS command.
3. Delete
During the delete phase, the rows that caused a unique or primary key
violation are removed from the table. These deleted rows are stored in the load
exception table, if one was specified.
4. Index copy
During the index copy phase, the index data is copied from a system
temporary table space to the original table space. This will only occur if a
system temporary table space was specified for index creation during a load
operation with the READ ACCESS option specified.
Figure 3. The Four Phases of the Load Process: Load, Build, Delete, and Index Copy.
Each phase runs from its start point to its end point before the next phase begins.
Note: After you invoke the load utility, you can use the LIST UTILITIES command
to monitor the progress of the load operation.
Load modes
v INSERT
In this mode, load appends input data to the table without making any changes
to the existing data.
v REPLACE
In this mode, load deletes existing data from the table and populates it with the
input data.
v RESTART
In this mode, an interrupted load is resumed. In most cases, the load is resumed
from the phase it failed in. If that phase was the load phase, the load is resumed
from the last successful consistency point.
v TERMINATE
In this mode, a failed load operation is rolled back.
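For example, a failed load into the STAFF table could be resumed or rolled back with
commands like the following (the file and table names are illustrative assumptions):
db2 load from staff.del of del messages staff.msgs restart into staff
db2 load from staff.del of del messages staff.msgs terminate into staff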
Note: The load utility does not honor the COMPACT lob option.
v A message file name. During load operations, you can specify that message files
be created to contain the error, warning, and informational messages associated
with those operations. Specify the name of these files with the MESSAGES
parameter.
Note:
1. You can only view the contents of a message file after the operation is
finished. If you wish to view load messages while a load operation is
running, you can use the LOAD QUERY command.
2. Each message in a message file begins on a new line and contains
information provided by the DB2 message retrieval facility.
v Whether column values being loaded have implied decimal points.
v Whether the utility should modify the amount of free space available after a
table is loaded.
v Whether statistics are to be gathered during the load process. This option is only
supported if the load operation is running in REPLACE mode. Statistics are
collected according to the profile defined for the table.
v The fully qualified path to be used when creating temporary files during a load
operation. The name is specified by the TEMPFILES PATH parameter of the
LOAD command. The default value is the database path. The path resides on
the server machine, and is accessed by the DB2 instance exclusively. Therefore,
any path name qualification given to this parameter must reflect the directory
structure of the server, not the client, and the DB2 instance owner must have
read and write permission on the path.
To load data into a table, you must have one of the following:
v SYSADM authority
v DBADM authority
v LOAD authority on the database and
– INSERT privilege on the table when the load utility is invoked in INSERT
mode, TERMINATE mode (to terminate a previous load insert operation), or
RESTART mode (to restart a previous load insert operation)
– INSERT and DELETE privilege on the table when the load utility is invoked
in REPLACE mode, TERMINATE mode (to terminate a previous load replace
operation), or RESTART mode (to restart a previous load replace operation)
– INSERT privilege on the exception table, if such a table is used as part of the
load operation.
– SELECT privilege on SYSCAT.TABLES is required in some cases where LOAD
queries the catalog tables.
Since all load processes (and all DB2 server processes, in general) are owned by
the instance owner, and all of these processes use the identification of the instance
owner to access needed files, the instance owner must have read access to the
input data files, regardless of who invokes the command.
If the REPLACE option is specified, the session authorization ID must have the
authority to drop the table.
Note:
v To load data into a table that has protected columns, the session authorization
ID must have LBAC credentials that allow write access to all protected columns
in the table.
v To load data into a table that has protected rows, the session authorization ID
must have been granted a security label for write access that is part of the
security policy protecting the table.
LOAD authority
Users having LOAD authority at the database level, as well as INSERT privilege on
a table, can use the LOAD command to load data into a table.
Users having LOAD authority at the database level, as well as INSERT privilege on
a table, can LOAD RESTART or LOAD TERMINATE if the previous load operation
is a load to insert data.
Users having LOAD authority at the database level, as well as the INSERT and
DELETE privileges on a table, can use the LOAD REPLACE command.
If the previous load operation was a load replace, the DELETE privilege must also
have been granted to that user before the user can LOAD RESTART or LOAD
TERMINATE.
If the exception tables are used as part of a load operation, the user must have
INSERT privilege on the exception tables.
A user with this authority can also run the QUIESCE TABLESPACES FOR TABLE,
RUNSTATS, and LIST TABLESPACES commands.
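As a sketch of the grants needed for a load insert operation (the user name someuser
and the table STAFF are assumptions), a database administrator might issue:
db2 grant load on database to user someuser
db2 grant insert on table staff to user someuser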
Loading data
The load utility is capable of efficiently moving large quantities of data into newly
created tables, or into tables that already contain data.
Before invoking the load utility, you must be connected to (or be able to implicitly
connect to) the database into which the data will be loaded. Since the utility will
issue a COMMIT statement, you should complete all transactions and release all
locks by issuing either a COMMIT or a ROLLBACK statement before invoking the
load utility. Data is loaded in the sequence that appears in the input file, except
when using multidimensional clustering (MDC) tables, partitioned tables, or the
anyorder file type modifier. If a particular sequence is desired, sort the data before
attempting a load operation. If clustering is required, the data should be sorted on
the clustering index prior to loading. When loading data into multidimensional
clustered tables (MDC), sorting is not required prior to the load operation, and
data is clustered according to the MDC table definition. When loading data into
partitioned tables, sorting is not required prior to the load operation, and data is
partitioned according to the table definition.
These are some of the restrictions that apply to the load utility (this list is not
exhaustive):
v Loading data into nicknames is not supported.
v Loading data into typed tables, or tables with structured type columns, is not
supported.
v Loading data into declared temporary tables is not supported.
v XML data can only be read from the server side; if you want to have the XML
files read from the client, use the import utility.
v You cannot create or drop tables in a table space that is in Backup Pending state.
v You cannot load data into a database accessed through DB2 Connect or a server
level prior to DB2 Version 2. Options that are only available with the current
release cannot be used with a server from the previous release.
v If an error occurs during a LOAD REPLACE operation, the original data in the
table is lost. Retain a copy of the input data to allow the load operation to be
restarted.
v Triggers are not activated on newly loaded rows. Business rules associated with
triggers are not enforced by the load utility.
v Loading encrypted data is not supported.
These are some of the restrictions that apply to the load utility when loading into a
partitioned table (this list is not exhaustive):
v Consistency points are not supported when the number of partitioning agents is
greater than one.
v Loading data into a subset of data partitions while keeping the remaining data
partitions fully online is not supported.
v The exception table used by a load operation or a set integrity pending
operation cannot be partitioned.
v A unique index cannot be rebuilt when the load utility is running in insert mode
or restart mode, and the load target table has any detached dependents.
The load utility can be invoked through the command line processor (CLP), the
Load wizard in the Control Center, or the db2Load application programming
interface (API).
Procedure
1. From the Control Center, expand the object tree until you find the Tables folder.
2. Click on the Tables folder. Any existing tables are displayed in the pane on the
right side of the window (the contents pane).
3. In the contents pane, right-click on the table you want, and select Load from
the pop-up menu. The Load wizard opens.
4. Specify the required information on each page of the wizard to successfully
load your data.
Results
Detailed information about the Load wizard is provided through its online help
facility.
In this example:
v Any warning or error messages are placed in the staff.msgs file.
v A copy of the changes made is stored in Tivoli® Storage Manager (TSM).
v Four thousand pages of buffer space are to be used during the load operation.
The following is another example of a LOAD command issued through the CLP:
db2 load from stafftab.ixf of ixf messages staff.msgs
tempfiles path /u/myuser replace into staff
In this example:
v The table data is being replaced.
v The TEMPFILES PATH parameter is used to specify /u/myuser as the server path
into which temporary files will be written.
Note: These examples use relative path names for the load input file. Relative path
names are only allowed on calls from a client on the same database partition as the
database. The use of fully qualified path names is recommended.
What to do next
After you invoke the load utility, you can use the LIST UTILITIES command to
monitor the progress of the load operation. In the case of a load operation
performed in either INSERT mode, REPLACE mode, or RESTART mode, detailed
progress monitoring support is available. Issue the LIST UTILITIES command with
the SHOW DETAIL option to view detailed information about the current load
phase. Details are not available for a load operation performed in TERMINATE
mode. The LIST UTILITIES command will simply show that a load terminate
utility is currently running.
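For example, the following command displays detailed progress information for all
utilities that are currently running, including any load operations:
db2 list utilities show detail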
When loading data into an XML table column, you can use the XML FROM option
to specify the paths of the input XML data file or files. For example, to load data
from an XML file "/home/user/xmlpath/xmlfile1.xml" you could use the
following command:
LOAD FROM data1.del OF DEL XML FROM /home/user/xmlpath INSERT INTO USER.T1
In this case, the XDS contains an SCH attribute with the fully qualified SQL
identifier of the XML schema to use for validation, "S1.SCHEMA_A":
<XDS FIL='xmldata.ext' OFF='123' LEN='456' SCH='S1.SCHEMA_A' />
You can use the XMLPARSE option to specify whether whitespace in the loaded
XML documents is preserved or stripped. In the following example, all loaded
XML documents are validated against the schema with SQL identifier
"S2.SCHEMA_A" and these documents are parsed with whitespace preserved:
LOAD FROM data2.del OF DEL XML FROM /home/user/xmlpath XMLPARSE PRESERVE
WHITESPACE XMLVALIDATE USING SCHEMA S2.SCHEMA_A INSERT INTO USER.T1
The load utility does not access any detached or attached data partitions. Data is
inserted into visible data partitions only. Visible data partitions are neither attached
nor detached.
Because exception table inserts are expensive, you can control which constraint
violations are inserted into the exception table. For instance, the default behavior of
the load utility is to insert rows that were rejected because of a range constraint or
unique constraint violation, but were otherwise valid, into the exception table. You
can turn off this behavior by specifying, respectively, NORANGEEXC or
NOUNIQUEEXC with the FOR EXCEPTION clause. If you specify that these
constraint violations should not be inserted into the exception table, or you do not
specify an exception table, information about rows violating the range constraint or
unique constraint is lost.
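As an illustration (the table names SALES and SALESEXC are assumptions), the
following load sends otherwise-valid rows that violate a unique constraint to the
exception table, while information about rows that violate a range constraint is
discarded:
db2 load from data.del of del
   insert into sales for exception salesexc norangeexc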
History file
If the target table is partitioned, the corresponding history file entry does not
include a list of the table spaces spanned by the target table. A different operation
granularity identifier ('R' instead of 'T') indicates that a load operation ran against
a partitioned table.
Generated columns
If a generated column is in any of the partitioning, dimension, or distribution keys,
the generatedoverride file type modifier is ignored and the load utility generates
values as if the generatedignore file type modifier is specified. Loading an
incorrect generated column value in this case can place the record in the wrong
physical location, such as the wrong data partition, MDC block or database
partition. For example, once a record is on a wrong data partition, set integrity has
to move it to a different physical location, which cannot be accomplished during
online set integrity operations.
Data availability
The current ALLOW READ ACCESS load algorithm extends to partitioned tables.
An ALLOW READ ACCESS load operation allows concurrent readers to access the
whole table, including both loading and non-loading data partitions.
Error isolation
Error isolation at the data partition level is not supported. Isolating the errors
means continuing a load on data partitions that did not run into an error and
stopping on data partitions that did run into an error. Errors can be isolated
between different database partitions, but the load utility cannot commit
transactions on a subset of visible data partitions and roll back the remaining
visible data partitions.
Other considerations
v Incremental indexing is not supported if any of the indexes are marked invalid.
An index is considered invalid if it requires a rebuild or if detached dependents
require validation with the SET INTEGRITY statement.
v Loading into tables partitioned using any combination of partitioned by range,
distributed by hash, or organized by dimension algorithms is also supported.
v The size of the log records that include the list of object and table space IDs
affected by the load (the LOAD START and COMMIT (PENDING LIST) log records)
could grow considerably and hence reduce the amount of active log space
available to other applications.
v When a table is both partitioned and distributed, a partitioned database load
might not affect all database partitions. Only the objects on the output database
partitions are changed.
v During a load operation, memory consumption for partitioned tables increases
with the number of tables. Note that the total increase is not linear, as only a
small percentage of the overall memory requirement is proportional to the
number of data partitions.
If you do not have valid LBAC credentials, the load fails and an error (SQLSTATE
42512) is returned. In cases where the input data does not contain a security label
or that security label is not in its internal binary format, you can use several file
type modifiers to allow your load to proceed.
When you load data into a table with protected rows, the target table has one
column with a data type of DB2SECURITYLABEL. If the input row of data does
not contain a value for that column, that row is rejected unless the usedefaults file
type modifier is specified in the load command, in which case the security label
you hold for write access from the security policy protecting the table is used. If
you do not hold a security label for write access, the row is rejected and processing
continues on to the next row.
When you load data into a table that has protected rows and the input data does
include a value for the column with a data type of DB2SECURITYLABEL, the
same rules are followed as when you insert data into that table. If the security
label protecting the row being loaded (the one in that row of the data file) is one
that you are able to write to, then that security label is used to protect the row. (In
other words, it is written to the column that has a data type of
DB2SECURITYLABEL.) If you are not able to write to a row protected by that
security label, what happens depends on how the security policy protecting the
source table was created:
v If the CREATE SECURITY POLICY statement that created the policy included
the option RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL, the row
is rejected.
v If the CREATE SECURITY POLICY statement did not include the option or if it
instead included the OVERRIDE NOT AUTHORIZED WRITE SECURITY
LABEL option, the security label in the data file for that row is ignored and the
security label you hold for write access is used to protect that row. No error or
warning is issued in this case. If you do not hold a security label for write
access, the row is rejected and processing continues on to the next row.
Delimiter considerations
When loading data into a column with a data type of DB2SECURITYLABEL, the
value in the data file is assumed by default to be the actual bytes that make up the
internal representation of that security label. However, some raw data might
contain newline characters which could be misinterpreted by the LOAD command
as delimiting the row. If you have this problem, use the delprioritychar file type
modifier to ensure that the character delimiter takes precedence over the row
delimiter. When you use delprioritychar, any record or column delimiters that
are contained within character delimiters are not recognized as being delimiters.
Using the delprioritychar file type modifier is safe to do even if none of the
values contain a newline character, but it does slow the load down slightly.
If the data being loaded is in ASC format, you might have to take an extra step in
order to prevent any trailing white space from being included in the loaded
security labels and security label names. The ASC format uses column positions as
delimiters, so this might occur when loading into variable-length fields. Use the
striptblanks file type modifier to truncate any trailing blank spaces.
You can also load a data file in which the values of the security label column are
security label names. To load this sort of file you must use the file type modifier
seclabelname. When you use seclabelname, all values for columns with a data type
of DB2SECURITYLABEL are assumed to be string constants containing the names
of existing security labels. If no security label exists with the indicated name for
the security policy protecting the table, the row is not loaded and a warning
(SQLSTATE 01H53) is returned.
Rejected rows
Rows that are rejected during the load are sent to either a dumpfile or an
exception table (if they are specified in the LOAD command), depending on the
reason why the rows were rejected. Rows that are rejected due to parsing errors
are sent to the dumpfile. Rows that violate security policies are sent to the
exception table.
Examples
For all examples, the input data file myfile.del is in DEL format. All are loading
data into a table named REPS, which was created with this statement:
create table reps (row_label db2securitylabel,
id integer,
name char(30))
security policy data_access_policy
For this example, the input file is assumed to contain security labels in the default
format:
db2 load from myfile.del of del modified by delprioritychar insert into reps
For this example, the input file is assumed to contain security labels in the security
label string format:
db2 load from myfile.del of del modified by seclabelchar insert into reps
For this example, the input file is assumed to contain security label names for the
security label column:
db2 load from myfile.del of del modified by seclabelname insert into reps
If no identity-related file type modifiers are used, the utility works according to the
following rules:
The load utility does not perform any extra validation of user-supplied identity
values beyond what is normally done for values of the identity column's data type
(that is, SMALLINT, INT, BIGINT, or DECIMAL). Duplicate values are not
reported.
In most cases the load utility cannot guarantee that identity column values are
assigned to rows in the same order that these rows appear in the data file. Because
the assignment of identity column values is managed in parallel by the load utility,
those values are assigned in arbitrary order. The exceptions to this are as follows:
v In single-partition databases, rows are not processed in parallel when
CPU_PARALLELISM is set to 1. In this case, identity column values are
implicitly assigned in the same order that rows appear in the data file
parameter.
v In multi-partition databases, identity column values are assigned in the same
order that the rows appear in the data file if the identity column is in the
distribution key and if there is a single partitioning agent (that is, if you do not
specify multiple partitioning agents or the anyorder file type modifier).
When loading a table in a partitioned database where the table has an identity
column in the partitioning key and the identityoverride modifier is not specified,
the SAVECOUNT option cannot be specified. When there is an identity column in
the partitioning key and identity values are being generated, restarting a load from
the load phase on at least one database partition requires restarting the whole load
from the beginning of the load phase, which means that there cannot be any
consistency points.
Note: A load RESTART operation is not permitted if all of the following criteria
are met:
v The table being loaded is in a partitioned database environment, and it contains
at least one identity column that is either in the distribution key or is referenced
by a generated column that is part of the distribution key.
v The identityoverride modifier is not specified.
v The previous load operation that failed included loading database partitions that
failed after the load phase.
A load TERMINATE or REPLACE operation should be issued instead.
There are three mutually exclusive ways you can simplify the loading of data into
tables that contain an identity column: the identitymissing, the identityignore,
and the identityoverride file type modifiers.
If you want to load TABLE1 with data from a file (load.del) that has been
exported from a table that does not have an identity column, see the following
example:
Robert, 45.2, J
Mike, 76.9, K
Leo, 23.4, I
One way to load this file would be to explicitly list the columns to be loaded
through the LOAD command as follows:
db2 load from load.del of del replace into table1 (c1, c3, c4)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of loading the file is to use the
identitymissing file type modifier as follows:
db2 load from load.del of del modified by identitymissing
replace into table1
This command would result in the three columns in the data file being loaded into
c1, c3, and c4 of TABLE1. A value will be generated for each row in c2.
If the user-supplied values of 1, 2, and 3 are not to be used for the identity column,
you can issue the following LOAD command:
db2 load from load.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
Again, this approach might be cumbersome and prone to error if the table has
many columns. The identityignore modifier simplifies the syntax as follows:
db2 load from load.del of del modified by identityignore
replace into table1
If no generated column-related file type modifiers are used, the load utility works
according to the following rules:
v Values are created for generated columns when the corresponding row of the
data file is missing a value for the column or a NULL value is supplied. If a
non-NULL value is supplied for a generated column, the row is rejected
(SQL3550W).
v If a NULL value is created for a generated column that is not nullable, the entire
row of data is rejected (SQL0407N). This could occur if, for example, a
non-nullable generated column is defined as the sum of two table columns that
include NULL values in the data file.
There are three mutually exclusive ways you can simplify the loading of data into
tables that contain a generated column: the generatedmissing, the
generatedignore, and the generatedoverride file type modifiers.
If you want to load TABLE1 with data from a file (load.del) that has been
exported from a table that does not have any generated columns, see the following
example:
1, 5, J
2, 6, K
3, 7, I
One way to load this file would be to explicitly list the columns to be loaded
through the LOAD command as follows:
DB2 LOAD FROM load.del of del REPLACE INTO table1 (c1, c2, c3)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of loading the file is to use the
generatedmissing file type modifier as follows:
DB2 LOAD FROM load.del of del MODIFIED BY generatedmissing
REPLACE INTO table1
This command will result in the three columns of data file being loaded into c1, c2,
and c3 of TABLE1. Due to the generatedmissing modifier, values for columns g1
and g2 of TABLE1 will be generated automatically and will not map to any of the
data file columns.
The user-supplied, non-NULL values of 10, 11, and 12 (for g1), and 15, 16, and 17
(for g2) result in the row being rejected (SQL3550W) if no generated-column
related file type modifiers are used. To avoid this, the user could issue the
following LOAD command:
DB2 LOAD FROM load.del of del method P(1, 2, 5)
REPLACE INTO table1 (c1, c2, c3)
Again, this approach might be cumbersome and prone to error if the table has
many columns. The generatedignore modifier simplifies the syntax as follows:
DB2 LOAD FROM load.del of del MODIFIED BY generatedignore
REPLACE INTO table1
This command will result in the columns of the data file being loaded into c1 (with the
data 1, 2, 3), c2 (with the data 5,6,7), and c3 (with the data J, K, I) of TABLE1. Due
to the generatedignore modifier, values for columns g1 and g2 of TABLE1 will be
generated automatically and the data file columns (10, 11, 12 and 15, 16, 17) will be
ignored.
When the generatedoverride modifier is used, the table is placed in the Set Integrity
Pending state after the load operation. To take the table out of Set Integrity Pending
state without verifying the user-supplied values, issue the following command:
SET INTEGRITY FOR table-name GENERATED COLUMN IMMEDIATE
UNCHECKED
To take the table out of the Set Integrity Pending state and force verification of the
user-supplied values, issue the following command:
SET INTEGRITY FOR table-name IMMEDIATE CHECKED
Note: There is one case where load does NOT support generating column values:
when one of the generated column expressions contains a user-defined function
that is FENCED. If you attempt to load into such a table, the load operation fails.
However, you can provide your own values for these types of generated columns
by using the generatedoverride file type modifier.
Additionally, you can load data from another database by referencing a nickname
within the SQL query, by using the DATABASE option within the DECLARE
CURSOR statement, or by using the sqlu_remotefetch_entry media entry when
using the API interface.
There are three approaches for moving data using the CURSOR file type. The first
approach uses the Command Line Processor (CLP), the second the API, and the
third uses the ADMIN_CMD procedure. The key differences between the CLP and
the ADMIN_CMD procedure are outlined in the following table.
Table 26. Differences between the CLP and the ADMIN_CMD procedure.
Syntax
v CLP: The query statement, as well as the source database used by the cursor, is
defined outside of the LOAD command, using a DECLARE CURSOR statement.
v ADMIN_CMD procedure: The query statement, as well as the source database used
by the cursor, is defined within the LOAD command, using the LOAD from
(DATABASE database-alias query-statement) syntax.
To execute a LOAD FROM CURSOR operation from the CLP, a cursor must first be
declared against an SQL query. Once this is declared, you can issue the LOAD
command using the declared cursor's name as the cursorname and CURSOR as the
file type.
For example:
1. Suppose a source and target table both reside in the same database with the
following definitions:
Table ABC.TABLE1 has 3 columns:
v ONE INT
v TWO CHAR(10)
v THREE DATE
Table ABC.TABLE2 has 3 columns:
v ONE VARCHAR
v TWO INT
v THREE DATE
Executing the following CLP commands will load all the data from
ABC.TABLE1 into ABC.TABLE2:
DECLARE mycurs CURSOR FOR SELECT TWO, ONE, THREE FROM abc.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2
Note: The above example shows how to load from an SQL query through the
CLP. However, loading from an SQL query can also be accomplished through
the db2Load API. Define the piSourceList of the sqlu_media_list structure to use
the sqlu_statement_entry structure and SQLU_SQL_STMT media type and define
the piFileType value as SQL_CURSOR.
Provided that you have enabled federation and cataloged the data source
('dsdbsource'), you can declare a nickname against the source database, then
declare a cursor against this nickname, and invoke the LOAD command with the
FROM CURSOR option, as demonstrated in the following example:
CREATE NICKNAME myschema1.table1 FOR dsdbsource.abc.table1
DECLARE mycurs CURSOR FOR SELECT TWO,ONE,THREE FROM myschema1.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2
Or, you can use the DATABASE option of the DECLARE CURSOR statement, as
demonstrated in the following example:
DECLARE mycurs CURSOR DATABASE dbsource USER dsciaraf USING mypasswd
FOR SELECT TWO,ONE,THREE FROM abc.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2
Using the DATABASE option of the DECLARE CURSOR statement (also known as
the remotefetch media type when using the Load API) has some benefits over the
nickname approach:
Performance
Fetching of data using the remotefetch media type is tightly integrated within a
load operation. There are fewer layers of transition to fetch a record compared to
the nickname approach. Additionally, when source and target tables are distributed
identically in a multi-partition database, the load utility can parallelize the fetching
of data, which can further improve performance.
Ease of use
The source database needs only to be cataloged; there is no need to enable federation,
define a wrapper, or declare a nickname against the source table. Note that while this
method can be used with cataloged databases, the use of nicknames provides a robust
facility for fetching from various data sources which cannot simply be cataloged.
Restrictions
When loading from a cursor defined using the DATABASE option (or equivalently
when using the sqlu_remotefetch_entry media entry with the db2Load API), the
following restrictions apply:
1. The SOURCEUSEREXIT option cannot be specified concurrently.
2. The METHOD N option is not supported.
3. The usedefaults file type modifier is not supported.
If a table is in incomplete state and the INCREMENTAL option has been specified,
but incremental propagation of the table is not possible, an error is returned. If any
of the following have taken place, the system turns off immediate data propagation
and sets the table state to incomplete:
v A load replace operation has taken place on an underlying table of the staging
table, or the NOT LOGGED INITIALLY WITH EMPTY TABLE option has been
activated after the last integrity check on the underlying table.
v The dependent materialized query table of the staging table, or the staging table
itself, has been loaded in REPLACE or INSERT mode.
v An underlying table has been taken out of Set Integrity Pending state before the
staging table has been propagated by using the FULL ACCESS option during
integrity checking.
v An underlying table of the staging table has been checked for integrity
non-incrementally.
The following example illustrates a load insert operation into the underlying table
UT1 of staging table G1 and its dependent deferred materialized query table AST1.
In this scenario, both the integrity checking for UT1 and the refreshing of AST1 are
processed incrementally:
LOAD FROM IMTFILE1.IXF of IXF INSERT INTO UT1;
LOAD FROM IMTFILE2.IXF of IXF INSERT INTO UT1;
SET INTEGRITY FOR UT1,G1 IMMEDIATE CHECKED;
If the materialized query table has one or more W values in the CONST_CHECKED
column of the SYSCAT.TABLES catalog, and if the NOT INCREMENTAL option is
not specified in the SET INTEGRITY statement, the table is incrementally refreshed.
The following example illustrates a load insert operation into the underlying table
UT1 of the materialized query table AST1. UT1 is checked for data integrity and is
placed in the no data movement mode. UT1 is put back into full access state once
the incremental refresh of AST1 is complete. In this scenario, both the integrity
checking for UT1 and the refreshing of AST1 are processed incrementally.
LOAD FROM IMTFILE1.IXF of IXF INSERT INTO UT1;
LOAD FROM IMTFILE2.IXF of IXF INSERT INTO UT1;
SET INTEGRITY FOR UT1 IMMEDIATE CHECKED;
REFRESH TABLE AST1;
When using the LOAD command with an MDC table, violations of unique
constraints are handled as follows:
v If the table included a unique key prior to the load operation and duplicate
records are loaded into the table, the original record remains and the new
records are deleted during the delete phase.
v If the table did not include a unique key prior to the load operation and both a
unique key and duplicate records are loaded into the table, only one of the
records with the unique key is loaded and the others are deleted during the
delete phase.
Note: There is no explicit technique for determining which record is loaded and
which is deleted.
Performance Considerations
To improve the performance of the load utility when loading MDC tables, the
util_heap_sz database configuration parameter value should be increased. The
mdc-load algorithm performs significantly better when more memory is available
to the utility. This reduces disk I/O during the clustering of data that is performed
during the load phase. Beginning in version 9.5, the value of the DATA BUFFER
option of the LOAD command can temporarily exceed util_heap_sz if more memory
is available in the system.
MDC load operations always have a build phase since all MDC tables have block
indexes.
During the load phase, extra logging for the maintenance of the block map is
performed. There are approximately two extra log records per extent allocated. To
ensure good performance, the logbufsz database configuration parameter should be
set to a value that takes this into account.
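For example, the following commands increase the utility heap and the log buffer
before loading an MDC table. This is only an illustrative sketch; the database name
SAMPLE and the values shown are assumptions that you would adjust for your own
workload:
db2 update db cfg for sample using util_heap_sz 131072
db2 update db cfg for sample using logbufsz 2048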
The purpose of the user exit is to populate one or more named pipes with data
that is simultaneously read from by the load utility. In a multi-partition database,
multiple instances of the user exit can be invoked concurrently to achieve
parallelism of the input data.
As Figure 5 shows, the load utility creates one or more named pipes and spawns
a process to execute your customized executable. Your user exit feeds data into the
named pipe(s) while the load utility simultaneously reads.
Figure 5. The load utility reads from the pipe and processes the incoming data.
The data fed into the pipe must reflect the load options specified, including the file
type and any file type modifiers. The load utility does not directly read the data
files specified. Instead, the data files specified are passed as arguments to your
user exit when it is executed.
The user exit must reside in the bin subdirectory of the DB2 installation directory
(often known as sqllib). The load utility invokes the user exit executable with the
following command line arguments:
<base pipename> <number of source media>
<source media 1> <source media 2> ... <user exit ID>
<number of user exits> <database partition number>
Where:
<base pipename>
Is the base name for named-pipes that the load utility creates and reads
data from. The utility creates one pipe for every source file provided to the
LOAD command, and each of these pipes is appended with .xxx, where
xxx is the index of the source file provided. For example, if there are 2
source files provided to the LOAD command, and the <base pipename>
argument passed to the user exit is pipe123, then the two named pipes that
your user exit should feed with data are pipe123.000 and pipe123.001. In
a partitioned database environment, the load utility appends the database
partition (DBPARTITION) number .yyy to the base pipe name, resulting in
the pipe name pipe123.xxx.yyy.
<number of source media>
Is the number of media arguments which follow.
<source media 1> <source media 2> ...
Is the list of one or more source files specified in the LOAD command.
Each source file is placed inside double quotation marks.
<user exit ID>
Is a special value useful when the PARALLELIZE option is enabled. This
integer value (from 1 to N, where N is the total number of user exits being
spawned) identifies a particular instance of a running user exit. When the
PARALLELIZE option is not enabled, this value defaults to 1.
<number of user exits>
Is a special value useful when the PARALLELIZE option is enabled. This
value represents the total number of concurrently running user exits. When
the PARALLELIZE option is not enabled, this value defaults to 1.
<database partition number>
Is a special value useful when the PARALLELIZE option is enabled. This is
the database partition (DBPARTITION) number on which the user exit is
executing. When the PARALLELIZE option is not enabled, this value
defaults to 0.
A user could pass this information using the INPUT FROM BUFFER
option as shown in the following LOAD command:
LOAD FROM myfile1 OF DEL INSERT INTO table1
SOURCEUSEREXIT myuserexit1 REDIRECT INPUT FROM BUFFER myuseridmypasswd
Note: The load utility limits the size of <buffer> to the maximum size of a
LOB value. However, from within the command line processor (CLP), the
size of <buffer> is restricted to the maximum size of a CLP statement.
From within CLP, it is also recommended that <buffer> contain only
traditional ASCII characters. These issues can be avoided if the load utility
is invoked using the db2Load API, or if the INPUT FROM FILE option is
used instead.
INPUT FROM FILE <filename>
Allows you to pass the contents of a client side file directly into the STDIN
input stream of your user exit. This option is almost identical to the INPUT
FROM BUFFER option; however, it avoids the potential CLP limitation. The
filename must be a fully qualified client side file and must
not be larger than the maximum size of a LOB value.
OUTPUT TO FILE <filename>
Allows you to capture the STDOUT and STDERR streams from your user
exit process into a server side file. After spawning the process which
executes the user exit executable, the load utility redirects the STDOUT
and STDERR handles from this new process into the filename specified.
This option is useful for debugging and logging errors and activity within
your user exit. The filename must be a fully qualified server side file.
When the PARALLELIZE option is enabled, one file exists per user exit
and a three-digit numeric identifier is appended to each file name, for
example, filename.000.
PARALLELIZE
This option can increase the throughput of data coming into the load
utility by invoking multiple user exit processes simultaneously. This option
is only applicable to a multi-partition database. The number of user exit
instances invoked is equal to the number of partitioning agents if data is to
be distributed across multiple database partitions during the load
operation, otherwise it is equal to the number of loading agents.
The <user exit ID>, <number of user exits>, and <database partition number>
arguments passed into each user exit reflect the unique identifier (1 to N), the total
number of user exits (N), and the database partition (DBPARTITION) number on
which the user exit instance is running, respectively. You should ensure that any
data written to the named pipe by each user exit process is not duplicated by the
other concurrent processes. While there are many ways your user exit application
might accomplish this, these values could be helpful to ensure data is not
duplicated. For example, if each record of data contains a unique integer column
value, your user exit application could use the <user exit ID> and <number of user
exits> values to ensure that each instance writes a distinct subset of the records to
its named pipe. For example, with N equal to <number of user exits> and i equal to
<user exit ID> minus 1 (a zero-based instance identifier), each instance could write
only the records that satisfy the following condition:
foreach record
{
    if ((unique-integer MOD N) == i)
    {
        write this record to my named-pipe
    }
}
The number of user exit processes spawned depends on the distribution mode
specified for database partitioning:
1. As Figure 6 on page 155 shows, one user exit process is spawned for every
pre-partitioning agent when PARTITION_AND_LOAD (default) or
PARTITION_ONLY without PARALLEL is specified.
2. As Figure 7 on page 156 shows, one user exit process is spawned for every
partitioning agent when PARTITION_AND_LOAD (default) or
PARTITION_ONLY with PARALLEL is specified.
3. As Figure 8 on page 157 shows, one user exit process is spawned for every load
agent when LOAD_ONLY or LOAD_ONLY_VERIFY_PART is specified.
Restrictions
v The LOAD_ONLY and LOAD_ONLY_VERIFY_PART partitioned-db-cfg mode
options are not supported when the SOURCEUSEREXIT option is specified
without the PARALLELIZE option.
Examples
Example 1: A Load userexit script that replaces all tab characters '\t' with comma
characters ',' from every record of the source media file. To invoke the Load utility
using this userexit script, use a command similar to the following:
DB2 LOAD FROM /path/file1 OF DEL INSERT INTO schema1.table1
SOURCEUSEREXIT example1.pl REDIRECT OUTPUT TO FILE /path/ue_msgs.txt
Note that the userexit must be placed into the sqllib/bin/ folder, and requires
execute permissions.
example1.pl:
#!/bin/perl
# Filename: example1.pl
#
# This script is a simple example of a userexit for the Load utility
# SOURCEUSEREXIT feature. This script will replace all tab characters '\t'
# with comma characters ',' from every record of the source media file.
# Open the output fifo file (the Load utility is reading from this pipe)
#-----------------------------------------------------------------------
$basePipeName = $ARGV[0];
$outputPipeName = sprintf("%s.000", $basePipeName);
open(PIPETOLOAD, '>', $outputPipeName) || die "Could not open $outputPipeName";
# Get the number of media files passed as the second argument
#-------------------------------------------------------------
$NumMediaFiles = $ARGV[1];
# Open each media file, read the contents, replace '\t' with ',', send to Load
#-----------------------------------------------------------------------------
for ($i=0; $i<$NumMediaFiles; $i++)
{
  # Open the media file
  #--------------------
  $mediaFileName = $ARGV[2+$i];
  open(MEDIAFILETOREAD, '<', $mediaFileName) || die "Could not open $mediaFileName";
  # Read each record, replace tab characters with commas, write to the pipe
  #-------------------------------------------------------------------------
  while ($line = <MEDIAFILETOREAD>)
  {
    $line =~ s/\t/,/g;
    print PIPETOLOAD $line;
  }
  # Close the media file
  #---------------------
  close MEDIAFILETOREAD;
}
# Close the pipe and exit
#------------------------
close PIPETOLOAD;
exit 0;
There are several ways in which parallel processing of large amounts of data can
take place using the load utility. One way is through the use of multiple storage
devices, which allows for I/O parallelism during the load operation (see Figure 9).
Another way involves the use of multiple processors in an SMP environment,
which allows for intra-partition parallelism (see Figure 10). Both can be used
together to provide even faster loading of data.
Load operations that specify the ALLOW READ ACCESS option require special
consideration in terms of space usage and logging depending on the type of
indexing mode chosen. When the ALLOW READ ACCESS option is specified, the
load utility keeps indexes available for queries even while they are being rebuilt.
When a load operation in ALLOW READ ACCESS mode specifies the INDEXING
MODE INCREMENTAL option, the load utility writes some log records that
protect the integrity of the index tree. The number of log records written is a
fraction of the number of inserted keys and is a number considerably less than
would be needed by a similar SQL insert operation. A load operation in ALLOW
NO ACCESS mode with the INDEXING MODE INCREMENTAL option specified
writes only a small log record beyond the normal space allocation logs.
Note: This is only true if you did not specify COPY YES and have the
logindexbuild configuration parameter set to ON.
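For example, a command like the following keeps the existing indexes available to
readers while extending them incrementally. This is only an illustrative sketch; the
file, schema, and table names are assumptions:
db2 load from newdata.del of del insert into schema1.table1
    indexing mode incremental allow read access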
When a load operation in ALLOW READ ACCESS mode specifies the INDEXING
MODE REBUILD option, new indexes are built as a shadow either in the same table
space as the original index or in a system temporary table space. The original
indexes remain intact and are available during the load operation and are only
replaced by the new indexes at the end of the load operation while the table is
exclusively locked. If the load operation fails and the transaction is rolled back, the
original indexes remain intact.
By default, the shadow index is built in the same table space as the original index.
Since both the original index and the new index are maintained simultaneously,
there must be sufficient table space to hold both indexes at the same time. If the
load operation is aborted, the extra space used to build the new index is released.
If the load operation commits, the space used for the original index is released and
the new index becomes the current index. When the new indexes are built in the
same table space as the original indexes, replacing the original indexes takes place
almost instantaneously.
If the indexes are built within an SMS table space, you can see index files in the
table space directory with the .IN1 suffix and the .INX suffix. These suffixes do not
indicate which is the original index and which is the shadow index. However, if
the indexes are built in a DMS table space, you cannot see the new shadow index.
A load restart operation can use an alternate table space for building an index,
even if the original load operation did not use an alternate table space. A load
restart operation cannot be issued in ALLOW READ ACCESS mode if the original
load operation was not issued in ALLOW READ ACCESS mode. Load terminate
operations do not rebuild indexes, so the USE <tablespace-name> option is ignored.
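For example, a rebuild of the shadow indexes in an alternate table space might be
requested with a command like the following. This is only an illustrative sketch;
the file, schema, table, and table space names are assumptions:
db2 load from newdata.del of del insert into schema1.table1
    indexing mode rebuild allow read access use tempspace1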
During the build phase of the load operation, the indexes are built in the system
temporary table space. Then, during the index copy phase, the index is copied
from the system temporary table space to the original index table space. To make
sure that there is sufficient space in the original index table space for the new
index, space is allocated in the original table space during the build phase. So, if
the load operation runs out of index space, it will do so during the build phase. If
this happens, the original index is not lost.
The index copy phase occurs after the build and delete phases. Before the index
copy phase begins, the table is locked exclusively. That is, it is unavailable for read
access throughout the index copy phase. Since the index copy phase is a physical
copy, the table might be unavailable for a significant amount of time.
Note: If either the system temporary table space or the index table space are DMS
table spaces, the read from the system temporary table space can cause random
I/O on the system temporary table space and can cause a delay. The write to the
index table space is still optimized and the DISK_PARALLELISM values are used.
Figure 11. Increasing load performance through concurrent indexing and statistics collection. Tables are normally built
in three steps: data loading, index building, and statistics collection. This causes multiple data I/O during the load
operation, during index creation (there can be several indexes for each table), and during statistics collection (which
causes I/O on the table data and on all of the indexes). A much faster alternative is to let the load utility complete all of
these tasks in one pass through the data.
At certain times, deferring index creation and invoking the CREATE INDEX
statement can improve performance. Sorting during an index rebuild uses up to
sortheap pages of memory; if more space is required, the TEMP buffer pool is used
and the sort eventually spills to disk. If the load operation spills in this way and
performance decreases, it might be advisable to run LOAD with INDEXING MODE
DEFERRED and re-create the indexes later. CREATE INDEX creates one index at a
time, which reduces memory usage but scans the table multiple times to collect
keys.
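As an illustrative sketch of the deferred approach (the file, schema, table, index, and
column names are assumptions), index maintenance is deferred during the load and
an index is created afterwards with a separate statement:
db2 load from bigfile.del of del insert into schema1.table1 indexing mode deferred
db2 create index idx1 on schema1.table1 (col1)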
If a load operation returns the error message SQL20305N (sqlcode -20305), this
indicates that one or more XML node values could not be indexed. The error
message will output the reason code for the error. Enter ? SQL20305N in the
command line processor to look up the explanation and user response for the
corresponding reason code.
For indexing problems during load operations, however, the generated XQuery
statements are not output to the db2diag.log logfile. To generate these XQuery
statements the import utility must be run on the failing rows that were not loaded.
Because the rejected rows do not exist in the table, the XQuery statements cannot
be run on the failing documents. To solve this problem, a new table with the same
definition must be created without any indexes. The failing rows can then be
loaded into the new table, and the XQuery statements can then be run on the new
table to locate the failing XML node values within the documents.
Procedure
1. Determine which rows were rejected during the load operation using the record
numbers in the output information.
2. Create a .del file containing only the rejected rows.
3. Create a new table (for example, T2) with the same columns as the original
table (T1). Do not create any indexes on the new table.
4. Load the rejected rows into the new table T2.
5. For each rejected row in the original table T1:
a. Import the rejected rows to T1 to get the SQL20305N message. The import
will stop on the first error that it encounters.
b. Look in the db2diag.log logfile and get the generated XQuery statement. To
find the failing node values in the input document, search for the string
'SQL20305N' in the db2diag.log logfile and match the reason code number.
Following the reason code, you will find a set of instructions and then a
generated XQuery statement that you can use to locate the problem value in
the document that caused the error.
c. Modify the XQuery statement to use the new table T2.
d. Run the XQuery statement on T2 to locate the problem value in the
document.
e. Fix the problem value in the .xml file containing the document.
f. Return to Step a and import the rejected rows to T1 again. The row that
caused the import to stop should now be inserted successfully. If there is
another rejected row in the .del file, the import utility will stop on the next
error and output another SQL20305N message. Continue these steps until
the import runs successfully.
Example
In the following example, the index BirthdateIndex has been created on the date
data type. The REJECT INVALID VALUES option is specified, so the XML pattern
values for /Person/Confidential/Birthdate must all be valid for the date data type. If
any XML pattern value cannot be cast to this data type, an error is returned.
Using the XML documents below, five rows are supposed to be loaded but the first
and the fourth rows will be rejected because the Birthdate values cannot be
indexed. In the file person1.xml, the value March 16, 2002 is not in the correct date
format. In the file person4.xml, the value 20000-12-09 has an extra zero in the year.
To resolve the indexing errors that would occur when you attempt to load the set
of XML files above, you would perform the following steps:
1. Determine which rows were rejected during the load operation using the record
numbers in the output information. In the following output, record number 1
and record number 4 were rejected.
SQL20305N An XML value cannot be inserted or updated because of an error
detected when inserting or updating the index identified by "IID = 3" on table
"LEECM.T1". Reason code = "5". For reason codes related to an XML schema the
XML schema identifier = "*N" and XML schema data type = "*N". SQLSTATE=23525
SQL3185W The previous error occurred while processing data from row "F0-1" of
the input file.
SQL3185W The previous error occurred while processing data from row "F0-4" of
the input file.
SQL3306N An SQL error "-20305" occurred while inserting a row into the
table.
SQL3110N The utility has completed processing. "1" rows were read from
the input file.
b. Look in the db2diag.log logfile and get the generated XQuery statement.
FUNCTION: DB2 UDB, Xml Storage and Index Manager, xmlsDumpXQuery, probe:608
DATA #1 : String, 36 bytes
SQL Code: SQL20305N ; Reason Code: 5
DATA #2 : String, 265 bytes
To locate the value in the document that caused the error, create a
table with one XML column and insert the failing document in the table.
Replace the table and column name in the query below with the created
table and column name and execute the following XQuery.
DATA #3 : String, 247 bytes
xquery for $i in db2-fn:xmlcolumn(
"LEECM.T1.XMLDOC")[/*:Person/*:Confidential/*:Birthdate="March 16, 2002"]
return
<Result>
<ProblemDocument> {$i} </ProblemDocument>
<ProblemValue>{$i/*:Person/*:Confidential/*:Birthdate/..} </ProblemValue>
</Result>;
c. Modify the XQuery statement to use the new table T2.
xquery for $i in db2-fn:xmlcolumn(
"LEECM.T2.XMLDOC")[/*:Person/*:Confidential/*:Birthdate="March 16, 2002"]
return
<Result>
<ProblemDocument> {$i} </ProblemDocument>
<ProblemValue>{$i/*:Person/*:Confidential/*:Birthdate/..} </ProblemValue>
</Result>;
d. Run the XQuery statement on table T2 to locate the problem value in the
document.
<Result><ProblemDocument><Person gender="Male">
<Name>
<Last>Cool</Last>
SQL3306N An SQL error "-20305" occurred while inserting a row into the
table.
SQL3110N The utility has completed processing. "2" rows were read from
the input file.
b. Look in the db2diag.log logfile and get the generated XQuery statement.
FUNCTION: DB2 UDB, Xml Storage and Index Manager, xmlsDumpXQuery, probe:608
DATA #1 : String, 36 bytes
SQL Code: SQL20305N ; Reason Code: 4
DATA #2 : String, 265 bytes
To locate the value in the document that caused the error, create a
table with one XML column and insert the failing document in the table.
Replace the table and column name in the query below with the created
table and column name and execute the following XQuery.
DATA #3 : String, 244 bytes
xquery for $i in db2-fn:xmlcolumn("LEECM.T1.XMLDOC")
[/*:Person/*:Confidential/*:Birthdate="20000-12-09"]
return
SQL3110N The utility has completed processing. "2" rows were read from
the input file.
SQL3149N "2" rows were processed from the input file. "2" rows were
successfully inserted into the table. "0" rows were rejected.
When ADC occurs on range partitioned tables, each partition is treated like an
individual table. There will not be any cross-partition dictionaries. ADC does not
occur on partitions already containing dictionaries, and the dictionary generated
for each partition is based only on the preexisting data (and, if necessary, the
loaded data) in that partition. In Version 9.5 Fix Pack 5 and later, if the preexisting
data in a partition is less than the minimum threshold, the dictionary is generated
based only on the loaded data.
Note: If you issue a LOAD TERMINATE operation after a LOAD REPLACE with
the RESETDICTIONARY option, an existing compression dictionary will be
deleted and not replaced.
Performance impact
ADC affects the performance of a load operation as a result of:
v The initial scan of table data
For LOAD INSERT operations in DB2 Version 9.5 Fix Pack 4 and earlier, all of
the preexisting table data, not just the minimum threshold for ADC, is scanned
prior to building the compression dictionary. Therefore, the time used for this
scan increases with table size.
v The additional processing to build the compression dictionary
The time actually used for building the dictionary is minimal. Moreover, once
the dictionary has been built, ADC is turned off, by default.
Command parameters
The load utility attempts to deliver the best performance possible by determining
optimal values for DISK_PARALLELISM, CPU_PARALLELISM, and DATA
BUFFER, if these parameters have not been specified by the user. Optimization is
done based on the size and the free space available in the utility heap. Consider
using the autonomic DISK_PARALLELISM and CPU_PARALLELISM settings
before attempting to tune these parameters for your particular needs.
Figure 12. Record Order in the Source Data is Preserved When the Number of Processes Running Per Database
Partition is Exploited During a Load Operation
DATA BUFFER
The DATA BUFFER parameter specifies the total amount of memory, in 4
KB units, allocated to the load utility as a buffer. It is recommended that
this buffer be several extents in size. The data buffer is allocated from the
utility heap; however, the data buffer can exceed the setting for the
util_heap_sz database configuration parameter as long as there is available
memory in the system.
DISK_PARALLELISM
The DISK_PARALLELISM parameter specifies the number of processes or
threads used by the load utility to write data records to disk. Use this
parameter to exploit available containers when loading data, and
significantly improve load performance. The maximum number allowed is
the greater of four times the CPU_PARALLELISM value (actually used by
the load utility), or 50. By default, DISK_PARALLELISM is equal to the
sum of the table space containers on all table spaces containing objects for
the table being loaded, except where this value exceeds the maximum
number allowed.
NONRECOVERABLE
If forward recovery is enabled, use this parameter if you do not need to be
able to recover load transactions against the table by rolling forward
through the load operation.
ANYORDER
By default, the load utility preserves the record order of the source data. When the
load utility operates in an SMP environment, synchronization between the parallel
processes is required to ensure that this order is preserved.
In an SMP environment, specifying the anyorder file type modifier instructs the
load utility to not preserve the order, which improves efficiency by avoiding the
synchronization necessary to preserve that order. However, if the data to be loaded
is presorted, anyorder might corrupt the presorted order, and the benefits of
presorting are lost for subsequent queries.
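For example, the following command loads data without preserving the source
order. This is only an illustrative sketch; the file, schema, and table names are
assumptions:
db2 load from bigfile.del of del modified by anyorder insert into schema1.table1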
Note:
v When the binarynumerics, packeddecimal, or zoneddecimal file type modifiers
are specified, numeric data is interpreted in big-endian (high byte first) format,
regardless of platform.
v The packeddecimal and zoneddecimal file type modifiers are mutually exclusive.
v The packeddecimal and zoneddecimal file type modifiers only apply to the
decimal target columns, and the binary data must match the target column
definitions.
v The reclen file type modifier must be specified when the binarynumerics,
packeddecimal, or zoneddecimal file type modifiers are specified.
NOROWWARNINGS
During a load operation, warning messages about rejected rows are written to a
specified file. However, if the load utility has to process a large volume of rejected,
invalid, or truncated records, load performance can be adversely affected. In cases
where many warnings are anticipated, it is useful to use the norowwarnings file
type modifier to suppress the recording of these warnings.
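The following command sketch combines the norowwarnings modifier with explicit
buffer and parallelism settings. It is illustrative only; the file, schema, and table
names and the parameter values are assumptions that depend on your system:
db2 load from bigfile.del of del modified by norowwarnings
    insert into schema1.table1
    data buffer 4096 cpu_parallelism 4 disk_parallelism 8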
If the loaded table has constraints as well as descendent foreign key tables,
dependent materialized query tables and dependent staging tables, and if all of the
tables are in normal state prior to the load operation, the following will result
based on the load parameters specified:
INSERT, ALLOW READ ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The loaded table, its dependent materialized query tables and dependent
staging tables are placed in set integrity pending state with read access.
INSERT, ALLOW READ ACCESS, and SET INTEGRITY PENDING CASCADE
DEFERRED
Only the loaded table is placed in set integrity pending with read access.
Descendent foreign key tables, descendent materialized query tables and
descendent staging tables remain in their original states.
INSERT, ALLOW NO ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The loaded table, its dependent materialized query tables and dependent
staging tables are placed in set integrity pending state with no access.
INSERT or REPLACE, ALLOW NO ACCESS, and SET INTEGRITY PENDING
CASCADE DEFERRED
Only the loaded table is placed in set integrity pending state with no
access. Descendent foreign key tables, descendent immediate materialized
query tables and descendent immediate staging tables remain in their
original states.
REPLACE, ALLOW NO ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The table and all its descendent foreign key tables, descendent immediate
materialized query tables, and descendent immediate staging tables are
placed in set integrity pending state with no access.
Note: Specifying the ALLOW READ ACCESS option in a load replace operation
results in an error.
To remove the set integrity pending state, use the SET INTEGRITY statement. The
SET INTEGRITY statement checks a table for constraints violations, and takes the
table out of set integrity pending state. If all the load operations are performed in
INSERT mode, the SET INTEGRITY statement can be used to incrementally process
the constraints (that is, it checks only the appended portion of the table for
constraints violations). For example:
db2 load from infile1.ixf of ixf insert into table1
db2 set integrity for table1 immediate checked
You can override the No Data Movement state by specifying the FULL ACCESS
option when you issue the SET INTEGRITY statement. The table is fully accessible;
however, a full recomputation of the dependent materialized query tables takes
place in subsequent REFRESH TABLE statements and the dependent staging tables
are forced into an incomplete state.
If the ALLOW READ ACCESS option is specified for a load operation, the table
remains in read access state until the SET INTEGRITY statement is used to check
for constraints violations. Applications can query the table for data that existed
prior to the load operation once it has been committed, but will not be able to
view the newly loaded data until the SET INTEGRITY statement is issued.
Several load operations can take place on a table before checking for constraints
violations. If all of the load operations are completed in ALLOW READ ACCESS
mode, only the data that existed in the table prior to the first load operation is
available for queries.
When issuing the SET INTEGRITY statement, you can specify the INCREMENTAL
option to explicitly request incremental processing. In most cases, this option is not
needed, because the DB2 database manager selects incremental processing. If incremental
processing is not possible, full processing is used automatically. When the
INCREMENTAL option is specified, but incremental processing is not possible, an
error is returned if:
v New constraints are added to the table while it is in set integrity pending state.
v A load replace operation takes place, or the NOT LOGGED INITIALLY WITH
EMPTY TABLE option is activated, after the last integrity check on the table.
v A parent table is load replaced or checked for integrity non-incrementally.
v The table is in set integrity pending state before migration. Full processing is
required the first time the table is checked for integrity after migration.
v The table space containing the table or its parent is rolled forward to a point in
time and the table and its parent reside in different table spaces.
The SET INTEGRITY statement does not activate any DELETE triggers as a result
of deleting rows that violate constraints, but once the table is removed from set
integrity pending state, triggers are active. Thus, if you correct data and insert
rows from the exception table into the loaded table, any INSERT triggers defined
on the table are activated. The implications of this should be considered. One
option is to drop the INSERT trigger, insert rows from the exception table, and
then recreate the INSERT trigger.
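As an illustrative sketch of that sequence (the trigger, table, and exception table
names and the column list are assumptions, and the exception table's additional
diagnostic columns are deliberately omitted from the SELECT):
drop trigger insert_trig
insert into table1 (col1, col2, col3)
    select col1, col2, col3 from table1_exceptions
After the rows are inserted, the INSERT trigger is re-created with its original
CREATE TRIGGER definition.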
The load operation causes a table to be put into Set Integrity Pending state
automatically if the table has constraints defined on it or if it has dependent
foreign key tables, dependent materialized query tables, or dependent staging tables.
Note: If you turn off constraint checking for a parent table and specify that
you want to cascade the changes to foreign key tables, the foreign key
constraints of all of its descendent foreign key tables are also turned off. If
you turn off constraint checking for an underlying table and specify that you
want to cascade the check pending state to materialized query tables, the
refresh immediate properties of all its dependent materialized query tables
are also turned off. If you turn off constraint checking for an underlying table
and specify that you want to cascade the Set Integrity Pending state to
staging tables, the propagate immediate properties of all its dependent
staging tables are also turned off.
To check for constraint violations using the command line, use the SET
INTEGRITY statement.
Troubleshooting tip
Symptom
You receive the following error message when you try to turn on
constraints checking, immediate refresh, or immediate propagation for a
table:
DB2 Message
Cannot check a dependent table TABLE1 using the SET
INTEGRITY statement while the parent table or underlying table
TABLE2 is in the Set Integrity Pending state or if it will be put into
the Set Integrity Pending state by the SET INTEGRITY statement.
Where TABLE1 is the table for which you are trying to turn on
constraints checking, immediate refresh, or immediate propagation
and it is dependent on TABLE2.
Possible cause
Constraint checking, immediate refresh, or immediate propagation cannot
be turned on for a table that has a parent or underlying table in Set
Integrity Pending.
Action
Bring the parent or underlying table out of Set Integrity Pending by
turning on constraint checking for the table. Begin with the table identified
as the parent or underlying table in the DB2 message. If that table is itself
dependent on another table that is in Set Integrity Pending state, bring that
table out of Set Integrity Pending state first, starting at the top of the
dependency chain.
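As an illustrative sketch (using the TABLE1 and TABLE2 names from the
message above, and assuming no further dependencies), the parent or
underlying table is checked first and the dependent table afterwards:
db2 set integrity for TABLE2 immediate checked
db2 set integrity for TABLE1 immediate checked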
Before a load operation in ALLOW READ ACCESS mode begins, the load utility
waits for all applications that began before the load operation to release their locks
on the target table. At the beginning of the load operation, the load utility acquires
an update lock (U-lock) on the table. It holds this lock until the data is being
committed. When the load utility acquires the U-lock on the table, it waits for all
applications that hold locks on the table prior to the start of the load operation to
release them, even if they have compatible locks. This is achieved by temporarily
upgrading the U-lock to a Z-lock which does not conflict with new table lock
requests on the target table as long as the requested locks are compatible with the
load operation's U-lock. When data is being committed, the load utility upgrades
the lock to a Z-lock, so there can be some delay in commit time while the load
utility waits for applications with conflicting locks to finish.
Note: The load operation can time out while it waits for the applications to release
their locks on the table prior to loading. However, the load operation does not time
out while waiting for the Z-lock needed to commit the data.
Applications holding conflicting locks on the system catalog tables are not forced
off by the load utility. If an application is forced off the system by the load utility,
the application loses its database connection, and an error is returned (SQL1224N).
When you specify the COPY NO option for a load operation on a recoverable
database, all objects in the target table space are locked in share mode before the
table space is placed in the Backup Pending state. This occurs regardless of the
access mode. If you specify the LOCK WITH FORCE option, all applications
holding locks on objects in the table space that conflict with a share lock are forced
off.
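For example, a command like the following forces off applications that hold
conflicting locks so that the load operation can proceed. This is only an
illustrative sketch; the file and table names are assumptions:
db2 load from staff.del of del insert into staff copy no allow read access lock with force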
The ALLOW NO ACCESS option is the default behavior. The ALLOW READ
ACCESS option prevents all write access to the table by other applications, but
allows read access to preexisting data. This section deals with the ALLOW READ
ACCESS option.
Table data and index data that exist prior to the start of a load operation are visible
to queries while the load operation is in progress. Consider the following example:
1. Create a table with one integer column:
create table ED (ed int)
2. Load three rows:
load from File1 of del insert into ED
...
Number of rows read = 3
Number of rows skipped = 0
Number of rows loaded = 3
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 3
3. Query the table:
select * from ED
ED
-----------
1
2
3
3 record(s) selected.
4. Perform a load operation with the ALLOW READ ACCESS option specified
and load two more rows of data:
load from File2 of del insert into ED allow read access
5. At the same time, on another connection query the table while the load
operation is in progress:
select * from ED
ED
-----------
1
2
3
3 record(s) selected.
6. Wait for the load operation to finish and then query the table:
select * from ED
ED
-----------
1
2
3
4
5
5 record(s) selected.
The ALLOW READ ACCESS option is very useful when loading large amounts of
data because it gives users access to table data at all times, even when the load
operation is in progress or after a load operation has failed. The behavior of a load
operation in ALLOW READ ACCESS mode is independent of the isolation level of
the application. That is, readers with any isolation level can always read the
preexisting data, but they are not able to read the newly loaded data until the
load operation has finished.
Read access is provided throughout the load operation except for two instances: at
the beginning and at the end of the operation.
Firstly, the load operation acquires a special Z-lock for a short duration of time
near the end of its setup phase. If an application holds an incompatible lock on the
table prior to the load operation requesting this special Z-lock, then the load
operation waits a finite amount of time for this incompatible lock to be released
before timing out and failing. The amount of time is determined by the locktimeout
database configuration parameter. If the LOCK WITH FORCE option is specified
then the load operation forces other applications off to avoid timing out. The load
operation acquires the special Z-lock, commits the phase, releases the lock, and
then continues onto the load phase. Any application that requests a lock on the
table for reading after the start of the load operation in ALLOW READ ACCESS
mode is granted the lock, and it does not conflict with this special Z-lock. New
applications attempting to read existing data from the target table are able to do
so.
Secondly, before data is committed at the end of the load operation, the load utility
acquires an exclusive lock (Z-lock) on the table. The load utility waits until all
applications that hold locks on the table release them. This can cause a delay
before the data is committed. The LOCK WITH FORCE option is used to force off
conflicting applications, and allow the load operation to proceed without having to
wait. Usually, a load operation in ALLOW READ ACCESS mode acquires an
exclusive lock for a short amount of time; however, if the USE <tablespace-name>
option is specified, the exclusive lock lasts for the entire period of the index copy
phase.
When the load utility is running against a table defined on multiple database
partitions, the load process model executes on each individual database partition,
meaning that locks are acquired and released independently of other db-partitions.
Thus, if a query or other operation is executed concurrently and is competing for
the same locks, there is a chance for deadlocks. For example, suppose that
operation A is granted a table lock on db-partition 0 and the load operation is
granted a table lock on db-partition 1. A deadlock can occur because operation A is
waiting to be granted a table lock on db-partition 1, while the load operation is
waiting for a table lock on db-partition 0. In this case, the deadlock detector will
arbitrarily roll back one of the operations.
Note:
1. If a load operation is interrupted or fails, it remains at the same access level
that was specified when the load operation was issued. That is, if a load
operation in ALLOW NO ACCESS mode fails, the table data is inaccessible
until a load terminate or a load restart operation is issued.
2. Generally, if table data is taken offline, read access is not available during a
load operation until the table data is back online.
The load utility does not quiesce (put persistent locks on) the table spaces involved
in the load operation and uses table space states only for load operations for which
you specify the COPY NO parameter.
You can check table space states by using the LIST TABLESPACES command. Table
spaces can be in multiple states simultaneously. The states returned by LIST
TABLESPACES are as follows:
Normal
The Normal state is the initial state of a table space after it is created, indicating
that no (abnormal) states currently affect it.
Load in Progress
The Load in Progress state indicates that there is a load in progress on the table
space. This state prevents the backup of dependent tables during the load. The
table space state is distinct from the Load in Progress table state (which is used in
all load operations) because the load utility places table spaces in the Load in
Progress state only when you specify the COPY NO parameter for a recoverable
database. The table spaces remain in this state for the duration of the load
operation.
Restore Pending
If you perform a successful load operation with the COPY NO option, restore the
database, and then roll forward through that operation, the associated table spaces
are placed in the Restore Pending state. To remove the table spaces from the
Restore Pending state, you must perform a restore operation.
and you open another session and issue the following commands,
connect to sample;
list tablespaces;
connect reset;
USERSPACE1 (the default table space for the sample database) is in the Load in
Progress state and, after the first commit, the Backup Pending state as well. After
the load operation finishes, the LIST TABLESPACES command reveals that
USERSPACE1 is now in the Backup Pending state:
Tablespace ID = 2
Name = USERSPACE1
Type = Database managed space
Contents = All permanent data. Large table space.
State = 0x0020
Detailed explanation:
Backup pending
To determine the state of a table, issue the LOAD QUERY command, which also
checks the status of a load operation. Tables can be in a number of states
simultaneously. The states returned by LOAD QUERY are as follows:
Normal State
The Normal state is the initial state of a table after it is created, indicating that no
(abnormal) states currently affect the table.
Load in Progress
The Load in Progress table state indicates that there is a load in progress on the
table. The load utility removes this transient state after the load is successfully
completed. However, if the load operation fails or is interrupted, the table state
will change to Load Pending.
Redistribute in Progress
The Redistribute in Progress table state indicates that there is a redistribute in
progress on the table. The redistribute utility removes this transient state after it
has successfully completed processing the table. However, if the redistribute
operation fails or is interrupted, the table state will change to Redistribute Pending.
Load Pending
The Load Pending table state indicates that a load operation failed or was
interrupted. You can take one of the following steps to remove the Load Pending
state:
v Address the cause of the failure. For example, if the load utility ran out of disk
space, add containers to the table space. Then, restart the load operation.
v Terminate the load operation, as shown in the example following this list.
v Run a load REPLACE operation against the same table on which the load
operation failed.
v Recover table spaces for the loading table by using the RESTORE DATABASE
command with the most recent table space or database backup, then carry out
further recovery actions.
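For example, to terminate a failed load operation against the same target table, you
might issue a command like the following. This is only an illustrative sketch; the
file and table names are assumptions:
db2 load from mydata.del of del terminate into mytable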
Redistribute Pending
The Redistribute Pending table state indicates that a redistribute operation failed or
was interrupted. You can perform a REDISTRIBUTE CONTINUE or
REDISTRIBUTE ABORT operation to remove the Redistribute Pending state.
Type-1 Indexes
The Type-1 Indexes state indicates that the table currently uses type-1 indexes.
These indexes can be converted to type-2 indexes by using the CONVERT option of
the REORG INDEXES command.
Unavailable
Rolling forward through an unrecoverable load operation places a table in the
Unavailable state. In this state, the table is unavailable; you must drop it or restore
it from a backup.
If you load an input file (staffdata.del) with a substantial amount of data into a
table NEWSTAFF, as follows: