IBM Informix Guide to SQL: Syntax
Version 10.0/8.5
G251-2284-02
Note: Before using this information and the product it supports, read the information in Notices on page D-1.
Third Edition (December 2005)
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1996, 2005. All rights reserved.
US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
In This Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Types of Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Software Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Assumptions About Your Locale . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Demonstration Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x New Features in Dynamic Server, Version 10.0 . . . . . . . . . . . . . . . . . . . . . . . . xi 3 New Features in Dynamic Server, Version 10.00.xC3 . . . . . . . . . . . . . . . . . . . . . . xii 4 New Features in Dynamic Server, Version 10.00.xC4 . . . . . . . . . . . . . . . . . . . . . xiii New Features in Extended Parallel Server, Version 8.5 . . . . . . . . . . . . . . . . . . . . . xiii Documentation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Typographical Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Feature, Product, and Platform Markup . . . . . . . . . . . . . . . . . . . . . . . . . xiv Syntax Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv Example Code Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii Additional Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix IBM Informix Information Center . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Installation Guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Online Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Informix Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Online Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii IBM Informix Dynamic Server Version 10.0 and CSDK Version 2.90 Documentation Set . . . . . . . . . xxii Compliance with Industry Standards . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv IBM Welcomes Your Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
In This Chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3 ALLOCATE COLLECTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4 ALLOCATE DESCRIPTOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6 ALLOCATE ROW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8 ALTER ACCESS_METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9 ALTER FRAGMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11 ALTER FUNCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31 ALTER INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33 ALTER PROCEDURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35 ALTER ROUTINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37 ALTER SEQUENCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-40 ALTER TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43 BEGIN WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-66 CLOSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-68 CLOSE DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-71 COMMIT WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-73 CONNECT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-75 CREATE ACCESS_METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-82 CREATE AGGREGATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-84 CREATE CAST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-87 CREATE DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-90 CREATE DISTINCT TYPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-93 CREATE DUPLICATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-96 CREATE EXTERNAL TABLE (XPS) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-98 CREATE FUNCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-107 CREATE FUNCTION FROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-114 CREATE INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-116 CREATE OPAQUE TYPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-136 CREATE OPCLASS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-141 CREATE PROCEDURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-145 CREATE PROCEDURE FROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-153 CREATE ROLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-155 CREATE ROUTINE FROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-157 CREATE ROW TYPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-158 CREATE SCHEMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-162 CREATE SCRATCH TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-164 CREATE SEQUENCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-165 CREATE SYNONYM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-168 CREATE TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-171 CREATE TEMP TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-208 CREATE Temporary TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-209 CREATE TRIGGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-216 CREATE VIEW . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-247 3 CREATE XADATASOURCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-252 3 CREATE XADATASOURCE TYPE . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-253 DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-255 DEALLOCATE COLLECTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-257 DEALLOCATE DESCRIPTOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-258 DEALLOCATE ROW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-259 DECLARE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-260 DELETE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-275 DESCRIBE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-281 DESCRIBE INPUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-286 DISCONNECT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-291 DROP ACCESS_METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-294 DROP AGGREGATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-295 DROP CAST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-296 DROP DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-297 DROP DUPLICATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-299 DROP FUNCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-300 DROP INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-302
DROP OPCLASS . . . 2-304 DROP PROCEDURE . . . 2-305 DROP ROLE . . . 2-307 DROP ROUTINE . . . 2-308 DROP ROW TYPE . . . 2-310 DROP SEQUENCE . . . 2-312 DROP SYNONYM . . . 2-313 DROP TABLE . . . 2-314 DROP TRIGGER . . . 2-316 DROP TYPE . . . 2-317 DROP VIEW . . . 2-318 DROP XADATASOURCE . . . 2-319 DROP XADATASOURCE TYPE . . . 2-320 EXECUTE . . . 2-321 EXECUTE FUNCTION . . . 2-329 EXECUTE IMMEDIATE . . . 2-334 EXECUTE PROCEDURE . . . 2-336 FETCH . . . 2-344 FLUSH . . . 2-353 FREE . . . 2-355 GET DESCRIPTOR . . . 2-357 GET DIAGNOSTICS . . . 2-362 GRANT . . . 2-371 GRANT FRAGMENT . . . 2-388 INFO . . . 2-393 INSERT . . . 2-395 LOAD . . . 2-407 LOCK TABLE . . . 2-413 MERGE . . . 2-416 MOVE TABLE . . . 2-419 OPEN . . . 2-424 OUTPUT . . . 2-431 PREPARE . . . 2-433 PUT . . . 2-442 RENAME COLUMN . . . 2-449 RENAME DATABASE . . . 2-451 RENAME INDEX . . . 2-452 RENAME SEQUENCE . . . 2-453 RENAME TABLE . . . 2-454 REVOKE . . . 2-456 REVOKE FRAGMENT . . . 2-471 ROLLBACK WORK . . . 2-474 SAVE EXTERNAL DIRECTIVES . . . 2-476 SELECT . . . 2-479 SET ALL_MUTABLES . . . 2-527 SET AUTOFREE . . . 2-529 SET COLLATION . . . 2-531 SET CONNECTION . . . 2-534 SET CONSTRAINTS . . . 2-538 SET Database Object Mode . . . 2-539 SET DATASKIP . . . 2-545 SET DEBUG FILE . . . 2-547 SET Default Table Space . . . 2-549 SET Default Table Type . . . 2-550 SET DEFERRED_PREPARE . . . 2-552 SET DESCRIPTOR . . . 2-554 SET ENCRYPTION PASSWORD . . . 2-560 SET ENVIRONMENT . . . 2-563 SET EXPLAIN . . . 2-568 SET INDEX . . . 2-573 SET INDEXES . . . 2-574
SET ISOLATION . . . 2-575 SET LOCK MODE . . . 2-580 SET LOG . . . 2-582 SET OPTIMIZATION . . . 2-584 SET PDQPRIORITY . . . 2-586 SET PLOAD FILE . . . 2-589 SET Residency . . . 2-590 SET ROLE . . . 2-592 SET SCHEDULE LEVEL . . . 2-594 SET SESSION AUTHORIZATION . . . 2-595 SET STATEMENT CACHE . . . 2-597 SET TABLE . . . 2-601 SET TRANSACTION . . . 2-602 SET Transaction Mode . . . 2-606 SET TRIGGERS . . . 2-608 START VIOLATIONS TABLE . . . 2-609 STOP VIOLATIONS TABLE . . . 2-622 TRUNCATE (IDS) . . . 2-624 TRUNCATE (XPS) . . . 2-628 UNLOAD . . . 2-630 UNLOCK TABLE . . . 2-635 UPDATE . . . 2-636 UPDATE STATISTICS . . . 2-649 WHENEVER . . . 2-659
Appendix A. Reserved Words for IBM Informix Dynamic Server . . . . . . . . . . . A-1 Appendix B. Reserved Words for IBM Informix Extended Parallel Server . . . . . . . B-1 Appendix C. Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1 Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-1 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Introduction
In This Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Types of Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Software Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Assumptions About Your Locale . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Demonstration Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x New Features in Dynamic Server, Version 10.0 . . . . . . . . . . . . . . . . . . . . . . . . xi 3 New Features in Dynamic Server, Version 10.00.xC3 . . . . . . . . . . . . . . . . . . . . . . xii 4 New Features in Dynamic Server, Version 10.00.xC4 . . . . . . . . . . . . . . . . . . . . . xiii New Features in Extended Parallel Server, Version 8.5 . . . . . . . . . . . . . . . . . . . . . xiii Documentation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Typographical Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Feature, Product, and Platform Markup . . . . . . . . . . . . . . . . . . . . . . . . . xiv Syntax Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv How to Read a Command-Line Syntax Diagram . . . . . . . . . . . . . . . . . . . . . xvi Keywords and Punctuation . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii Identifiers and Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii Example Code Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii Additional Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix IBM Informix Information Center . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Installation Guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Online Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Locating Online Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx Online Notes Filenames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Informix Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Online Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Printed Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii Online Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii IBM Informix Dynamic Server Version 10.0 and CSDK Version 2.90 Documentation Set . . . . . . . . . xxii Compliance with Industry Standards . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv IBM Welcomes Your Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
In This Introduction
This introduction provides an overview of the information in this manual and describes the documentation conventions that it uses.
access and manipulate the data in your databases. The IBM Informix Database Design and Implementation Guide shows how to use SQL to implement and manage relational databases.
Types of Users
This manual is written for the following users:
v Database users
v Database administrators
v Database-application programmers
This manual assumes that you have the following background:
v A working knowledge of your computer, your operating system, and the utilities that your operating system provides
v Some experience working with relational databases or exposure to database concepts
v Some experience with computer programming
If you have limited experience with relational databases, SQL, or your operating system, refer to the IBM Informix Getting Started Guide for your database server for a list of supplementary titles.
Software Dependencies
This manual assumes that you are using one of the following database servers: v IBM Informix Extended Parallel Server, Version 8.5 v IBM Informix Dynamic Server, Version 10.0
Demonstration Databases
The DB-Access utility, which is provided with your IBM Informix database server products, includes one or more of the following demonstration databases:
v The stores_demo database illustrates a relational schema with information about a fictitious wholesale sporting-goods distributor. Many examples in IBM Informix manuals are based on the stores_demo database.
v For Extended Parallel Server only, the sales_demo database illustrates a dimensional schema for data-warehousing applications. For conceptual information about dimensional data modeling, see the IBM Informix Database Design and Implementation Guide.
v For Dynamic Server only, the superstores_demo database illustrates an object-relational schema that contains examples of extended data types, data-type inheritance and table inheritance, and user-defined routines.
For information about how to create and populate the demonstration databases, see the IBM Informix DB-Access User's Guide. For descriptions of the databases and their contents, see the IBM Informix Guide to SQL: Reference. The scripts that you use to install the demonstration databases reside in the $INFORMIXDIR/bin directory on UNIX platforms and in the %INFORMIXDIR%\bin directory in Windows environments.
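For example, on a UNIX platform you would typically run the demonstration script from that directory. The script name and its options vary by product and version (dbaccessdemo is the name used for Dynamic Server in related documentation; confirm the exact name and options in the IBM Informix DB-Access User's Guide), so the following is only a sketch:
cd $INFORMIXDIR/bin
dbaccessdemo stores_demo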
SET ROLE DEFAULT
v The DBSA (Database Server Administrator) can use the GRANT ROLE EXTEND statement to assign the new built-in role EXTEND to users. When this security feature is enabled, only users who hold the EXTEND role can register or drop external UDRs or create shared libraries in the database.
v This release introduces default roles that the DBA can grant or revoke. These take effect when a user who holds a default role connects to the database, so that appropriate access privileges are available for using applications that do not include GRANT statements that reference individual users. New SQL functions can return the current role or the default role of the user.
v The CREATE INDEX and DROP INDEX statements can specify the ONLINE keyword to define or drop an index without applying table locks that might interfere with concurrent users.
v Tables and indexes now can use fragmentation strategies that define multiple named fragments within the same dbspace.
v Data values returned by distributed DML operations or UDRs in databases of the same Dynamic Server instance now can return the built-in opaque data types BLOB, BOOLEAN, CLOB, and LVARCHAR. They can also return UDTs, and DISTINCT types whose base types are built-in types, if the UDTs and DISTINCT types are explicitly cast to built-in data types, and if the casts, DISTINCT types, and UDTs are defined in all the participating databases.
v The DS_NONPDQ_QUERY_MEM configuration parameter can be set to improve the performance of certain non-PDQ queries.
v This release supports external optimizer directives that reside in the database and that are automatically applied to specified queries. These can be enabled or disabled for a given session by a configuration parameter.
v The SET ENVIRONMENT OPTCOMPIND statement can reset the value of OPTCOMPIND dynamically, overriding the corresponding configuration parameter or environment variable setting. This can improve performance during sessions that use queries for both DS (decision-support) and OLTP (online-transaction-processing) workloads.
v The SET ENCRYPTION PASSWORD statement can specify a session password to support column-level and cell-level encryption of sensitive data. This statement, and new built-in encryption and decryption functions, can support compliance with data security and data confidentiality requirements that are mandated for some industries and jurisdictions. (A brief example appears after this list.)
v User-defined functions can include multiple INOUT parameters.
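The following fragment sketches how a session password and the built-in encryption functions might be combined. The table, columns, and password shown here are hypothetical, and the fragment assumes the ENCRYPT_AES and DECRYPT_CHAR functions that Dynamic Server 10.0 provides:
SET ENCRYPTION PASSWORD 'one2three4five6';
INSERT INTO acct_table (acct_num, card_num)
   VALUES (1234, ENCRYPT_AES('6789 5551 2345 6789'));
SELECT DECRYPT_CHAR(card_num) FROM acct_table
   WHERE acct_num = 1234;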
If you also include the ORDER BY clause, qualifying rows are first arranged by the ORDER BY specification before SKIP offset and LIMIT max clauses are applied.
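For example, a query of the following general shape (the customer table of the stores_demo database is assumed here) first orders all qualifying rows by lname, then skips the first 10 of them and returns at most the next 5:
SELECT SKIP 10 LIMIT 5 fname, lname
   FROM customer
   ORDER BY lname;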
Documentation Conventions
This section describes the conventions that this manual uses. These conventions make it easier to gather information from this and other volumes in the documentation set. The following conventions are discussed:
v Typographical conventions
v Other conventions
v Syntax diagrams
v Command-line conventions
v Example code conventions
Typographical Conventions
This manual uses the following conventions to introduce new terms, illustrate screen displays, describe command syntax, and so forth.
KEYWORD - Keywords of SQL, SPL, and some other programming languages appear in uppercase letters in a serif font.
italics - Within text, new terms and emphasized words appear in italics. Within syntax and code examples, variable values that you are to specify appear in italics.
boldface - Names of program entities (such as classes, events, and tables), environment variables, file and pathnames, and interface elements (such as icons, menu items, and buttons) appear in boldface.
monospace - Information that the product displays and information that you enter appear in a monospace typeface.
KEYSTROKE - Keys that you are to press appear in uppercase letters in a sans serif font.
> - This symbol indicates a menu item. For example, "Choose Tools > Options" means choose the Options item from the Tools menu.
Tip: When you are instructed to enter characters or to execute a command, immediately press RETURN after the entry. When you are instructed to type the text or to press other keys, no RETURN is required.
Feature, Product, and Platform Markup
This markup identifies paragraphs that contain feature-specific, product-specific, or platform-specific information. Examples of this markup follow:
Dynamic Server
Identifies information that is specific to IBM Informix Dynamic Server
End of Dynamic Server
Windows Only
Identifies information that is specific to the Windows environment
End of Windows Only
This markup can apply to one or more paragraphs within a section. When an entire section applies to a particular product or platform, this is noted as part of the heading text, for example: Table Sorting (Linux)
Syntax Diagrams
This guide uses syntax diagrams built with the following components to describe the syntax for statements and all commands other than system-level commands. Syntax diagrams depicting SQL and command-line statements have changed in the following ways:
v The symbols at the beginning and end of statements are double arrows.
v The symbols at the beginning and end of syntax-segment diagrams are vertical lines.
v How many times a loop can be repeated is explained in a diagram footnote, whose marker appears above the path that it describes.
v Syntax statements that are longer than one line continue on the next line.
v Product- or condition-specific paths are explained in diagram footnotes, whose markers appear above the path that they describe.
v Cross-references to the descriptions of other syntax segments appear as diagram footnotes, whose markers immediately follow the name of the segment that they reference.
The following list describes the syntax diagram components and their meanings.
v >>--- : Statement begins.
v ---> : Statement continues on the next line.
v >--- : Statement continues from the previous line.
v ---->< : Statement ends.
v --SELECT-- (an item on the main path): Required item.
v Stacked items on the main path: Required item with choice. One and only one item must be present.
v Items below the main line: Optional items with choice are shown below the main line, one of which you might specify. If a value also appears above the line, the values below the main line are optional, and if you do not specify an item, the value above the line is used as the default.
v Repeating items: Optional items. Several items are allowed; a comma must precede each repetition.
How to Read a Command-Line Syntax Diagram
The example diagram in this section, Creating a No-Conversion Job, shows the command-line syntax of the onpladm utility.
[The Creating a No-Conversion Job diagram appears here; its elements are listed in the numbered steps below.]
The second line in this diagram has a segment named Setting the Run Mode, which, according to the diagram footnote, is on page Z-1. If this were an actual cross-reference, you would find this segment on the first page of Appendix Z. Instead, this segment is shown in the following segment diagram. Notice that the diagram uses segment start and end components. Setting the Run Mode:
[The Setting the Run Mode segment diagram appears here: -f, optionally followed by d, p, or a, and then optionally by l or u.]
To see how to construct a command correctly, start at the top left of the main diagram. Follow the diagram to the right, including the elements that you want. The elements in this diagram are case-sensitive because the diagram illustrates utility syntax. Other types of syntax, such as SQL, are not case-sensitive. The Creating a No-Conversion Job diagram illustrates the following steps:
1. Type onpladm create job and then the name of the job.
2. Optionally, type -p and then the name of the project.
3. Type the following required elements:
v -n
v -d and the name of the device
v -D and the name of the database
v -t and the name of the table
4. Optionally, you can choose one or more of the following elements and repeat them an arbitrary number of times:
v -S and the server name
v -T and the target server name
v The run mode. To set the run mode, follow the Setting the Run Mode segment diagram to type -f, optionally type d, p, or a, and then optionally type l or u.
5. Follow the diagram to the terminator. Your diagram is complete.
A command assembled from these steps is sketched after this list.
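Assembling those elements yields a command line of roughly the following shape. The job, project, device, database, table, and server names are placeholders, and the exact spelling of the run-mode flag should be confirmed against the High-Performance Loader documentation:
onpladm create job job1 -p proj1 -n -d /dev/rmt0 -D stores_demo -t customer -S my_server -fd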
Keywords and Punctuation
Keywords are words that are reserved for statements and all commands other than system-level commands; in syntax diagrams they are shown in uppercase letters. When you use a keyword in a command, you can write it in uppercase or lowercase letters, but you must spell the keyword exactly as it appears in the syntax diagram. You must also use any punctuation in your statements and commands exactly as shown in the syntax diagrams.
Identifiers and Names
Variables serve as placeholders within syntax diagrams and examples. When you write a SELECT statement of the form sketched below, you replace the variables column_name and table_name with the name of a specific column and table.
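The placeholder form and a concrete substitution look like the following; the customer table and its fname column are borrowed from the stores_demo database as an assumed example:
SELECT column_name FROM table_name
-- with specific identifiers substituted:
SELECT fname FROM customer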
Example Code Conventions
To use this SQL code for a specific product, you must apply the syntax rules for that product. For example, if you are using DB-Access, you must delimit multiple statements with semicolons. If you are using an SQL API, you must use EXEC SQL at the start of each statement and a semicolon (or other appropriate delimiter) at the end of the statement. Tip: Ellipsis points in a code example indicate that more code would be added in a full application, but it is not necessary to show it to describe the concept being discussed. For detailed directions on using SQL statements for a particular application development tool or SQL API, see the manual for your product.
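As a brief sketch of those two rules, the same query might be entered as follows; the host variable and the customer-number value in the ESQL/C line are assumed for illustration.
In DB-Access, delimit multiple statements with semicolons:
SELECT fname FROM customer;
SELECT lname FROM customer;
In an SQL API such as ESQL/C, begin each statement with EXEC SQL and end it with a semicolon (fname here is a previously declared host variable):
EXEC SQL select fname into :fname from customer
   where customer_num = 101;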
Additional Documentation
For additional information, refer to the following types of documentation: v Installation guides v Online notes v Informix error messages v Manuals v Online help
Installation Guides
Installation guides are located in the /doc directory of the product CD or in the /doc directory of the product's compressed file if you downloaded it from the IBM Web site. Alternatively, you can obtain installation guides from the IBM Informix Online Documentation site at http://www.ibm.com/software/data/informix/pubs/library/ or the IBM Informix Information Center at http://publib.boulder.ibm.com/infocenter/ids9help/index.jsp.
Online Notes
The following sections describe the online files that supplement the information in this manual. Please examine these files before you begin using your IBM Informix product. They contain vital information about application and performance issues.
TOC Notes (HTML) - The TOC (Table of Contents) notes file provides a comprehensive directory of hyperlinks to the release notes, the fixed and known defects file, and all the documentation notes files for individual manual titles.
Documentation Notes (HTML, text) - The documentation notes file for each manual contains important information and corrections that supplement the information in the manual or information that was modified since publication.
Release Notes (HTML, text) - The release notes file describes feature differences from earlier versions of IBM Informix products and how these differences might affect current products. For some products, this file also contains information about any known problems and their workarounds.
Machine Notes (text; non-Windows platforms only) - The machine notes file describes any platform-specific actions that you must take to configure and use IBM Informix products on your computer.
Fixed and Known Defects File (text; for example, ids_win_fixed_and_known_defects_version.txt) - This file lists issues that have been identified with the current version. It also lists customer-reported defects that have been fixed in both the current version and in previous versions.
Manuals
Online Manuals
A CD that contains your manuals in electronic format is provided with your IBM Informix products. You can install the documentation or access it directly from the CD. For information about how to install, read, and print online manuals, see the installation insert that accompanies your CD. You can also obtain the same online manuals from the IBM Informix Online Documentation site at http://www.ibm.com/software/data/informix/pubs/library/ or in the IBM Informix Information Center at http://publib.boulder.ibm.com/infocenter/ids9help/index.jsp.
Printed Manuals
To order hardcopy manuals, contact your sales representative or visit the IBM Publications Center Web site at http://www.ibm.com/software/howtobuy/data.html.
Online Help
IBM Informix online help, provided with each graphical user interface (GUI), displays information about those interfaces and the functions that they perform. Use the help facilities that each GUI provides to display the online help.
Accessibility
IBM is committed to making our documentation accessible to persons with disabilities. Our books are available in HTML format so that they can be accessed with assistive technology such as screen reader software. The syntax diagrams in our manuals are available in dotted decimal format, which is an accessible format that is available only if you are using a screen reader. For more information about the dotted decimal format, see the Accessibility appendix.
IBM Informix Dynamic Server Version 10.0 and CSDK Version 2.90 Documentation Set
The following tables list the manuals that are part of the IBM Informix Dynamic Server, Version 10.0 and the CSDK Version 2.90, documentation set. PDF and HTML versions of these manuals are available at http://www.ibm.com/software/data/informix/pubs/library/ or in the IBM Informix Information Center at http://publib.boulder.ibm.com/infocenter/ids9help/index.jsp. You can order hardcopy versions of these manuals from the IBM Publications Center at http://www.ibm.com/software/howtobuy/data.html.
Table 1. Database Server Manuals
Administrator's Guide - Understanding, configuring, and administering your database server.
Administrator's Reference - Reference material for Informix Dynamic Server, such as the syntax of database server utilities onmode and onstat, and descriptions of configuration parameters, the sysmaster tables, and logical-log records.
Backup and Restore Guide - The concepts and methods you need to understand when you use the ON-Bar and ontape utilities to back up and restore data.
Built-In DataBlade Modules User's Guide - Using the following DataBlade modules that are included with Dynamic Server:
v MQ DataBlade module, to allow IBM Informix database applications to communicate with other MQSeries applications.
v Large Object Locator, a foundation DataBlade module that can be used by other modules that create or store large-object data.
DB-Access User's Guide - Using the DB-Access utility to access, modify, and retrieve data from Informix databases.
DataBlade API Function Reference - The DataBlade API functions and the subset of ESQL/C functions that the DataBlade API supports. You can use the DataBlade API to develop client LIBMI applications and C user-defined routines that access data in Informix databases.
DataBlade API Programmer's Guide - The DataBlade API, which is the C-language application-programming interface provided with Dynamic Server. You use the DataBlade API to develop client and server applications that access data stored in Informix databases.
Table 1. Database Server Manuals (continued)
Database Design and Implementation Guide - Designing, implementing, and managing your Informix databases.
Enterprise Replication Guide - How to design, implement, and manage an Enterprise Replication system to replicate data between multiple database servers.
Error Messages file - Causes and solutions for numbered error messages you might receive when you work with IBM Informix products.
Getting Started Guide - Describes the products bundled with IBM Informix Dynamic Server and interoperability with other IBM products. Summarizes important features of Dynamic Server and the new features for each version.
Guide to SQL: Reference - Information about Informix databases, data types, system catalog tables, environment variables, and the stores_demo demonstration database.
Guide to SQL: Syntax - Detailed descriptions of the syntax for all Informix SQL and SPL statements.
Guide to SQL: Tutorial - A tutorial on SQL, as implemented by Informix products, that describes the basic ideas and terms that are used when you work with a relational database.
High-Performance Loader User's Guide - Accessing and using the High-Performance Loader (HPL), to load and unload large quantities of data to and from Informix databases.
Installation Guide for Microsoft Windows - Instructions for installing IBM Informix Dynamic Server on Windows.
Installation Guide for UNIX and Linux - Instructions for installing IBM Informix Dynamic Server on UNIX and Linux.
J/Foundation Developer's Guide - Writing user-defined routines (UDRs) in the Java programming language for Informix Dynamic Server with J/Foundation.
Migration Guide - Conversion to and reversion from the latest versions of Informix database servers. Migration between different Informix database servers.
Optical Subsystem Guide - The Optical Subsystem, a utility that supports the storage of BYTE and TEXT data on optical disk.
Performance Guide - Configuring and operating IBM Informix Dynamic Server to achieve optimum performance.
R-Tree Index User's Guide - Creating R-tree indexes on appropriate data types, creating new operator classes that use the R-tree access method, and managing databases that use the R-tree secondary access method.
SNMP Subagent Guide - The IBM Informix subagent that allows a Simple Network Management Protocol (SNMP) network manager to monitor the status of Informix servers.
Storage Manager Administrator's Guide - Informix Storage Manager (ISM), which manages storage devices and media for your Informix database server.
Trusted Facility Guide - The secure-auditing capabilities of Dynamic Server, including the creation and maintenance of audit logs.
User-Defined Routines and Data Types Developer's Guide - How to define new data types and enable user-defined routines (UDRs) to extend IBM Informix Dynamic Server.
Virtual-Index Interface Programmer's Guide - Creating a secondary access method (index) with the Virtual-Index Interface (VII) to extend the built-in indexing schemes of IBM Informix Dynamic Server. Typically used with a DataBlade module.
Virtual-Table Interface Programmer's Guide - Creating a primary access method with the Virtual-Table Interface (VTI) so that users have a single SQL interface to Informix tables and to data that does not conform to the storage scheme of Informix Dynamic Server.
Table 2. Client/Connectivity Manuals
Client Products Installation Guide - Installing IBM Informix Client Software Developer's Kit (Client SDK) and IBM Informix Connect on computers that use UNIX, Linux, and Windows.
Embedded SQLJ User's Guide - Using IBM Informix Embedded SQLJ to embed SQL statements in Java programs.
ESQL/C Programmer's Manual - The IBM Informix implementation of embedded SQL for C.
GLS User's Guide - The Global Language Support (GLS) feature, which allows IBM Informix APIs and database servers to handle different languages, cultural conventions, and code sets.
JDBC Driver Programmer's Guide - Installing and using Informix JDBC Driver to connect to an Informix database from within a Java application or applet.
.NET Provider Reference Guide - Using Informix .NET Provider to enable .NET client applications to access and manipulate data in Informix databases.
ODBC Driver Programmer's Manual - Using the Informix ODBC Driver API to access an Informix database and interact with the Informix database server.
OLE DB Provider Programmer's Guide - Installing and configuring Informix OLE DB Provider to enable client applications, such as ActiveX Data Object (ADO) applications and Web pages, to access data on an Informix server.
Object Interface for C++ Programmer's Guide - The architecture of the C++ object interface and a complete class reference.
Table 3. DataBlade Developer's Kit Manuals
DataBlade Developer's Kit User's Guide - Developing and packaging DataBlade modules using BladeSmith and BladePack.
DataBlade Module Development Overview - Basic orientation for developing DataBlade modules. Includes an example illustrating the development of a DataBlade module.
DataBlade Module Installation and Registration Guide - Installing DataBlade modules and using BladeManager to manage DataBlade modules in Informix databases.
[email protected] This email address is reserved for reporting errors and omissions in our documentation. For immediate help with a technical problem, contact IBM Technical Support. For instructions, see the IBM Informix Technical Support website at http://www306.ibm.com/software/data/informix/support/contact.html. We appreciate your suggestions.
In This Chapter
This chapter provides information about how to use the SQL statements, SPL statements, and syntax segments that subsequent chapters of this book discuss. The chapter is organized into the following sections.
How to Enter SQL Statements - How to use syntax diagrams and descriptions to enter SQL statements correctly
How to Enter SQL Comments on page 1-3 - How to enter comments in SQL statements
Categories of SQL Statements on page 1-6 - The SQL statements, listed by functional category
ANSI/ISO Compliance and Extensions on page 1-8 - The SQL statements, listed by degree of ANSI/ISO compliance
was not set to 1 when the database server was initialized, the database server stores the owner name in uppercase letters. Statement descriptions are provided in this manual to help you to enter SQL statements successfully. A statement description includes this information:
v A brief introduction that explains what the statement does
v A syntax diagram that shows how to enter the statement correctly
v A syntax table that explains each input parameter in the syntax diagram
v Rules of usage, typically with examples that illustrate these rules
For some statements, this information is provided for individual clauses. Most statement descriptions conclude with references to related information in this manual and in other manuals. Chapter 2 provides descriptions of each SQL statement, arranged in alphabetical order. Chapter 3 describes each of the SPL statements, using the same format. The major aids for entering SQL statements include:
v The combination of the syntax diagram and syntax table
v The examples of syntax that appear in the rules of usage
v The references to related information
The diagrams generally provide an intuitive notation for what is valid in a given SQL statement, but for some statements, dependencies or restrictions among syntax elements are identified only in the text of the Usage section.
Using Examples
To understand the main syntax diagram and subdiagrams for a statement, study the examples of syntax that appear in the rules of usage for each statement. These examples have two purposes: v To show how to accomplish specific tasks with the statement or its clauses v To show how to use syntax of the statement or its clauses in a concrete way Tip: An efficient way to understand a syntax diagram is to find an example of the syntax and compare it with the keywords and parameters in the syntax diagram. By mapping the concrete elements of the example to the abstract elements of the syntax diagram, you can understand the syntax diagram and use it more effectively. For an explanation of the conventions used in the examples in this manual, see Example Code Conventions on page xviii of the Introduction. These code examples are program fragments to illustrate valid syntax, rather than complete SQL programs. In some code examples, ellipsis ( . . . ) symbols indicate that additional code has been omitted. To save space, however, ellipses are not shown at the beginning or end of the program fragments.
The following comment indicators can be entered in your SQL code. Whether a specific indicator is valid depends on the product that you are using (ESQL/C, SPL routines, or DB-Access) and on whether the database is ANSI-compliant.
Double hyphen ( -- ) - The double hyphen precedes a comment within a single line. To comment more than one line, put double hyphen symbols at the beginning of each comment line.
Braces ( {...} ) - Braces enclose the comment. The { precedes the comment, and the } follows it. Braces can delimit single- or multiple-line comments, but comments cannot be nested.
C-style slash and asterisk ( /* */ ) - C-language style slash and asterisk paired delimiters enclose the comment. The /* precedes the comment, and the */ follows it. These can delimit single-line or multiple-line comments, but comments cannot be nested.
Characters within the comment are ignored by the database server. The section Optimizer Directives on page 5-34 describes a context where information within comments can influence query plans of Dynamic Server. If the product that you use supports all of these comment symbols, your choice of a comment symbol depends on requirements for ANSI/ISO compliance:
v Double hyphen ( -- ) complies with the ANSI/ISO standard for SQL.
v Braces ( { } ) are an Informix extension to the ANSI/ISO standard.
v C-style slash-and-asterisk ( /* . . . */ ) comments comply with the SQL-99 standard.
If ANSI/ISO compliance is not an issue, your choice of comment symbols is a matter of personal preference. In DB-Access, you can use any of these comment symbols when you enter SQL statements with the SQL editor and when you create SQL command files with the SQL editor or a system editor. C-style ( /* . . . */ ) comments, however, are not supported by DB-Access within CONNECT, DISCONNECT, INFO, LOAD, OUTPUT, SET CONNECTION, or UNLOAD statements.
An SQL command file is an operating-system file that contains one or more SQL statements. Command files are also known as command scripts. For more information about command files, see the discussion of command scripts in the IBM Informix Guide to SQL: Tutorial. For information on how to create and modify command files with the SQL editor or a system editor in DB-Access, see the IBM Informix DB-Access User's Guide. You can use either comment symbol in any line of an SPL routine. See the discussion of how to comment and document an SPL routine in the IBM Informix Guide to SQL: Tutorial. In ESQL/C, the double hyphen ( -- ) can begin a comment that extends to the end of the same line. For information on language-specific comment symbols in ESQL/C programs, see the IBM Informix ESQL/C Programmer's Manual.
The following example uses the same SQL statement and the same comment as in the preceding example, but places the comment on a separate line:
SELECT * FROM customer
-- Selects all columns and rows
In the following example, the user enters the same SQL statement as in the preceding example but now enters a multiple-line comment:
SELECT * FROM customer
-- Selects all columns and rows
-- from the customer table
The next example uses the same SQL statement and the same comment as in the preceding example, but the comment appears on a line by itself:
SELECT * FROM customer
{Selects all columns and rows}
In the following example, the same SQL statement as in the preceding example is followed by a multiple-line comment:
SELECT * FROM customer
{Selects all columns and rows
from the customer table}
CREATE XADATASOURCE TYPE DROP ACCESS_METHOD DROP AGGREGATE DROP CAST DROP DATABASE DROP DUPLICATE DROP FUNCTION DROP INDEX DROP OPCLASS DROP PROCEDURE DROP ROLE DROP ROUTINE DROP ROW TYPE DROP SEQUENCE
DROP SYNONYM DROP TABLE DROP TRIGGER DROP TYPE DROP VIEW DROP XADATASOURCE DROP XADATASOURCE TYPE MOVE RENAME COLUMN RENAME DATABASE RENAME INDEX RENAME SEQUENCE RENAME TABLE TRUNCATE
Optimization Statements
SAVE EXTERNAL DIRECTIVES SET ALL_MUTABLES SET Default Table Space SET Default Table Type SET ENVIRONMENT SET EXPLAIN SET OPTIMIZATION SET PDQPRIORITY SET Residency SET SCHEDULE LEVEL SET STATEMENT CACHE UPDATE STATISTICS
Auxiliary Statements
GET DIAGNOSTICS INFO OUTPUT SET COLLATION SET DATASKIP SET ENCRYPTION PASSWORD WHENEVER
ANSI/ISO-Compliant Statements
CLOSE COMMIT WORK EXECUTE IMMEDIATE ROLLBACK WORK SET SESSION AUTHORIZATION SET TRANSACTION
CREATE TABLE CREATE Temporary TABLE CREATE VIEW DEALLOCATE DESCRIPTOR DECLARE DELETE DESCRIBE DESCRIBE INPUT DISCONNECT EXECUTE FETCH GET DESCRIPTOR GET DIAGNOSTICS
GRANT INSERT OPEN PREPARE REVOKE SELECT SET CONNECTION SET CONSTRAINTS SET DESCRIPTOR SET Transaction Mode UPDATE WHENEVER
RELEASE RENAME COLUMN RENAME DATABASE RENAME INDEX RENAME SEQUENCE RENAME TABLE RESERVE REVOKE FRAGMENT SAVE EXTERNAL DIRECTIVES SET ALL_MUTABLES SET AUTOFREE SET COLLATION SET Database Object Mode SET DATASKIP SET DEBUG FILE TO SET Default Table Space SET Default Table Type SET DEFERRED_PREPARE SET ENCRYPTION PASSWORD SET ENVIRONMENT SET EXPLAIN SET ISOLATION SET LOCK MODE SET LOG SET MOUNTING TIMEOUT SET OPTIMIZATION SET PDQPRIORITY SET PLOAD FILE SET Residency SET ROLE SET SCHEDULE LEVEL SET STATEMENT CACHE START VIOLATIONS TABLE STOP VIOLATIONS TABLE TRUNCATE UNLOAD UNLOCK TABLE UPDATE STATISTICS
DROP AGGREGATE . . . 2-295 DROP CAST . . . 2-296 DROP DATABASE . . . 2-297 DROP DUPLICATE . . . 2-299 DROP FUNCTION . . . 2-300 DROP INDEX . . . 2-302 DROP OPCLASS . . . 2-304 DROP PROCEDURE . . . 2-305 DROP ROLE . . . 2-307 DROP ROUTINE . . . 2-308 DROP ROW TYPE . . . 2-310 DROP SEQUENCE . . . 2-312 DROP SYNONYM . . . 2-313 DROP TABLE . . . 2-314 DROP TRIGGER . . . 2-316 DROP TYPE . . . 2-317 DROP VIEW . . . 2-318 DROP XADATASOURCE . . . 2-319 DROP XADATASOURCE TYPE . . . 2-320 EXECUTE . . . 2-321 EXECUTE FUNCTION . . . 2-329 EXECUTE IMMEDIATE . . . 2-334 EXECUTE PROCEDURE . . . 2-336 FETCH . . . 2-344 FLUSH . . . 2-353 FREE . . . 2-355 GET DESCRIPTOR . . . 2-357 GET DIAGNOSTICS . . . 2-362 GRANT . . . 2-371 GRANT FRAGMENT . . . 2-388 INFO . . . 2-393 INSERT . . . 2-395 LOAD . . . 2-407 LOCK TABLE . . . 2-413 MERGE . . . 2-416 MOVE TABLE . . . 2-419 OPEN . . . 2-424 OUTPUT . . . 2-431 PREPARE . . . 2-433 PUT . . . 2-442 RENAME COLUMN . . . 2-449 RENAME DATABASE . . . 2-451 RENAME INDEX . . . 2-452 RENAME SEQUENCE . . . 2-453 RENAME TABLE . . . 2-454 REVOKE . . . 2-456 REVOKE FRAGMENT . . . 2-471 ROLLBACK WORK . . . 2-474 SAVE EXTERNAL DIRECTIVES . . . 2-476 SELECT . . . 2-479 SET ALL_MUTABLES . . . 2-527 SET AUTOFREE . . . 2-529 SET COLLATION . . . 2-531 SET CONNECTION . . . 2-534 SET CONSTRAINTS . . . 2-538 SET Database Object Mode . . . 2-539 SET DATASKIP . . . 2-545 SET DEBUG FILE . . . 2-547 SET Default Table Space . . . 2-549 SET Default Table Type . . . 2-550 SET DEFERRED_PREPARE . . . 2-552
SET DESCRIPTOR . . . 2-554 SET ENCRYPTION PASSWORD . . . 2-560 SET ENVIRONMENT . . . 2-563 SET EXPLAIN . . . 2-568 SET INDEX . . . 2-573 SET INDEXES . . . 2-574 SET ISOLATION . . . 2-575 SET LOCK MODE . . . 2-580 SET LOG . . . 2-582 SET OPTIMIZATION . . . 2-584 SET PDQPRIORITY . . . 2-586 SET PLOAD FILE . . . 2-589 SET Residency . . . 2-590 SET ROLE . . . 2-592 SET SCHEDULE LEVEL . . . 2-594 SET SESSION AUTHORIZATION . . . 2-595 SET STATEMENT CACHE . . . 2-597 SET TABLE . . . 2-601 SET TRANSACTION . . . 2-602 SET Transaction Mode . . . 2-606 SET TRIGGERS . . . 2-608 START VIOLATIONS TABLE . . . 2-609 STOP VIOLATIONS TABLE . . . 2-622 TRUNCATE (IDS) . . . 2-624 TRUNCATE (XPS) . . . 2-628 UNLOAD . . . 2-630 UNLOCK TABLE . . . 2-635 UPDATE . . . 2-636 UPDATE STATISTICS . . . 2-649 WHENEVER . . . 2-659
In This Chapter
This chapter describes the syntax and semantics of SQL statements that are recognized by Dynamic Server or by Extended Parallel Server. Statements (and statement segments, and notes describing usage) that are not explicitly restricted to one of these database servers are valid for both. The statement descriptions appear in alphabetical order. For some statements, important details of the semantics appear in other volumes of this documentation set, as indicated by cross-references.
For many statements, the syntax diagram, or the table of terms immediately following the diagram, or both, includes references to syntax segments in Chapter 4, Data Types and Expressions, on page 4-1 or in Chapter 5, Other Syntax Segments, on page 5-1.
When the name of an SQL statement includes lowercase characters, such as CREATE Temporary TABLE, the first mixed-lettercase string in the statement name is not an SQL keyword; it indicates that two or more different SQL keywords can follow the preceding uppercase keyword. For an explanation of the structure of statement descriptions, see Chapter 1, Overview of SQL Syntax, on page 1-1.
ALLOCATE COLLECTION
Use the ALLOCATE COLLECTION statement to allocate memory for a variable of a collection data type (such as LIST, MULTISET, or SET) or an untyped collection variable. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL. Use this statement with ESQL/C.
Syntax
ALLOCATE COLLECTION variable
Element: variable — the collection variable for which memory is allocated; it must be declared as a typed or untyped client collection variable in the ESQL/C program.
Usage
The ALLOCATE COLLECTION statement allocates memory for an ESQL/C variable that can store the value of a collection data type. To create a collection variable for an ESQL/C program:
1. Declare the collection variable as a client collection variable in an ESQL/C program. The collection variable can be a typed or untyped collection variable.
2. Allocate memory for the collection variable with the ALLOCATE COLLECTION statement.
The ALLOCATE COLLECTION statement sets SQLCODE (that is, sqlca.sqlcode) to zero (0) if the memory allocation was successful and to a negative error code if the allocation failed. You must explicitly release memory with the DEALLOCATE COLLECTION statement. After you free the collection variable with the DEALLOCATE COLLECTION statement, you can reuse the collection variable.
Tip: The ALLOCATE COLLECTION statement allocates memory for an ESQL/C collection variable only. To allocate memory for an ESQL/C row variable, use the ALLOCATE ROW statement.
Examples
The following example shows how to allocate resources with the ALLOCATE COLLECTION statement for the untyped collection variable, a_set:
EXEC SQL BEGIN DECLARE SECTION; client collection a_set; EXEC SQL END DECLARE SECTION; . . . EXEC SQL allocate collection :a_set;
The following example uses ALLOCATE COLLECTION to allocate resources for a typed collection variable, a_typed_set:
EXEC SQL BEGIN DECLARE SECTION; client collection set(integer not null) a_typed_set; EXEC SQL END DECLARE SECTION; . . . EXEC SQL allocate collection :a_typed_set;
Related Information
Related examples: Refer to the collection-variable example in PUT. Related statements: ALLOCATE ROW and DEALLOCATE COLLECTION For a discussion of collection data types in ESQL/C programs, see the IBM Informix ESQL/C Programmer's Manual.
ALLOCATE DESCRIPTOR
Use the ALLOCATE DESCRIPTOR statement to allocate memory for a system-descriptor area (SDA). Use this statement with ESQL/C.
Syntax
ALLOCATE DESCRIPTOR descriptor descriptor_var WITH MAX items items_var
The following elements appear in the syntax:
v descriptor: Name of the system-descriptor area. Enclose it in single quotes; the name must be unique among SDA names. See Quoted String on page 4-142.
v descriptor_var: Host variable that stores the name of a system-descriptor area. It must contain the name of an unallocated system-descriptor area; the syntax is language specific.
v items: Number of item descriptors in the descriptor. It must be an unsigned INTEGER greater than zero; the default value is 100. See Literal Number on page 4-137.
v items_var: Host variable that contains the number of items. Its data type must be INTEGER or SMALLINT; the syntax is language specific.
Usage
The ALLOCATE DESCRIPTOR statement creates a system-descriptor area, which is a location in memory that holds information that the DESCRIBE statement can display, or that holds information about the WHERE clause of a query. A system-descriptor area (SDA) contains one or more fields called item descriptors. Each item descriptor holds a data value that the database server can receive or send. The item descriptors also contain information about the data, such as data type, length, scale, precision, and nullability. A system-descriptor area holds information that a DESCRIBE...USING SQL DESCRIPTOR statement obtains or that holds information about the WHERE clause of a dynamically executed statement. If the name that you assign to a system-descriptor area matches the name of an existing system-descriptor area, the database server returns an error. If you free the descriptor with the DEALLOCATE DESCRIPTOR statement, however, you can reuse the same descriptor name.
Example
The following examples allocate a system-descriptor area, first by using host variables for the descriptor name and size, and then by using literal values:
EXEC SQL allocate descriptor :descname with max :occ; EXEC SQL allocate descriptor desc1 with max 3;
Related Information
Related statements: DEALLOCATE DESCRIPTOR, DECLARE, DESCRIBE, EXECUTE, FETCH, GET DESCRIPTOR, OPEN, PREPARE, PUT, and SET DESCRIPTOR For more information on system-descriptor areas, refer to the IBM Informix ESQL/C Programmer's Manual.
ALLOCATE ROW
Use the ALLOCATE ROW statement to allocate memory for a row variable. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL. Use this statement with ESQL/C.
Syntax
ALLOCATE ROW variable
Element: variable — the row variable for which memory is allocated; it must be declared as a typed or untyped ESQL/C row variable.
Usage
The ALLOCATE ROW statement allocates memory for a host variable that stores row-type data. To create a row variable, an ESQL/C program must do the following:
1. Declare the row variable. The row variable can be a typed or untyped row variable.
2. Allocate memory for the row variable with the ALLOCATE ROW statement.
The following example shows how to allocate resources with the ALLOCATE ROW statement for the typed row variable, a_row:
EXEC SQL BEGIN DECLARE SECTION; row (a int, b int) a_row; EXEC SQL END DECLARE SECTION; . . . EXEC SQL allocate row :a_row;
The ALLOCATE ROW statement sets SQLCODE (the contents of sqlca.sqlcode) to zero (0) if the memory allocation operation was successful, or to a negative error code if the allocation failed. You must explicitly release memory with the DEALLOCATE ROW statement. Once you free the row variable with the DEALLOCATE ROW statement, you can reuse the row variable. Tip: The ALLOCATE ROW statement allocates memory for an ESQL/C row variable only. To allocate memory for an ESQL/C collection variable, use the ALLOCATE COLLECTION statement. When you use the same row variable in multiple function calls without deallocating it, a memory leak on the client computer results. Because there is no way to determine if a pointer is valid when it is passed, ESQL/C assumes that the pointer is not valid and assigns it to a new memory location.
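Because a row variable can be reused only after it is freed, the following sketch shows the free-and-reuse pattern that the preceding paragraph describes. It continues the a_row example above; DEALLOCATE ROW releases the memory, and the second ALLOCATE ROW prepares the variable for reuse:

EXEC SQL deallocate row :a_row;
EXEC SQL allocate row :a_row;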
Related Information
Related statements: ALLOCATE COLLECTION and DEALLOCATE ROW For a discussion of complex data types in ESQL/C programs, see the IBM Informix ESQL/C Programmer's Manual.
ALTER ACCESS_METHOD
The ALTER ACCESS_METHOD statement changes the attributes of a user-defined access method in the sysams system catalog table. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
ALTER ACCESS_METHOD access_method
v access_method: Name of the access method to alter.
v purpose_keyword: A keyword that indicates which feature to change.
Usage
Use ALTER ACCESS_METHOD to modify the definition of a user-defined access method. You must be the owner of the access method or have DBA privileges to alter an access method. When you alter an access method, you change the purpose-option specifications (purpose functions, purpose methods, purpose flags, or purpose values) that define the access method. For example, you might alter an access method to assign a new user-defined function or method name or to provide a multiplier for the scan cost on a table. If a transaction is in progress, the database server waits to alter the access method until after the transaction is committed or rolled back. No other users can execute the access method until the transaction has completed.
Example
The following statement alters the remote user-defined access method:
ALTER ACCESS_METHOD remote ADD am_scancost = FS_scancost, ADD am_rowids, DROP am_getbyid, MODIFY am_costfactor = 0.9;
The preceding example makes the following changes to the access method:
v Adds a user-defined function or method named FS_scancost( ), which is associated in the sysams table with the am_scancost keyword
v Sets (adds) the am_rowids flag
v Drops the user-defined function or method associated with the am_getbyid keyword
v Modifies the am_costfactor value
Related Information
Related statements: CREATE ACCESS_METHOD and DROP ACCESS_METHOD For information about how to set purpose-option specifications, see Purpose Options on page 5-46. For the schema of the sysams system catalog table, see the IBM Informix Guide to SQL: Reference. For more information on primary-access methods, see the IBM Informix Virtual-Table Interface Programmer's Guide. For more information on secondary-access methods, see the IBM Informix Virtual-Index Interface Programmer's Guide and the IBM Informix User-Defined Routines and Data Types Developer's Guide. For a discussion of privileges, see the GRANT statement in this chapter, or the IBM Informix Database Design and Implementation Guide.
ALTER FRAGMENT
Use the ALTER FRAGMENT statement to change the distribution strategy or the storage location of an existing table or index. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
ALTER FRAGMENT ON
(1) TABLE surviving_table ATTACH Clause (2) DETACH Clause (3) INIT Clause (4) ADD Clause (5) DROP Clause MODIFY Clause (5) INDEX surviving_index INIT Clause (4) ADD Clause (6) DROP Clause (7) MODIFY Clause (3)
(6) (7)
Notes:
1 See ATTACH Clause on page 2-13
2 See DETACH Clause on page 2-19
3 See INIT Clause on page 2-20
4 See ADD Clause on page 2-26
5 Dynamic Server only
6 See DROP Clause (IDS) on page 2-27
7 See MODIFY Clause (IDS) on page 2-28
The following elements appear in the syntax:
v surviving_index: Index on which to modify the distribution or storage. Must exist when the statement executes. Syntax: Database Object Name on page 5-17.
v surviving_table: Table on which to modify the distribution or storage. Must exist; see Restrictions on the ALTER FRAGMENT Statement on page 2-12. Syntax: Database Object Name on page 5-17.
Usage
The ALTER FRAGMENT statement applies only to table fragments or index fragments that are located at the current site (or the current cluster, for Extended Parallel Server). No remote information is accessed or updated. You must have the Alter or the DBA privilege to change the fragmentation strategy of a table. You must have the Index or the DBA privilege to alter the fragmentation strategy of an index.
Attention: This statement can cause indexes to be dropped and rebuilt. Before undertaking alter operations, check corresponding sections in your IBM Informix Performance Guide to review effects and strategies.
Clauses of the ALTER FRAGMENT statement support the following tasks:
v ATTACH: Combines two or more tables that have the same schema into a single fragmented table.
v DETACH: Detaches a table fragment (or in Extended Parallel Server, a slice) from a fragmentation strategy, and places it in a new table.
v INIT: Provides the following options: defines and initializes a fragmentation strategy on a table; creates a fragmentation strategy for tables; changes the order of evaluation of fragment expressions; alters the fragmentation strategy of a table or index; changes the storage location of an existing table.
v ADD: Adds an additional fragment to an existing fragmentation list.
v DROP: Drops an existing fragment from a fragmentation list.
v MODIFY: Changes an existing fragmentation expression.
Use the CREATE TABLE statement or the INIT clause of the ALTER FRAGMENT statement to create fragmented tables. After a dbspace has been renamed successfully by the onspaces utility, only the new name can reference the renamed dbspace. Existing fragmentation strategies for tables or indexes are automatically updated, however, by the database server to replace the old dbspace name with the new name. You do not need to take any additional action to update a distribution strategy or storage location that was defined using the old dbspace name, but you must use the new name if you reference the dbspace in an ALTER FRAGMENT or ALTER TABLE statement.
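For example (the table, column, and dbspace names here are illustrative), if a dbspace that stores a fragment of the orders table has been renamed from dbsp_old to dbsp_new with the onspaces utility, a later statement that touches that fragment must use the new name, as this sketch of the MODIFY clause shows:

ALTER FRAGMENT ON TABLE orders MODIFY dbsp_new TO order_num < 1000 IN dbsp_new;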
In Dynamic Server, you cannot use ALTER FRAGMENT on a typed table that is part of a table hierarchy.
ATTACH Clause
Use the ATTACH clause of the ALTER FRAGMENT ON TABLE statement to combine tables that have identical structures into a fragmentation strategy. ATTACH Clause:
, (1) ATTACH surviving_table consumed_table AS (2) PARTITION new_frag (1) expr AFTER old_frag BEFORE (2) REMAINDER (3)
Notes:
1 Use path no more than once
2 Dynamic Server only; required if another surviving_table fragment has the same name as the dbspace
3 Required for fragmentation by expression; optional for round-robin fragmentation
The following elements appear in the syntax:
v consumed_table: Table that loses its identity to be merged with surviving_table. Must exist and cannot include serial columns nor unique, referential, or primary-key constraints. See also General Restrictions for the ATTACH Clause on page 2-14. Syntax: Database Object Name on page 5-17.
v expr: Expression defining which rows are stored in a fragment of a fragmented-by-expression table. Can include only columns from the current table and only data values from a single row. See also General Restrictions for the ATTACH Clause on page 2-14. Syntax: Condition on page 4-5; Expression on page 4-34.
v new_frag: Name declared here for the consumed_table fragment; the default is the dbspace name. Must be unique among the names of partitions and of dbspaces that store fragments of surviving_table. Syntax: Identifier on page 5-22.
v old_frag: Partition or dbspace where a surviving_table fragment exists. Must exist; see also Altering Hybrid-Fragmented Tables (XPS) on page 2-16. Syntax: Identifier on page 5-22.
v surviving_table: Table on which to modify the distribution or storage location. Must exist and cannot have any constraints. See also Restrictions on the ALTER FRAGMENT Statement on page 2-12. Syntax: Database Object Name on page 5-17.
To use this clause, you must have the DBA privilege or else be the owner of the specified tables. The ATTACH clause supports the following tasks: v Creates a single fragmented table by combining two or more identically-structured, nonfragmented tables (See Combining Nonfragmented Tables to Create a Fragmented Table on page 2-15.) v Attaches one or more tables to a fragmented table (See Attaching a Table to a Fragmented Table on page 2-15.)
The database server returns an error in either of the following cases:
v If the consumed tables contain data that belongs in some existing fragment of the surviving table
v If existing data in the surviving table would belong in a new fragment
Thus, you cannot use the ATTACH clause for data movement among fragments. To perform this task, see the INIT Clause on page 2-20.
In Dynamic Server, when you attach one or more tables to a fragmented table, a consumed_table must be nonfragmented. In Extended Parallel Server, when you attach one or more tables to a fragmented table, a consumed_table can be nonfragmented or have hash fragmentation. If you
specify a consumed_table that has hash fragmentation, the hash column specification must match that of the surviving_table and of any other consumed_table involved in the attach operation.
What Happens to Triggers and Views?: When you attach tables, triggers on the surviving table survive the ATTACH, but triggers on the consumed table are automatically dropped. No triggers are activated by the ATTACH clause, but subsequent data-manipulation operations on the new rows can activate triggers. Views on the surviving table survive the ATTACH operation, but views on the consumed table are automatically dropped.
What Happens with the Distribution Scheme?: You can attach a nonfragmented table to a table with any type of supported distribution scheme. In general, the resulting table has the same fragmentation strategy as the prior fragmentation strategy of the surviving_table. When you attach two or more nonfragmented tables, however, the distribution scheme can be either expression-based or round-robin. For Dynamic Server, only the following distribution schemes can result from combining the distribution schemes of the tables in the ATTACH clause.
Prior scheme of surviving table    Prior scheme of consumed table    Resulting scheme
None                               None                              Round-robin or expression
Round-robin                        None                              Round-robin
Expression                         None                              Expression
For Extended Parallel Server, this table shows the distribution schemes that can result from different distribution schemes of the tables in the ATTACH clause.
Prior scheme of surviving table    Prior scheme of consumed table    Resulting scheme
None                               None                              Round-robin or expression
None                               Hash                              Hybrid
Round-robin                        None                              Round-robin
Expression                         None                              Expression
Hash                               None                              Hybrid
Hash                               Hash                              Hybrid
Hybrid                             None                              Hybrid
Hybrid                             Hash                              Hybrid
In Extended Parallel Server, when you attach a nonfragmented table to a table that has hash fragmentation, the resulting table has hybrid fragmentation. You can attach a table with a hash distribution scheme to a table that currently has no fragmentation, or hash fragmentation, or hybrid fragmentation. In any of these situations, the resulting table has a hybrid distribution scheme. The following examples illustrate the use of the ATTACH clause to create fragmented tables with different distribution schemes. Round-Robin Distribution Scheme: The following example combines nonfragmented tables pen_types and pen_makers into a single, fragmented table, pen_types. Table pen_types resides in dbspace dbsp1, and table pen_makers resides in dbspace dbsp2. Table structures are identical in each table.
ALTER FRAGMENT ON TABLE pen_types ATTACH pen_types, pen_makers
After you execute the ATTACH clause, the database server fragments the table pen_types using a round-robin scheme into two dbspaces: the dbspace that contained pen_types and the dbspace that contained pen_makers. Table pen_makers is consumed and no longer exists; all rows that were in table pen_makers are now in table pen_types. Expression Distribution Scheme: Consider the following example that combines tables cur_acct and new_acct and uses an expression-based distribution scheme. Table cur_acct was originally created as a fragmented table and has fragments in dbspaces dbsp1 and dbsp2. The first statement of the example shows that table cur_acct was created with an expression-based distribution scheme. The second statement of the example creates table new_acct in dbsp3 without a fragmentation strategy. The third statement combines the tables cur_acct and new_acct. Table structures (columns) are identical in each table.
CREATE TABLE cur_acct (a int) FRAGMENT BY EXPRESSION a < 5 in dbsp1, a >= 5 and a < 10 in dbsp2; CREATE TABLE new_acct (a int) IN dbsp3; ALTER FRAGMENT ON TABLE cur_acct ATTACH new_acct AS a>=10;
When you examine the sysfragments system catalog table after you alter the fragment, you see that table cur_acct is fragmented by expression into three dbspaces. For additional information about the sysfragments system catalog table, see the IBM Informix Guide to SQL: Reference. In addition to simple range rules, you can use the ATTACH clause to fragment by expression with hash or arbitrary rules. For a discussion of all types of expressions that you can use in an expression-based distribution scheme, see page Fragmenting by EXPRESSION on page 2-192. Warning: When you specify a date value as the default value for a parameter, make sure to specify 4 digits instead of 2 digits for the year. If you specify a 2-digit year, the setting of the DBCENTURY environment variable can affect how the database server interprets the date value. For more information, see the IBM Informix Guide to SQL: Reference. Hybrid Fragmentation Distribution Scheme (XPS): Consider a case where monthly sales data is added to the sales_info table defined below. Due to the large amount of data, the table is distributed evenly across multiple coservers with a system-defined hash function. To manage monthly additions of data to the table, it is also fragmented by a date expression. The combined hybrid fragmentation is declared in the following CREATE TABLE statement:
CREATE TABLE sales_info (order_num INT, sale_date DATE, ...)
  FRAGMENT BY HYBRID (order_num) EXPRESSION
    sale_date >= "01/01/1996" AND sale_date < "02/01/1996" IN sales_slice_9601,
    sale_date >= "02/01/1996" AND sale_date < "03/01/1996" IN sales_slice_9602,
    . . .
    sale_date >= "12/01/1996" AND sale_date < "01/01/1997" IN sales_slice_9612;
Values for a new month are originally loaded from an external source. The data values are distributed evenly across the same coservers on which the sales_info table is defined, using a system-defined hash function on the same column:
CREATE TABLE jan_97 (order_num INT, sale_date DATE, ...)
  FRAGMENT BY HASH (order_num) IN sales_slice_9701;
INSERT INTO jan_97 SELECT (...) FROM ... ;
After data values are loaded, you can attach the new table to sales_info. You can issue the following ALTER FRAGMENT statement to attach the new table:
ALTER FRAGMENT ON TABLE sales_info ATTACH jan_97 AS sale_date >= "01/01/1997" AND sale_date < "02/01/1997"
DETACH Clause
Use the DETACH clause of the ALTER FRAGMENT ON TABLE statement to detach a table fragment from a distribution scheme and place the contents into a new nonfragmented table. This clause is not valid in ALTER FRAGMENT ON INDEX statements. In Extended Parallel Server, the new table can also be a table with hash fragmentation. For an explanation of distribution schemes, see FRAGMENT BY Clause on page 2-190. DETACH Clause:
DETACH (1) PARTITION fragment new_table
The following elements appear in the syntax:
v fragment: Partition (IDS only), dbspace, or dbslice (XPS only) that contains the table fragment to be detached. With hybrid fragmentation, the dbslice identifies a set of dbspaces. See Altering Hybrid-Fragmented Tables (XPS) on page 2-16.
v new_table: Nonfragmented table that results after you execute the ALTER FRAGMENT statement. (In XPS, this can also be a hash-fragmented table.) Must not exist before execution.
The new table that results from executing the DETACH clause does not inherit any indexes nor constraints from the original table. Only data values remain. Similarly, the new table does not inherit any privileges from the original table. Instead, the new table has the default privileges of any new table. For further information on default table-level privileges, see the GRANT statement on Table-Level Privileges on page 2-374. The DETACH clause cannot be applied to a table if that table is the parent of a referential constraint or if a rowid column is defined on the table. In Dynamic Server, if you omit the PARTITION keyword, the name of the fragment must be the name of the dbspace where the fragment is stored. In Extended Parallel Server, you cannot use the DETACH clause if the table has a dependent generalized key (GK) index defined on it.
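A minimal statement that matches the description below, using the DETACH syntax shown above (the cur_acct table and the dbsp2 dbspace come from the earlier examples; the accounts table name is illustrative):

ALTER FRAGMENT ON TABLE cur_acct DETACH dbsp2 accounts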
This example detaches dbsp2 from the distribution scheme for cur_acct and places the rows in a new table, accounts. Table accounts now has the same structure (column names, number of columns, data types, and so on) as table cur_acct, but the table accounts does not contain any indexes or constraints from the table cur_acct. Both tables are now nonfragmented. The following example detaches a fragment from a table that contains three fragments:
ALTER FRAGMENT ON TABLE bus_acct DETACH dbsp3 cli_acct
This statement detaches dbsp3 from the distribution scheme for bus_acct and places the rows in a new table, cli_acct. Table cli_acct now has the same structure (column names, number of columns, data types, and so on) as bus_acct, but the table cli_acct does not contain any indexes or constraints from the table bus_acct. Table cli_acct is a nonfragmented table, and table bus_acct remains a fragmented table.
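The next description refers to a statement like the following sketch, which uses the DETACH syntax with the dbslice names from the sales_info example above (the jan_96 table name is illustrative):

ALTER FRAGMENT ON TABLE sales_info DETACH sales_slice_9601 jan_96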
In this example, data from January 1996 is detached from the sales_info table and placed in a new table called jan_96.
INIT Clause
The INIT clause of the ALTER FRAGMENT statement can define or modify the fragmentation strategy or the storage location of an existing table or (for Dynamic Server) an existing index. The INIT clause has the following syntax. INIT Clause:
(2) INIT (1) WITH ROWIDS (1) (1) FRAGMENT BY Clause for Tables dbspace (3) PARTITION fragment dbslice (4) FRAGMENT BY Clause for Indexes IN
Notes:
1 Dynamic Server only
2 See page 2-22
3 Extended Parallel Server only
4 See FRAGMENT BY Clause for Indexes (IDS) on page 2-24
The following elements appear in the syntax:
v dbslice: Dbslice that stores the fragmented data. Must exist at time of execution. Syntax: Identifier on page 5-22.
v dbspace: Dbspace that stores the fragmented data. Must exist at time of execution. Syntax: Identifier on page 5-22.
v fragment: Partition within the dbspace. No more than 2,048 for the same table. Syntax: Identifier on page 5-22.
The INIT clause can accomplish tasks that include the following:
v Move a nonfragmented table from one partition (IDS only), dbslice (XPS only), or dbspace to another partition, dbslice, or dbspace.
v Convert a fragmented table to a nonfragmented table (a sketch appears below).
v Fragment an existing nonfragmented table without redefining it.
v Convert a fragmentation strategy to another fragmentation strategy.
v Fragment an existing index that is not fragmented without redefining the index (IDS only).
v Convert a fragmented index to a nonfragmented index (IDS only).
v Add a new rowid column to the table definition (IDS only).
With Extended Parallel Server, you cannot use the INIT clause to change the fragmentation strategy of a table that has a GK index. When you use the INIT clause to modify a table, the tabid value in the system catalog tables changes for the affected table. The constrid value of all unique and referential constraints on the table also changes. For more information about the storage spaces in which you can store a table, see Using the IN Clause on page 2-189.
Attention: When you execute the ALTER FRAGMENT statement with this clause, it results in data movement if the table contains any data. If data values move, the potential exists for significant logging, for the transaction being aborted as a long transaction, and for a relatively long exclusive lock being held on the affected tables. Use this statement when it does not interfere with day-to-day operations.
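As a minimal sketch of converting a fragmented table to a nonfragmented table stored in a single dbspace (the table and dbspace names are illustrative; the IN dbspace form comes from the INIT syntax above):

ALTER FRAGMENT ON TABLE cur_acct INIT IN dbsp1;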
When you use the WITH ROWIDS option to add a new rowid column to a fragmented table, the database server assigns a unique rowid number to each row and creates an index that it can use to find the physical location of the row. Performance using this access method is comparable to using a SERIAL or SERIAL8 column. The rowid value of a row cannot be updated, but remains stable during the existence of the row. Each row requires an additional four bytes to store the rowid column after you specify the WITH ROWIDS option. Attention: It is recommended that users creating new applications move toward using primary keys as a method of row identification instead of using rowid values.
You must use the IN dbspace clause to place the table in a dbspace explicitly. When you use the INIT clause to change a fragmented table to a nonfragmented table, all attached indexes become nonfragmented indexes. In addition, constraints that do not use existing user-defined indexes (detached indexes) become nonfragmented indexes. All newly nonfragmented indexes exist in the same dbspace as the new nonfragmented table. Using the INIT clause to change a fragmented table to a nonfragmented table has no effect on the fragmentation strategy of detached indexes, nor of constraints that use detached indexes. Use the FRAGMENT BY portion of the INIT clause to fragment an existing non-fragmented table, or to convert one fragmentation strategy to another. FRAGMENT BY Clause for Tables:
, FRAGMENT BY (1) PARTITION BY , (1) IN dbspace EXPRESSION ( (2) HASH ( , (2) HYBRID ( column ) Expression List column ) IN (dbspace, dbslice dbspace ) PARTITION part Fragment List ) , ROUND ROBIN IN dbspace (2) dbslice
Fragment List:
, ( expr ) (3) REMAINDER IN dbspace
Expression List:
, EXPRESSION expr (3) REMAINDER ( dbspace dbspace ) IN dbslice ,
Notes:
1 Dynamic Server only
2 Extended Parallel Server only
3 If included, must be the last item in the fragment list
The following elements appear in the syntax:
v column: Column to which the strategy applies. Must exist in the table. Syntax: Identifier on page 5-22.
v dbslice: Dbslice that contains the table fragment. Must be defined. Syntax: Identifier on page 5-22.
v dbspace: Dbspace that contains the table fragment. Must specify at least 2 but no more than 2,048 dbspaces. Syntax: Identifier on page 5-22.
v expr: Expression that defines a table fragment. Must evaluate to a Boolean value (t or f). Syntax: Expression on page 4-34.
v part: Name declared here for a partition of the dbspace that contains the table fragment. Must be unique among names of fragments of the table in that dbspace. Syntax: Identifier on page 5-22.
In the HYBRID option of Extended Parallel Server, column identifies a column on which to apply the hash portion of the hybrid table fragmentation strategy. The expression can reference columns only from the current table and data values only from a single row. Subqueries, aggregates, and the built-in CURRENT, DATE, and TODAY functions are not valid in the expression. For more information on the available fragmentation strategies for tables, see the FRAGMENT BY Clause on page 2-190.
The following example shows the statement that originally defined the fragmentation strategy on the table account and then shows an ALTER FRAGMENT statement that redefines the fragmentation strategy:
CREATE TABLE account (col1 INT, col2 INT) FRAGMENT BY ROUND ROBIN IN dbsp1, dbsp2; ALTER FRAGMENT ON TABLE account INIT FRAGMENT BY EXPRESSION col1 < 0 IN dbsp1, col2 >= 0 IN dbsp2;
If an existing dbspace is full when you redefine a fragmentation strategy, you must not use it in the new fragmentation strategy.
In Dynamic Server, when you use the INIT clause to fragment an existing nonfragmented table, all indexes on the table become fragmented in the same way as the table. In Extended Parallel Server, when the INIT clause fragments an existing nonfragmented table, any indexes retain their existing fragmentation strategy.
The following elements appear in the FRAGMENT BY clause for indexes (IDS):
v dbspace: Dbspace that contains the fragmented information. Must specify at least two, but no more than 2,048, dbspaces. Syntax: Identifier on page 5-22.
v expr: Expression defining an index fragment. Must return a Boolean value. Syntax: Condition on page 4-5; Expression on page 4-34.
v part: Name that you declare here for a partition of the specified dbspace. Required for any partition in the same dbspace as another partition of the same index. Syntax: Identifier on page 5-22.
The keywords FRAGMENT BY and PARTITION BY are synonyms in this context. You can convert an existing fragmentation strategy to another expression-based fragmentation strategy. Dynamic Server discards the existing fragmentation strategy and moves the data records to fragments that you define in the new fragmentation strategy. (To convert an existing fragmented index to a nonfragmented index, you can use the INIT clause to specify IN dbspace or else PARTITION partition IN dbspace as the only storage specification for a previously fragmented index.) The expression can contain only columns from the current table and data values from only a single row. No subqueries nor aggregates are allowed. The built-in CURRENT, DATE, and TODAY functions are not valid here.
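As a minimal sketch of that conversion (the index and dbspace names are illustrative; the cust_indx index also appears in a later example):

ALTER FRAGMENT ON INDEX cust_indx INIT IN dbsp1;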
ADD Clause
Use the ADD clause to add another fragment to an existing fragment list for a table or (for Dynamic Server only) for an index. ADD Clause:
ADD REMAINDER IN (1) PARTITION part expression IN dbspace (1) PARTITION part BEFORE AFTER fragment (1) PARTITION part IN dbspace
The following elements appear in the syntax:
v fragment: Name of an existing fragment in the fragmentation list. Must exist at the time when you execute the statement. Syntax: Identifier on page 5-22.
v expression: Expression that defines the new fragment that is to be added. Must return a Boolean value (t or f). Syntax: Condition on page 4-5; Expression on page 4-34.
v dbspace: Name of the dbspace to be added to the fragmentation scheme. Must exist at the time when you execute the statement. Syntax: Identifier on page 5-22.
v part: Name that you declare here for the fragment; the default name is the name of the dbspace. Required for any partition in the same dbspace as another partition of the same index. Syntax: Identifier on page 5-22.
The expression can contain column names only from the current table and data values only from a single row. No subqueries or aggregates are allowed. In addition, the built-in CURRENT, DATE, and TODAY functions are not valid here.
To add a new fragment that is stored in a partition within a dbspace, you can use the ADD clause, as in the following example:
ALTER FRAGMENT ON TABLE book ADD PARTITION chapter3 IN dbsp1;
The new distribution uses dbsp1, dbsp4, and chapter3 as storage locations for a 3-part round-robin fragmentation scheme. The records in the fragment chapter3 are stored in a separate partition of dbsp1 from the records in the first fragment, which are stored in a partition whose default name is dbsp1. (The database server issues an error if you attempt to declare the same name for different fragments of the same fragmented table or index.)
To add another fragment in a new partition of dbspace dbsp2 to hold rows for c1 values between 200 and 299, use the following ALTER FRAGMENT statement:
ALTER FRAGMENT ON TABLE news ADD PARTITION century3 (c1 >= 200 AND c1 < 300) IN dbsp2;
Any rows that were formerly in the remainder fragment but that fit the criteria (c1 >= 200 AND c1 < 300) move to the new century3 partition in dbspace dbsp2.
The following element appears in the DROP clause syntax:
v fragment: Name of a partition of the dbspace that stores the dropped fragment. Must exist when you execute the statement. Syntax: Identifier on page 5-22.
If the table is fragmented by expression, you cannot drop a fragment containing data that cannot be moved to another fragment. (If the distribution scheme has a REMAINDER option, or if the expressions overlap, you can drop a fragment that contains data.) You cannot drop a fragment if the table has only two fragments. When you want to make a fragmented table nonfragmented, use either the INIT clause or the DETACH clause of the ALTER FRAGMENT statement. If the fragment was not given a name when it was created or added then the name of the dbspace is also the name of the fragment. When you drop a fragment, the underlying partition or dbspace is not affected. Only the fragment data values within that partition or dbspace are affected. When you drop a fragment, the database server attempts to move all records in the dropped fragment to another fragment. In this case, the destination fragment might not have enough space for the additional records. If this happens, follow one of the procedures that ALTER FRAGMENT and Transaction Logging on page 2-13 describes to increase your available space, and retry the procedure. The following examples show how to drop a fragment from a fragmentation list. The first line shows how to drop an index fragment, and the second line shows how to drop a table fragment.
ALTER FRAGMENT ON INDEX cust_indx DROP dbsp2; ALTER FRAGMENT ON TABLE customer DROP dbsp1;
The following elements appear in the MODIFY clause syntax:
v dbspace: Dbspace that stores the modified information. Must exist at time of execution. Syntax: Identifier on page 5-22.
v expression: The modified expression. Can specify columns in the current table only and data from only a single row. Syntax: Condition on page 4-5; Expression on page 4-34.
v new: Name that you declare here for a partition of dbspace. Must be unique in the fragmentation list among names of partitions of that dbspace. Syntax: Identifier on page 5-22.
v old: Name of an existing fragment. Must exist at time of execution. Syntax: Identifier on page 5-22.
Here dbspace and old (or old and new) can be identical, if you are not changing the storage location. You must declare a new partition (using the PARTITION keyword) if more than one fragment of the same table or index is named the same as the dbspace. The expression must evaluate to a Boolean value (true or false). No subqueries or aggregates are allowed in the expression. In addition, the built-in CURRENT, DATE, and TODAY functions are not valid. When you use the MODIFY clause, the underlying dbspaces are not affected. Only the fragment data values within the partitions or dbspaces are affected. You cannot change a REMAINDER fragment into a nonremainder fragment if records within the REMAINDER fragment do not satisfy the new expression. When you use the MODIFY clause to change an expression without changing the storage location for the expression, you must use the same name for the old and the new fragment (or else the same name for old and for dbspace, if the dbspace consists of only a single partition, as in the following example):
ALTER FRAGMENT ON TABLE cust_acct MODIFY dbsp1 TO acct_num < 65 IN dbsp1
When you use the MODIFY clause to move an expression from one dbspace to another, old is the name of the dbspace where the expression was previously located, and dbspace is the new location for the expression:
ALTER FRAGMENT ON TABLE cust_acct MODIFY dbsp1 TO acct_num < 35 IN dbsp2
Here the distribution scheme for the cust_acct table is modified so that all row items in column acct_num that are less than 35 are now contained in the dbspace dbsp2. These items were formerly contained in the dbspace dbsp1. When you use the MODIFY clause to move an expression from one partition of a dbspace to another partition, old is the name of the partition where information fragmented by the expression was previously located, and new is the name of the partition that is the new location for the expression, as in the following example:
ALTER FRAGMENT ON TABLE cust_acct MODIFY PARTITION part1 TO PARTITION part2 (acct_num < 35) IN dbsp2
Here the distribution scheme for the cust_acct table is modified so that all row items in column acct_num that have a value less than 35 are now contained in the partition part2 of dbspace dbsp2. These items were formerly contained in the partition part1. To use the MODIFY clause both to change the expression and to move its corresponding fragment to a new storage location, you must change the expression and you must also specify the name of a different dbspace or partition. If the indexes on a table are attached indexes, and you modify the table, the index fragmentation strategy is also modified.
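As a minimal sketch of changing the expression and the storage location in a single MODIFY clause (continuing the cust_acct example; the new expression and the dbspace dbsp3 are illustrative):

ALTER FRAGMENT ON TABLE cust_acct MODIFY dbsp1 TO acct_num < 50 IN dbsp3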
Related Information
Related statements: CREATE TABLE, CREATE INDEX, and ALTER TABLE For a discussion of fragmentation strategy, refer to the IBM Informix Database Design and Implementation Guide. For information on how to maximize performance when you make fragment modifications, see your IBM Informix Performance Guide.
ALTER FUNCTION
Use the ALTER FUNCTION statement to change the routine modifiers or pathname of a user-defined function. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
, ALTER FUNCTION function ( parameter_type (1) Specific Name )
SPECIFIC FUNCTION ,
(2) WITH( ADD Routine Modifier MODIFY DROP (3) MODIFY EXTERNAL NAME = )
Notes:
1 See Specific Name on page 5-68
2 See Routine Modifier on page 5-54
3 External routines only
4 See Shared-Object Filename on page 5-65
The following elements appear in the syntax:
v function: Must be registered in the database. If the name does not uniquely identify a function, you must enter one or more appropriate values for parameter_type. Syntax: Database Object Name on page 5-17.
v parameter_type: Must be the same data types (and specified in the same order) as in the definition of function. Syntax: Identifier on page 5-22.
Usage
The ALTER FUNCTION statement can modify a user-defined function to tune its performance by modifying characteristics that control how the function executes. You can also add or replace related user-defined routines (UDRs) that provide alternatives for the query optimizer, which can improve performance. All modifications take effect on the next invocation of the function. Only the UDR owner or the DBA can use the ALTER FUNCTION statement.
WITH
If the routine modifier is a BOOLEAN value, MODIFY sets the value to be t (equivalent of using the keyword ADD to add the routine modifier). For example, both of the following statements alter the func1 function so that it can be executed in parallel in the context of a parallelizable data query:
ALTER FUNCTION func1 WITH (MODIFY PARALLELIZABLE)
ALTER FUNCTION func1 WITH (ADD PARALLELIZABLE)
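A sketch of the MODIFY EXTERNAL NAME option for an external function (the shared-object path is hypothetical):

ALTER FUNCTION func1 WITH (MODIFY EXTERNAL NAME = '/usr/local/lib/myfuncs.so')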
Related Information
Related Statements: ALTER PROCEDURE, ALTER ROUTINE, CREATE FUNCTION, and CREATE PROCEDURE For a discussion of how to create and use SPL routines, see the IBM Informix Guide to SQL: Tutorial. For a discussion of how to create and use external routines, see IBM Informix User-Defined Routines and Data Types Developer's Guide and the IBM Informix DataBlade API Programmer's Guide.
ALTER INDEX
Use the ALTER INDEX statement to change the clustering attribute or the locking mode of an existing index. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
(1) ALTER INDEX index (2) LOCK MODE NORMAL COARSE TO NOT CLUSTER
The following element appears in the syntax:
v index: Name of the index to be altered.
Usage
ALTER INDEX is valid only for indexes that the CREATE INDEX statement created explicitly. ALTER INDEX cannot modify an index on a temporary table, nor an index that the database server created implicitly to support a constraint. You cannot change the collating order of an existing index. If you use ALTER INDEX to modify an index after the SET COLLATION statement of Dynamic Server has specified a non-default collating order, the SET COLLATION statement has no effect on the index.
TO CLUSTER Option
The TO CLUSTER option causes the database server to reorder the rows of the physical table according to the indexed order. The next example shows how you can use the ALTER INDEX TO CLUSTER statement to order the rows in the orders table physically. The CREATE INDEX statement creates an index on the customer_num column of the table. Then the ALTER INDEX statement causes the physical ordering of the rows.
CREATE INDEX ix_cust ON orders (customer_num); ALTER INDEX ix_cust TO CLUSTER;
For an ascending index, TO CLUSTER puts rows in lowest-to-highest order. For a descending index, the rows are reordered in highest-to-lowest order. When you reorder, the entire file is rewritten. This process can take a long time, and it requires sufficient disk space to maintain two copies of the table. While a table is clustering, it is locked IN EXCLUSIVE MODE. When another process is using the table to which the index name belongs, the database server
cannot execute the ALTER INDEX statement with the TO CLUSTER option; it returns an error unless lock mode is set to WAIT. (When lock mode is set to WAIT, the database server retries the ALTER INDEX statement.) Over time, if you modify the table, you can expect the benefit of an earlier cluster to disappear because rows are added in space-available order, not sequentially. You can recluster the table to regain performance by issuing another ALTER INDEX TO CLUSTER statement on the clustered index. You do not need to drop a clustered index before you issue another ALTER INDEX TO CLUSTER statement on a currently clustered index. Extended Parallel Server cannot use the CLUSTER option on STANDARD tables.
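A sketch that is consistent with the following description (the index names ix_cust and ix_ord are illustrative; CREATE CLUSTER INDEX clusters the table when the first index is built, and the last two statements move the clustering to the second index):

CREATE CLUSTER INDEX ix_cust ON orders (customer_num);
CREATE INDEX ix_ord ON orders (order_num);
ALTER INDEX ix_cust TO NOT CLUSTER;
ALTER INDEX ix_ord TO CLUSTER;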
The first two statements create indexes for the orders table and cluster the physical table in ascending order on the customer_num column. The last two statements recluster the physical table in ascending order on the order_num column.
Related Information
Related statements: CREATE INDEX and CREATE TABLE For a discussion of the performance implications of clustered indexes, see your IBM Informix Performance Guide.
ALTER PROCEDURE
Use the ALTER PROCEDURE statement to change the routine modifiers or pathname of a previously defined external procedure. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
, ALTER PROCEDURE procedure ( parameter_type (1) Specific Name )
SPECIFIC PROCEDURE ,
(2) WITH( ADD Routine Modifier MODIFY DROP (3) MODIFY EXTERNAL NAME = )
Notes:
1 See Specific Name on page 5-68
2 See Routine Modifier on page 5-54
3 External routines only
4 See Shared-Object Filename on page 5-65
The following elements appear in the syntax:
v procedure: Must be registered in the database. If the name does not uniquely identify a procedure, you must enter one or more appropriate values for parameter_type. Syntax: Database Object Name on page 5-17.
v parameter_type: Must be the same data types (and specified in the same order) as in the definition of procedure. Syntax: Identifier on page 5-22.
Usage
The ALTER PROCEDURE statement enables you to modify an external procedure to tune its performance by modifying characteristics that control how it executes. You can also add or replace related UDRs that provide alternatives for the optimizer, which can improve performance. All modifications take effect on the next invocation of the procedure. Only the UDR owner or the DBA can use the ALTER PROCEDURE statement. If the procedure name is not unique among routines registered in the database, you must enter one or more appropriate values for parameter_type.
The following keywords introduce what you want to modify in procedure:
v ADD: Add a new routine modifier to the UDR.
v MODIFY: Change an attribute of a routine modifier.
v DROP: Delete a routine modifier from the UDR.
v MODIFY EXTERNAL NAME (for external procedures only): Replace the file specification of the executable file. When the IFX_EXTEND_ROLE configuration parameter is set to ON, this option is valid only for users to whom the DBSA has granted the EXTEND role. With IFX_EXTEND_ROLE set to OFF (or not set), the UDR owner or the DBA can use this option.
v WITH: Introduces all modifications.
If the routine modifier is a BOOLEAN value, MODIFY sets the value to be T (equivalent to using the keyword ADD to add the routine modifier). For example, both of the following statements alter the proc1 procedure so that it can be executed in parallel in the context of a parallelizable data query:
ALTER PROCEDURE proc1 WITH (MODIFY PARALLELIZABLE)
ALTER PROCEDURE proc1 WITH (ADD PARALLELIZABLE)
Related Information
Related statements: ALTER ROUTINE, CREATE PROCEDURE, DROP PROCEDURE, and DROP ROUTINE. For a discussion of SPL routines, see the IBM Informix Guide to SQL: Tutorial. For a discussion of how to create and use external routines, see IBM Informix User-Defined Routines and Data Types Developer's Guide. For information about how to create C UDRs, see the IBM Informix DataBlade API Programmer's Guide.
ALTER ROUTINE
Use the ALTER ROUTINE statement to change the routine modifiers or pathname of a previously defined user-defined routine (UDR). Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
, ALTER ROUTINE routine ( parameter_type (1) Specific Name )
SPECIFIC ROUTINE ,
(2) WITH( ADD Routine Modifier MODIFY DROP (3) MODIFY EXTERNAL NAME = )
Notes:
1 See Specific Name on page 5-68
2 See Routine Modifier on page 5-54
3 External routines only
4 See Shared-Object Filename on page 5-65
The following elements appear in the syntax:
v routine: Must be registered in the database. If the name does not uniquely identify a routine, you must enter one or more appropriate values for parameter_type. Syntax: Database Object Name on page 5-17.
v parameter_type: Must be the same data types (and specified in the same order) as in the definition of routine. Syntax: Identifier on page 5-22.
Usage
The ALTER ROUTINE statement allows you to modify a previously defined UDR to tune its performance by modifying characteristics that control how the UDR executes. You can also add or replace related UDRs that provide alternatives for the optimizer, which can improve performance. This statement is useful when you do not know whether a UDR is a user-defined function or a user-defined procedure. When you use this statement, the database server alters the appropriate user-defined procedure or user-defined function. All modifications take effect on the next invocation of the UDR. Only the UDR owner or the DBA can use the ALTER ROUTINE statement.
Restrictions
If the name does not uniquely identify a UDR, you must enter one or more appropriate values for parameter_type. When you use this statement, the type of UDR cannot be ambiguous. The UDR that you specify must refer to either a user-defined function or a user-defined procedure. If either of the following conditions exist, the database server returns an error: v The name (and parameters) that you specify applies to both a user-defined procedure and a user-defined function. v The specific name that you specify applies to both a user-defined function and a user-defined procedure.
WITH
If the routine modifier is a BOOLEAN value, MODIFY sets the value to be T (equivalent to using the keyword ADD to add the routine modifier). For example, both of the following statements alter the func1 UDR so that it can be executed in parallel in the context of a parallelizable data query statement:
ALTER ROUTINE func1 WITH (MODIFY PARALLELIZABLE)
ALTER ROUTINE func1 WITH (ADD PARALLELIZABLE)
Because the name func1 is not unique to the database, the data type parameters are specified so that the routine signature is unique. If this function had a Specific Name, for example, raise_sal, specified when it was created, you could identify the function with the following first line:
ALTER SPECIFIC ROUTINE raise_sal
Related Information
Related Statements: ALTER FUNCTION, ALTER PROCEDURE, CREATE FUNCTION, CREATE PROCEDURE, DROP FUNCTION, DROP PROCEDURE, and DROP ROUTINE. For a discussion of how to create and use SPL routines, see the IBM Informix Guide to SQL: Tutorial. For a discussion of how to create and use external routines, see IBM Informix User-Defined Routines and Data Types Developer's Guide. For information about how to create C UDRs, see the IBM Informix DataBlade API Programmer's Guide.
ALTER SEQUENCE
Use the ALTER SEQUENCE statement to modify the definition of a sequence object. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
ALTER SEQUENCE owner. sequence
NOCYCLE CYCLE CACHE size NOCACHE ORDER NOORDER BY INCREMENT step WITH RESTART NOMAXVALUE MAXVALUE max NOMINVALUE MINVALUE min restart
The following elements appear in the syntax:
v max: New upper limit on values.
v min: New lower limit on values.
v owner: Owner of the sequence.
v restart: New first value in the sequence.
v sequence: Name of the existing sequence.
v size: New number of values to preallocate in memory.
v step: New interval between successive values.
Usage
ALTER SEQUENCE redefines an existing sequence object. It only affects subsequently generated values (and any unused values in the sequence cache). You cannot use the ALTER SEQUENCE statement to rename a sequence nor to change the owner of a sequence. You must be the owner, or the DBA, or else have the Alter privilege on the sequence to modify its definition. Only elements of the sequence definition that you specify explicitly in the ALTER SEQUENCE statement are modified. An error occurs if you make contradictory changes, such as specifying both MAXVALUE and NOMAXVALUE, or both the CYCLE and NOCYCLE options.
INCREMENT BY Option
Use the INCREMENT BY option to specify a new interval between successive numbers in a sequence. The interval, or step value, can be a positive whole number (for ascending sequences) or a negative whole number (for descending sequences) in the INT8 range. The BY keyword is optional.
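For example (the sequence name is illustrative), the following statement changes the interval so that every tenth value is generated:

ALTER SEQUENCE order_seq INCREMENT BY 10;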
CYCLE Option
Use the CYCLE option to specify that the sequence continues to generate values after it reaches its upper or lower limit, overriding the default NOCYCLE attribute. After an ascending sequence reaches max, it generates the min value for the next value. After a descending sequence reaches min, it generates the max value for the next sequence value. Use the NOCYCLE option to prevent the sequence from generating more values after reaching the declared limit. Once the sequence reaches the limit, the next reference to sequence.NEXTVAL returns an error message.
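A sketch that combines both options (the sequence name and limit are illustrative):

ALTER SEQUENCE order_seq MAXVALUE 99999 NOCYCLE;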
Related Information
Related statements: CREATE SEQUENCE, DROP SEQUENCE, RENAME SEQUENCE, CREATE SYNONYM, DROP SYNONYM, GRANT, REVOKE, INSERT, UPDATE, and SELECT For information about the syssequences system catalog table in which sequence objects are registered, see the IBM Informix Guide to SQL: Reference. For information about initializing, generating, or reading values from a sequence, see NEXTVAL and CURRVAL Operators (IDS) on page 4-61.
ALTER TABLE
Use the ALTER TABLE statement to modify the definition of an existing table.
Syntax
(1) ALTER TABLE table synonym Basic Table Options (2) Logging TYPE Options
The following elements appear in the syntax:
v synonym: Synonym for the table to be altered.
v table: Name of the table to be altered.
Usage
In Dynamic Server, the database server performs the actions in the ALTER TABLE statement in the order that you specify. If any action fails, the entire operation is cancelled. Altering a table on which a view depends might invalidate the view.
Warning: The clauses available with this statement have varying performance implications. Before you undertake alter operations, check corresponding sections in your IBM Informix Performance Guide to review effects and strategies.
You cannot alter a temporary table. You also cannot alter a violations or diagnostics table. In addition, you cannot add, drop, or modify a column if the table that contains the column has a violation table or a diagnostics table associated with it. If the USETABLENAME environment variable is set, you cannot specify a synonym for the table in the ALTER TABLE statement.
In Extended Parallel Server, if a table has range fragmentation, only the Logging TYPE options and LOCK MODE clause are valid. All other ALTER TABLE options return an error. For a RAW or (in Extended Parallel Server) a STATIC table, the Logging TYPE options are the only part of the ALTER TABLE statement that you can use.
To use ALTER TABLE, you must meet one of the following conditions:
v You must have DBA privilege on the database containing the table.
v You must own the table (and for Extended Parallel Server, you must also have the Resource privilege).
v You must have the Alter privilege on the specified table and the Resource privilege on the database where the table resides.
v To add a referential constraint, you must have the DBA or References privilege on either the referenced columns or the referenced table.
v To drop a constraint, you must have the DBA privilege or be the owner of the constraint. If you are the owner of the constraint but not the owner of the table, you must have Alter privilege on the specified table. You do not need the References privilege to drop a constraint.
The Basic Table Options consist of a set of clauses that the sections that follow describe in detail: the ADD column clause (page 2-45), the DROP column clause (page 2-52), the MODIFY column clause (page 2-53), the PUT clause (page 2-58, Dynamic Server only), the ADD CONSTRAINT clause (page 2-59), the DROP CONSTRAINT clause (page 2-61), the MODIFY NEXT SIZE clause (page 2-61), the LOCK MODE clause (page 2-62), and the ADD TYPE clause (page 2-64).
You can use the Basic Table Options segment to modify the schema of a table by adding, modifying, or dropping columns and constraints, or changing the extent
size or locking granularity of a table. The database server performs alterations in the order that you specify. If any of the actions fails, the entire operation is cancelled. With Dynamic Server, you can also associate a table with a named ROW type or specify a new storage space to store large-object data. You can also add or drop rowid columns and shadow columns for Enterprise Replication. You cannot, however, specify these options in conjunction with any other alterations.
ADD Clause
Use the ADD Column clause to add a column to a table. ADD Column Clause:
ADD ( New Column [ , New Column ] ... )
New Column:
new_column Data Type [ DEFAULT Clause ] [ BEFORE column ]

Element      Description
column       Name of the column before which new_column is to be placed
new_column   Name of the column that you are adding
The following restrictions apply to the ADD clause:
v You cannot add a serial column to a table that contains data.
v In Extended Parallel Server, you cannot add a column to a table that has a bit-mapped index.
v You cannot add columns beyond the maximum row size of 32,767 bytes.
If you do not include the BEFORE option, the database server adds the new column or list of columns to the end of the table definition by default.
DEFAULT Clause
Use the DEFAULT clause to specify value that the database server should insert in a column when an explicit value for the column is not specified.
DEFAULT Clause:
DEFAULT { literal | NULL | USER | CURRENT [ DATETIME Field Qualifier ] | TODAY | SITENAME | DBSERVERNAME }

Element   Description
literal   Literal default value for the column
You cannot specify a default value for serial columns. If the table that you are altering already has rows in it when you add a column that contains a default value, the database server inserts the default value for all pre-existing rows. The following example adds a column to the items table. In items, the new column item_weight has a literal default value:
ALTER TABLE items ADD item_weight DECIMAL (6, 2) DEFAULT 2.00 BEFORE total_price
In this example, each existing row in the items table has a default value of 2.00 for the item_weight column. For more information about the options of the DEFAULT clause, refer to DEFAULT Clause on page 2-174.
{ UNIQUE | DISTINCT | PRIMARY KEY | REFERENCES Clause (see page 2-49) | CHECK Clause (see page 2-51) }
You cannot specify a primary-key or unique constraint on a new column if the table contains data. In the case of a unique constraint, however, the table can contain a single row of data. When you want to add a column with a primary-key constraint, the table must be empty when you issue the ALTER TABLE statement.

The following rules apply when you place primary-key or unique constraints on existing columns:
v When you place a primary-key or unique constraint on a column or set of columns, the database server creates an internal B-tree index on the constrained column or set of columns unless a user-created index was defined on the column or set of columns.
v When you place a primary-key or unique constraint on a column or set of columns, and a unique index already exists on that column or set of columns, the constraint shares the index. If the existing index allows duplicates, however, the database server returns an error. You must then drop the existing index before you add the constraint.
v When you place a primary-key or unique constraint on a column or set of columns, and a referential constraint already exists on that column or set of columns, the duplicate index is upgraded to UNIQUE (if possible), and the index is shared.

You cannot place a unique constraint on a BYTE or TEXT column, nor can you place referential constraints on columns of these types. A check constraint on a BYTE or TEXT column can check only for IS NULL, IS NOT NULL, or LENGTH.

When you place a referential constraint on a column or set of columns, and an index already exists on that column or set of columns, the index is upgraded to UNIQUE (if possible) and the index is shared.
Constraint Definition
In Dynamic Server, use the Constraint Definition portion of the ALTER TABLE statement to declare the name of a constraint and to set the mode of the constraint to disabled, enabled, or filtering. In Extended Parallel Server, use the Constraint Definition portion of the ALTER TABLE statement to declare the name of a constraint. Constraint Definition:
CONSTRAINT constraint
Element      Description
constraint   Name declared here for the constraint
For more information about constraint-mode options, see Choosing a Constraint-Mode Option (IDS) on page 2-183.
REFERENCES Clause
The REFERENCES clause has the following syntax. REFERENCES Clause:
REFERENCES table [ ( column [ , column ] ... ) ] [ ON DELETE CASCADE ]

Element   Description                                  Restrictions                                                            Syntax
column    Referenced column in the referenced table    See Restrictions on Referential Constraints on page 2-50                Identifier, p. 5-22
table     The referenced table                         The referenced and the referencing tables must reside in the same      Database Object Name, p. 5-17
                                                       database
The REFERENCES clause allows you to place a foreign-key reference on a column. The referenced column can be in the same table as the referencing column, or in a different table in the same database.
If the referenced table is different from the referencing table, the default column is the primary-key column. If the referenced table is the same as the referencing table, there is no default.
When you place a referential constraint on a column or set of columns, and a duplicate or unique index already exists on that column or set of columns, the index is shared.
For example, in the stores_demo database, the stock table contains the stock_num column as a primary key. The catalog table refers to the stock_num column as a foreign key. The following ALTER TABLE statements drop an existing foreign-key constraint (without cascading delete) and add a new constraint that specifies cascading deletes:
ALTER TABLE catalog DROP CONSTRAINT aa

ALTER TABLE catalog ADD CONSTRAINT
   (FOREIGN KEY (stock_num, manu_code) REFERENCES stock
    ON DELETE CASCADE CONSTRAINT ab)
With cascading deletes specified on the child table, in addition to deleting a stock item from the stock table, the delete cascades to the catalog table that is associated with the stock_num foreign key. This cascading delete works only if the stock_num that you are deleting was not ordered; otherwise, the constraint from the items table would disallow the cascading delete. For more information, see Restrictions on DELETE When Tables Have Cascading Deletes on page 2-277. If a table has a trigger with a DELETE trigger event, you cannot define a cascading-delete referential constraint on that table. You receive an error when you attempt to add a referential constraint that specifies ON DELETE CASCADE to a table that has a delete trigger. For information about syntax restrictions and locking implications when you delete rows from tables that have cascading deletes, see Considerations When Tables Have Cascading Deletes on page 2-276.
CHECK Clause
A check constraint designates a condition that must be met before data can be inserted into a column. CHECK Clause:
CHECK ( Condition )
During an insert or update, if a row returns false for any check constraint defined on a table, the database server returns an error. No error is returned, however, if a row returns NULL for a check constraint. In some cases, you might want to use both a check constraint and a NOT NULL constraint. Check constraints are defined using search conditions. The search condition cannot contain user-defined routines, subqueries, aggregates, host variables, or rowids. In addition, the condition cannot contain the variant built-in functions CURRENT, USER, SITENAME, DBSERVERNAME, or TODAY.
The check constraint cannot include columns in different tables. When you are using the ADD or MODIFY clause, the check constraint cannot depend upon values in other columns of the same table. The next example adds a new unit_price column to the items table and includes a check constraint to ensure that the entered value is greater than 0:
ALTER TABLE items ADD (unit_price MONEY (6,2) CHECK (unit_price > 0) )
To create a constraint that checks values in more than one column, use the ADD CONSTRAINT clause. The following example builds a constraint on the column that was added in the previous example. The check constraint now spans two columns in the table.
ALTER TABLE items ADD CONSTRAINT CHECK (unit_price < total_price)
DROP Clause
Use the DROP clause to drop one or more columns from a table. DROP Clause:
DROP ( column [ , column ] ... )

Element   Restrictions
column    Must exist in the table. No fragment expression can reference the column, and it cannot be the last column in the table.
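For example, the following statement (a sketch; it assumes the item_weight column that an earlier example added is no longer needed) drops one column from the items table:

ALTER TABLE items DROP (item_weight)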
You cannot issue an ALTER TABLE DROP statement that would drop every column from the table. At least one column must remain in the table. You cannot drop a column that is part of a fragmentation strategy. In Extended Parallel Server, you cannot use the DROP clause if the table has a dependent GK index.
After the ALTER TABLE statement, tab2 has only one column. The col1trig trigger is invalidated because the action clause as it is currently defined with values for two columns cannot occur. If you drop a column that occurs in the triggering column list of an UPDATE trigger, the database server drops the column from the triggering column list. If the column is the only member of the triggering column list, the database server drops the trigger from the table. For more information on triggering columns in an UPDATE trigger, see CREATE TRIGGER on page 2-216. If a trigger is invalidated when you alter the underlying table, drop and then re-create the trigger.
Modify Column Clause:
column Data Type [ DEFAULT Clause ]

Element   Description
column    Column to modify
In Extended Parallel Server, you cannot use the MODIFY clause if the table has a dependent GK index. You cannot change the data type of a column to a COLLECTION or a ROW type. When you modify a column, all attributes previously associated with the column (that is, default value, single-column check constraint, or referential constraint) are dropped. When you want certain attributes of the column to remain, such as PRIMARY KEY, you must re-specify those attributes. For example, if you are changing the data type of an existing column, quantity, to SMALLINT, but you want to keep the default value (in this case, 1) and the NOT NULL column attribute, you can issue this statement:
ALTER TABLE items MODIFY (quantity SMALLINT DEFAULT 1 NOT NULL)
Tip: Both attributes are specified again in the modify clause. When you change the data type of a column, the database server does not perform the modification in place. The next example (for Dynamic Server only) changes a VARCHAR(15) column to an LVARCHAR(3072) column:
ALTER TABLE stock MODIFY (description LVARCHAR(3072))
When you modify a column that has column constraints associated with it, the following constraints are dropped:
v All single-column constraints are dropped.
v All referential constraints that reference the column are dropped.
v If the modified column is part of a multiple-column primary-key or unique constraint, all referential constraints that reference the multiple columns also are dropped.
For example, if you modify a column that has a unique constraint, the unique constraint is dropped. If this column was referenced by columns in other tables,
those referential constraints are also dropped. In addition, if the column is part of a multiple-column primary-key or unique constraint, the multiple-column constraints are not dropped, but any referential constraints placed on the column by other tables are dropped. For another example, suppose that a column is part of a multiple-column primary-key constraint. This primary key is referenced by foreign keys in two other tables. When this column is modified, the multiple-column primary-key constraint is not dropped, but the referential constraints placed on it by the two other tables are dropped.
As an alternative, you can use the INSERT statement to create a gap in the series of serial values in the column. For more information, see Inserting Values into Serial Columns on page 2-400. Altering the Next Serial Value in a Typed Table (IDS): You can set the initial serial number or modify the next serial number for a ROW-type field with the MODIFY clause of the ALTER TABLE statement. (You cannot set the initial number for a serial field when you create a ROW data type.) Suppose you have ROW types parent, child1, child2, and child3.
CREATE ROW TYPE parent (a int);
CREATE ROW TYPE child1 (s serial) UNDER parent;
CREATE ROW TYPE child2 (b float, s8 serial8) UNDER child1;
CREATE ROW TYPE child3 (d int) UNDER child2;
To change the next SERIAL and SERIAL8 numbers to 75, you can issue the following statement:
ALTER TABLE child3 MODIFY (s serial(75), s8 serial8(75))
When the ALTER TABLE statement executes, the database server updates corresponding serial columns in the child1, child2, and child3 tables.
When a primary-key or unique constraint exists, however, conversion takes place only if it does not violate the constraint. If a data type conversion would result in duplicate values (by changing FLOAT to SMALLFLOAT, for example, or by truncating CHAR values), the ALTER TABLE statement fails.
2. Issue the ALTER TABLE statement again, but this time specify the DISABLED keyword in the MODIFY clause.
3. Start a violations table and a diagnostics table for the target table with the START VIOLATIONS TABLE statement.
4. Issue the SET CONSTRAINTS statement to switch the database object mode of the constraint to the enabled mode. When you issue this statement, existing rows in the target table that violate the constraint are duplicated in the violations table; however, you receive an integrity-violation error message, and the constraint remains disabled.
5. Issue a SELECT statement on the violations table to retrieve the nonconforming rows that are duplicated from the target table. You might need to join the violations and diagnostics tables to get all the necessary information.
6. Take corrective action on the rows in the target table that violate the constraint.
After you fix all the nonconforming rows in the target table, issue the SET statement again to enable the constraint that was disabled. Now the constraint is enabled, and no integrity-violation error message is returned because all rows in the target table now satisfy the new constraint.
After the ALTER TABLE statement, column i4 accepts only character values. Because character columns accept only values enclosed in quotation marks, the action clause of the col1trig trigger is invalidated. If a trigger is invalidated when you modify the underlying table, drop and then re-create the trigger.
Storage options (specify one or more, separated by commas, within parentheses):
   EXTENT SIZE kilobytes | LOG | NO LOG | HIGH INTEG | KEEP ACCESS TIME | NO KEEP ACCESS TIME

Element     Description                                            Restrictions
column      Column to store in the specified sbspace               Must be a UDT, or a complex, BLOB, or CLOB data type
kilobytes   Number of kilobytes to allocate for the extent size    Must be an integer value
sbspace     Name of an area of storage for smart large objects     The sbspace must exist
When you modify the storage characteristics of a column, all attributes previously associated with the storage space for that column are dropped. When you want certain attributes to remain, you must specify those attributes again. For example, to retain logging, you must specify the log keyword again. The format column.field is not valid here. That is, the smart large object that you are storing cannot be one field of a row type. When you modify the storage characteristics of a column that holds smart large objects, the database server does not alter smart large objects that already exist, but applies the new storage characteristics only to those smart large objects that are inserted after the ALTER TABLE statement takes effect. For more information on the available storage characteristics, refer to the counterpart of this section in the CREATE TABLE statement, PUT Clause (IDS) on page 2-199. For a discussion of large-object characteristics, refer to Large-Object Data Types on page 4-25.
For example, to add a unique constraint to the fname and lname columns of the customer table, use the following statement:
ALTER TABLE customer ADD CONSTRAINT UNIQUE (lname, fname)
When you do not specify a name for a new constraint, the database server provides one. You can find the name of the constraint in the sysconstraints system catalog table. For more information about the sysconstraints system catalog table, see the IBM Informix Guide to SQL: Reference. When you add a constraint, the collating order must be the same as when the table was created.
Element   Description                                   Restrictions               Syntax
column    A column on which the constraint is placed    No more than 16 columns    Identifier, p. 5-22
A multiple-column constraint has these restrictions:
v It can include no more than 16 column names.
v In Dynamic Server, the total length of the list of columns cannot exceed 390 bytes.
v In Extended Parallel Server, the total length of the list of columns cannot exceed 255 bytes.
You can declare a name for the constraint and set its mode by means of Constraint Definition on page 2-49.
Element      Description
constraint   Name of the constraint to be dropped
To drop an existing constraint, specify the DROP CONSTRAINT keywords and the name of the constraint. Here is an example of dropping a constraint:
ALTER TABLE manufact DROP CONSTRAINT con_name
If no name is specified when the constraint is created, the database server generates the name. You can query the sysconstraints system catalog table for the name and owner of a constraint. For example, to find the name of the constraint placed on the items table, you can issue the following statement:
SELECT constrname FROM sysconstraints
   WHERE tabid = (SELECT tabid FROM systables WHERE tabname = 'items')
When you drop a primary-key or unique constraint that has a corresponding foreign key, the referential constraints are dropped. For example, if you drop the primary-key constraint on the order_num column in the orders table and order_num exists in the items table as a foreign key, that referential relationship is also dropped.

You cannot use the MODIFY NEXT SIZE clause of the ALTER TABLE statement to change the size of the next extent of any system catalog table, even if you are user informix.
Element     Description                                                              Restrictions
kilobytes   Length (in kilobytes) assigned here to the next extent for this table    Cannot be a variable; must satisfy 4 x (page size) <= kilobytes <= (chunk size)
The minimum extent size is 4 times the disk-page size. For example, on a system with 2-kilobyte pages, the minimum length is 8 kilobytes. The maximum length is equal to the chunk size. This example specifies an extent size of 32 kilobytes:
ALTER TABLE customer MODIFY NEXT SIZE 32
This clause cannot change the size of existing extents. To change the size of existing extents, you must unload all of the data, drop the table, modify the first-extent and next-extent sizes in the CREATE TABLE definition in the database schema, re-create the database, and reload the data. For information about how to optimize extent sizes, see your IBM Informix Administrator's Guide.
The following table describes the locking-granularity options available.

Granularity   Effect
PAGE          Obtains and releases one lock on a whole page of rows. This is the default locking granularity. Page-level locking is especially useful when you know that the rows are grouped into pages in the same order that you are using to process all the rows. For example, if you are processing the contents of a table in the same order as its cluster index, page locking is especially appropriate.
ROW           Obtains and releases one lock per row. Row-level locking provides the highest level of concurrency. If you are using many rows at one time, the lock-management overhead can become significant. You can also exceed the maximum number of locks available, depending on the configuration of your database server.
TABLE         (XPS only) Places a lock on the entire table. This type of lock reduces update concurrency in comparison to row and page locks. A table lock reduces the lock-management overhead for a table. Multiple read-only transactions can still access the table.
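For example, the following statement (a sketch that uses the demonstration customer table) switches the table to row-level locking:

ALTER TABLE customer LOCK MODE (ROW)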
Guide to SQL: Reference. For information about the DEF_TABLE_LOCKMODE configuration parameter, refer to the IBM Informix Dynamic Server Administrator's Reference.
STANDARD, the default option of the CREATE TABLE statement, is used for online transaction processing (OLTP) databases. The OPERATIONAL and STATIC options are used primarily to improve performance in data warehousing databases. A table can have any of the following logging characteristics.

Option        Effect
STANDARD      Logging table that allows rollback, recovery, and restoration from archives. This is the default. Use this type for recovery and constraints functionality on OLTP databases.
RAW           Nonlogging table that cannot have indexes or referential constraints but can be updated. Use this type for quickly loading data. In XPS, raw tables take advantage of light appends and avoid the overhead of logging, checking constraints, and building indexes.
OPERATIONAL   Logging table that uses light appends and cannot be restored from archive. Use this type on tables that are refreshed frequently. Light appends allow the quick addition of many rows.
STATIC        Nonlogging table that can contain index and referential constraints but cannot be updated. Use this type for read-only operations because there is no logging or locking overhead.
Warning: Use raw tables for fast loading of data. It is recommended that you alter the logging type to STANDARD and perform a level-0 backup before you use the table in a transaction or modify the data within the table. If you must use a raw table within a transaction, either set the isolation level to Repeatable Read or lock the table in exclusive mode to prevent concurrency problems.
The Logging TYPE option can convert a non-logging table, such as a RAW table, to a STANDARD table that supports transaction logging. If you use this feature, you should be aware that the database server does not check to see whether a level 0 archive has been performed on the table. Operations on a RAW table are not logged and are not recoverable, so RAW tables are always at risk. When the database server converts a table that is not logged to a STANDARD table type, it is your responsibility to perform a level-0 backup before using the table in a transaction or modifying data in the table. Failure to do this might lead to recovery problems in the event of a server crash. For more information on these logging types of tables, refer to your IBM Informix Administrator's Guide.

The Logging TYPE options have the following restrictions:
v You must perform a level-0 archive before the logging type of a table can be altered to STANDARD from any other logging type.
v If you want to change the logging type of a table to RAW, you must drop all indexes on the table before you do so.
v If you have triggers defined on the table, you cannot change the logging type to RAW or STATIC. These nonlogging table types do not support triggers.
v The table cannot be a SCRATCH or TEMP table, and you cannot change any of these types of tables to a SCRATCH or TEMP table.
v The table cannot have a dependent generalized-key (GK) index.
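For example, a load-then-log workflow might look like the following sketch (the table name history is hypothetical):

ALTER TABLE history TYPE (RAW);
-- load the data quickly, then restore logging:
ALTER TABLE history TYPE (STANDARD);
-- perform a level-0 backup before using the table in transactions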
Element    Description                                                Restrictions                                                     Syntax
row_type   Identifier of an existing named ROW data type for the      The row_type fields must match the column data types in their   Identifier, p. 5-22
           table                                                      order and number
When you use the ADD TYPE clause, you assign the specified named ROW data type to a table whose columns match the fields of that data type. In addition to the requirements common to all ALTER TABLE operations (namely DBA privilege on the database, Alter privilege on the table, and ownership of the table), all of the following must also be true when you use the ADD TYPE clause to convert an untyped table to the specified named ROW data type:
v The named ROW data type is already registered in the database.
v You hold the Usage privilege on the named ROW data type.
v There must be a 1-to-1 correspondence between the ordered set of column data types of the untyped table and the ordered set of field data types of the named ROW data type.
v The table cannot be a fragmented table that has rowid values.
You cannot combine the ADD TYPE clause with any clause that changes the schema of the table. No other ADD, DROP, or MODIFY clause is valid in the same ALTER TABLE statement that has the ADD TYPE clause. The ADD TYPE clause does not allow you to change column data types. (To change the data type of a column, use the MODIFY clause.)
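For instance, assigning a registered row type to a matching untyped table might look like this sketch (the table name employee and row type employee_t are hypothetical):

ALTER TABLE employee ADD TYPE employee_t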
The remaining clauses in this diagram are described earlier in this section: the ADD CONSTRAINT clause (page 2-59), the DROP CONSTRAINT clause (page 2-61), the MODIFY NEXT SIZE clause (page 2-61; use no more than once), and the LOCK MODE clause (page 2-62).
Related Information
Related statements: CREATE TABLE, DROP TABLE, LOCK TABLE, and SET Database Object Mode. For discussions of data-integrity constraints and the ON DELETE CASCADE option, see the IBM Informix Guide to SQL: Tutorial. For a discussion of database and table creation, see the IBM Informix Database Design and Implementation Guide. For information on how to maximize performance when you make table modifications, see your IBM Informix Performance Guide.
BEGIN WORK
Use the BEGIN WORK statement to start a transaction (a series of database operations that the COMMIT WORK or ROLLBACK WORK statement terminates). This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
BEGIN [ WORK ] [ WITHOUT REPLICATION ]
Usage
The BEGIN WORK statement is valid only in a database that supports transaction logging. Each row that an UPDATE, DELETE, or INSERT statement affects during a transaction is locked and remains locked throughout the transaction. A transaction that contains many such statements or that contains statements that affect many rows can exceed the limits that your operating system or the database server configuration imposes on the number of simultaneous locks. If no other user is accessing the table, you can avoid locking limits and reduce locking overhead by locking the table with the LOCK TABLE statement after you begin the transaction. Like other locks, this table lock is released when the transaction terminates. The example of a transaction in Example of BEGIN WORK on page 2-67 includes a LOCK TABLE statement.

Important: Issue the BEGIN WORK statement only if a transaction is not in progress. If you issue a BEGIN WORK statement while you are in a transaction, the database server returns an error.

The WORK keyword is optional. The following two statements are equivalent:
BEGIN;
BEGIN WORK;
In ESQL/C, if you use the BEGIN WORK statement within a UDR called by a WHENEVER statement, specify WHENEVER SQLERROR CONTINUE and WHENEVER SQLWARNING CONTINUE before the ROLLBACK WORK statement. These statements prevent the program from looping endlessly if the ROLLBACK WORK statement encounters an error or a warning.
v CREATE DATABASE
v ROLLBACK WORK
The database server returns an error when you use a BEGIN WORK statement after any other statement in an ANSI-compliant database.
The database server must perform this sequence of operations either completely or not at all. When you include all of these operations within a single transaction, the database server guarantees that all the statements are completely and perfectly committed to disk, or else the database is restored to the same state that it was in before the transaction began.
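As an illustration only (the table and column names come from the stores_demo demonstration database, but the statement itself is a hypothetical sketch), such a transaction might look like this:

BEGIN WORK;
LOCK TABLE stock IN EXCLUSIVE MODE;
UPDATE stock SET unit_price = unit_price * 1.05
   WHERE manu_code = 'HRO';
COMMIT WORK;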
Related Information
Related statements: COMMIT WORK and SAVE EXTERNAL DIRECTIVES. For discussions of transactions and locking, see the IBM Informix Guide to SQL: Tutorial.
CLOSE
Use the CLOSE statement when you no longer need to refer to the rows that a select or function cursor retrieved, or to flush and close an insert cursor. Use this statement with ESQL/C.
Syntax
CLOSE { cursor_id | cursor_id_var }

Element         Description                                           Restrictions                           Syntax
cursor_id       Name of cursor to be closed                           Must have been declared                Identifier on page 5-22
cursor_id_var   Host variable that contains the value of cursor_id    Must be of a character data type       Must conform to language-specific rules for names
Usage
Closing a cursor makes the cursor unusable for any statements except OPEN or FREE and releases resources that the database server had allocated to the cursor. A CLOSE statement treats a cursor that is associated with an INSERT statement differently from one that is associated with a SELECT or EXECUTE FUNCTION (or EXECUTE PROCEDURE) statement. In a database that is not ANSI-compliant, you can close a cursor that has not been opened or that has already been closed. No action is taken in these cases. In an ANSI-compliant database, the database server returns an error if you close a cursor that was not open.
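For example, a typical sequence of cursor statements might look like the following sketch (the cursor name q_items is hypothetical; the items table is from the demonstration database):

DECLARE q_items CURSOR FOR
   SELECT item_num, total_price FROM items;
OPEN q_items;
-- FETCH rows from q_items as needed
CLOSE q_items;
FREE q_items;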
The number of rows that were successfully inserted into the database is returned in the third element of the sqlerrd array, sqlca.sqlerrd[2], in the sqlca structure. For information on how to use SQLERRD to count the total number of rows that were inserted, see Error Checking on page 2-447. The SQLCODE field of the sqlca structure, sqlca.sqlcode, indicates the result of the CLOSE statement for an insert cursor. If all buffered rows are successfully inserted, SQLCODE is set to zero. If an error is encountered, SQLCODE is set to a negative error message number. When SQLCODE is zero, the row buffer space is released, and the cursor is closed; that is, you cannot execute a PUT or FLUSH statement that names the cursor until you reopen it.

Tip: When you encounter an SQLCODE error, a corresponding SQLSTATE error value also exists. For information about how to get the message text, check the GET DIAGNOSTICS statement.

If the insert is not successful, the number of successfully inserted rows is stored in sqlerrd. Any buffered rows that follow the last successfully inserted row are discarded. Because the insert fails, the CLOSE statement fails also, and the cursor is not closed. For example, a CLOSE statement can fail if insufficient disk space prevents some of the rows from being inserted. In this case, a second CLOSE statement can be successful because no buffered rows exist. An OPEN statement can also be successful because the OPEN statement performs an implicit close.
Related Information
Related statements: DECLARE, FETCH, FLUSH, FREE, OPEN, PUT, and SET AUTOFREE. For an introductory discussion of cursors, see the IBM Informix Guide to SQL: Tutorial. For a more advanced discussion of cursors, see the IBM Informix ESQL/C Programmer's Manual.
CLOSE DATABASE
Use the CLOSE DATABASE statement to close the current database. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CLOSE DATABASE
Usage
When you issue a CLOSE DATABASE statement, you can issue only the following SQL statements immediately after it:
v CONNECT
v CREATE DATABASE
v DATABASE
v DROP DATABASE
v DISCONNECT (The DISCONNECT statement is valid here only if an explicit connection existed before CLOSE DATABASE was executed.)
Issue the CLOSE DATABASE statement before you drop the current database. If your current database supports transaction logging, and if you have started a transaction, you must issue a COMMIT WORK or ROLLBACK WORK statement before you can use the CLOSE DATABASE statement. The following example shows how to use the CLOSE DATABASE statement to drop the current database:
DATABASE stores_demo
. . .
CLOSE DATABASE
DROP DATABASE stores_demo
In ESQL/C, the CLOSE DATABASE statement cannot appear in a multistatement PREPARE operation. If a previous CONNECT statement has established an explicit connection to a database, and that connection is still your current connection, you cannot use the CLOSE DATABASE statement to close that explicit connection. (You can use the DISCONNECT statement to close the explicit connection.) If you use the CLOSE DATABASE statement within a UDR called by a WHENEVER statement, specify WHENEVER SQLERROR CONTINUE and WHENEVER SQLWARNING CONTINUE before the ROLLBACK WORK statement. This action prevents the program from looping if the ROLLBACK WORK statement encounters an error or a warning. When you issue the CLOSE DATABASE statement, any declared cursors are no longer valid. You must re-declare any cursors that you want to use.
In an ANSI-compliant database, if no error is encountered while you exit from DB-Access in non-interactive mode without issuing the CLOSE DATABASE, COMMIT WORK, or DISCONNECT statement, the database server automatically commits any open transaction.
Related Information
Related statements: CONNECT, CREATE DATABASE, DATABASE, DISCONNECT, and DROP DATABASE
COMMIT WORK
Use the COMMIT WORK statement to commit all modifications made to the database from the beginning of a transaction.
Syntax
COMMIT [ WORK ]
Usage
The COMMIT WORK statement informs the database server that you reached the end of a series of statements that must succeed as a single unit. The database server takes the required steps to make sure that all modifications that the transaction makes are completed correctly and saved to disk. Use COMMIT WORK only at the end of a multistatement operation in a database with transaction logging, when you are sure that you want to keep all changes made to the database from the beginning of a transaction. The COMMIT WORK statement releases all row and table locks. The WORK keyword is optional in a COMMIT WORK statement. The following two statements are equivalent:
COMMIT;
COMMIT WORK;
The following example shows a transaction bounded by BEGIN WORK and COMMIT WORK statements.
BEGIN WORK;
DELETE FROM call_type WHERE call_code = 'O';
INSERT INTO call_type VALUES ('S', 'order status');
COMMIT WORK;
In this example, the user first deletes the row from the call_type table where the value of the call_code column is O. The user then inserts a new row in the call_type table where the value of the call_code column is S. The database server guarantees that both operations succeed or else neither succeeds. In ESQL/C, the COMMIT WORK statement closes all open cursors except those that were declared using the WITH HOLD option.
Explicit DB-Access Transactions: When you use DB-Access in interactive mode with a database that is not ANSI-compliant but that supports transaction logging, if you select the Commit menu but do not issue the COMMIT WORK statement after a transaction has been started by the BEGIN WORK statement, DB-Access automatically commits the data, but issues the following warning:
Warning: Data commit is a result of unhandled exception in TXN PROC/FUNC
The purpose of this warning is to remind you to issue COMMIT WORK explicitly to end a transaction that BEGIN WORK initiated. In non-interactive mode, however, DB-Access rolls back the current transaction if you end a session without issuing the COMMIT WORK statement.
Related Information
Related statements: BEGIN WORK, SAVE EXTERNAL DIRECTIVES, and DECLARE. For a discussion of concepts related to transactions, see the IBM Informix Guide to SQL: Tutorial.
CONNECT
Use the CONNECT statement to connect to a database environment. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CONNECT TO { Database Environment [ AS { 'connection' | connection_var } ] | DEFAULT } [ USER Clause ]

Notes:
1  See Database Environment on page 2-78
2  ESQL/C only
3  ESQL/C and DB-Access only
4  See USER Clause on page 2-80

Element          Description                                              Restrictions                                  Syntax
connection       Case-sensitive name that you declare here for a          Must be unique among connection names         Quoted String on page 4-142
                 connection
connection_var   Host variable that stores the name of a connection       Must be a fixed-length character data type    Language specific
Usage
The CONNECT statement connects an application to a database environment, which can be a database, a database server, or a database and a database server. If the application successfully connects to the specified database environment, the connection becomes the current connection for the application. SQL statements fail if the application has no current connection to a database server. If you specify a database name, the database server opens that database. You cannot include CONNECT within a PREPARE statement. An application can connect to several database environments at the same time, and it can establish multiple connections to the same database environment, provided each connection has a unique connection name. On UNIX, the only restriction on establishing multiple connections to the same database environment is that an application can establish only one connection to each local server that uses the shared-memory connection mechanism. To find out whether a local server uses the shared-memory connection mechanism or the local-loopback connection mechanism, examine the $INFORMIXDIR/etc/sqlhosts file. For more information on the sqlhosts file, refer to your IBM Informix Administrator's Guide.
On Windows, the local connection mechanism is named pipes. Multiple connections to the local server from one client can exist. Only one connection is current at any time; other connections are dormant. The application cannot interact with a database through a dormant connection. When an application establishes a new connection, that connection becomes current, and the previous current connection becomes dormant. You can make a dormant connection current with the SET CONNECTION statement. See also SET CONNECTION on page 2-534.
Connection Identifiers
The optional connection name is a unique identifier that an application can use to refer to a connection in subsequent SET CONNECTION and DISCONNECT statements. If the application does not provide a connection name (or a connection-host variable), it can refer to the connection using the database environment. If the application makes more than one connection to the same database environment, however, each connection must have a unique name. After you associate a connection name with a connection, you can refer to the connection using only that connection name.
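For example, the following sketch (the server name srv_a and connection name conn1 are hypothetical) opens the demonstration database and names the connection so that it can be referenced later:

CONNECT TO 'stores_demo@srv_a' AS 'conn1'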
Connection Context
Each connection encompasses a set of information that is called the connection context. The connection context includes the name of the current user, the information that the database environment associates with this name, and information on the state of the connection (such as whether an active transaction is associated with the connection). The connection context is saved when an application becomes dormant, and this context is restored when the application becomes current again. (For more information, see Making a Dormant Connection the Current Connection on page 2-534.)
DEFAULT Option
Use the DEFAULT option to request a connection to a default database server, called a default connection. The default database server can be local or remote. To designate the default database server, set its name in the INFORMIXSERVER environment variable. This option of CONNECT does not open a database. If you select the DEFAULT option for the CONNECT statement, you must use the DATABASE statement or the CREATE DATABASE statement to open or create a database in the default database environment.
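A minimal sketch of this sequence, assuming the INFORMIXSERVER environment variable names your default database server:

CONNECT TO DEFAULT
DATABASE stores_demo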
v DROP DATABASE
If one of these database statements is the first SQL statement in an application, the statement establishes a connection to a database server, which is known as an implicit connection. If the database statement specifies only a database name, the database server name is obtained from the DBPATH environment variable. This situation is described in Specifying the Database Environment on page 2-79. An application that makes an implicit connection can establish other connections explicitly (using the CONNECT statement) but cannot establish another implicit connection unless the original implicit connection is closed. An application can terminate an implicit connection using the DISCONNECT statement. After you create an explicit connection, you cannot use any database statement to create implicit connections until after you close the explicit connection.

After any implicit connection is made, that connection is considered to be the default connection, regardless of whether the database server is the default that the INFORMIXSERVER environment variable specifies. This feature allows the application to refer to the implicit connection if additional explicit connections are made, because the implicit connection has no identifier. For example, if you establish an implicit connection followed by an explicit connection, you can make the implicit connection current by issuing the SET CONNECTION DEFAULT statement. This means, however, that once you establish an implicit connection, you cannot use the CONNECT DEFAULT statement, because the implicit connection is now the default connection. The database statements can always be used to open a database or create a new database on the current database server.
/* Execute SQL statements in connection 'C', starting a transaction */
EXEC SQL set connection 'B';   -- switch to connection B
/* Execute SQL statements starting a transaction in B.
   Now there are two active transactions, one each in B and C. */
EXEC SQL set connection 'A';   -- switch to connection A
/* Execute SQL statements starting a transaction in A.
   Now there are three active transactions, one each in A, B and C. */
EXEC SQL set connection 'C';   -- ERROR, transaction active in A
/* SET CONNECTION 'C' fails (current connection is still A).
   The transaction in A must be committed or rolled back because
   connection A was started without the CONCURRENT TRANSACTION clause. */
/* Now there are two active transactions, in B and in C, which must be
   committed or rolled back separately. */
EXEC SQL set connection 'B';   -- switch to connection B
EXEC SQL commit work;          -- commit tx in current connection (B)
EXEC SQL set connection 'C';   -- go back to connection C
EXEC SQL commit work;          -- commit tx in current connection (C)
EXEC SQL disconnect all;
}
Warning: When an application uses the WITH CONCURRENT TRANSACTION clause to establish multiple connections to the same database environment, a deadlock condition can occur.
Database Environment
Database Environment:
{ dbname | @dbservername | dbname@dbservername | db_var }
Element        Description                                                  Restrictions                                                         Syntax
db_var         Host variable that contains a valid database environment     Must be a fixed-length character data type whose contents are in    Language specific
               (in one of the formats in the syntax diagram)                a format from the syntax diagram
dbname         Database to which to connect                                 Must already exist
dbservername   Name of the database server to which a connection is made    Must already exist; blank space is not valid between the @ symbol
                                                                             and dbservername. See also Restrictions on dbservername.
If the DELIMIDENT environment variable is set, any quotation marks in the database environment must be single ( ' ). If DELIMIDENT is not set, then either single ( ' ) or double ( " ) quotation marks are valid here.
Restrictions on dbservername
If you specify dbservername, it must satisfy the following restrictions:
v If the database server that you specify is not online, you receive an error.
v On UNIX, the database server that you specify in dbservername must match the name of a database server in the sqlhosts file.
v On Windows, dbservername must match the name of a database server in the sqlhosts subkey in the registry. It is recommended that you use the setnet32 utility to update the registry.
On Windows, choose Start > Programs > Informix > setnet32 from the Task Bar and set the INFORMIXSERVER and DBPATH environment variables:
set INFORMIXSERVER = srvA
set DBPATH = //srvA://srvB://srvC
The next example shows the resulting DBPATH that your application uses:
//srvA://srvB://srvC
The application first establishes a connection to the database server that INFORMIXSERVER specifies. The database server uses parameters in the configuration file to locate the database. If the database does not reside on the default database server, or if the default database server is not online, the application connects to the next database server in DBPATH. In the previous example, that database server would be srvB.
USER Clause
The USER clause specifies information that is used to determine whether the application can access the target computer on a remote host. USER Clause:
USER { 'user_id' | user_id_var } [ USING validation_var ]

Element          Description                                                    Restrictions
user_id          Login name to use on the target computer                       See Restrictions on the User Identifier Parameter on page 2-81
user_id_var      Host variable that contains the value of user_id               Must be a fixed-length character data type; same restrictions as user_id
validation_var   Host variable that contains a valid password for the login     Must be a fixed-length character data type (language specific). See Restrictions
                 name in user_id or user_id_var                                 on the Validation Variable Parameter on page 2-80.
The USER clause is required when the CONNECT statement connects to the database server on a remote host. Subsequent to the CONNECT statement, all database operations on the remote host use the specified user name. In DB-Access, the USING clause is valid within files executed from DB-Access. In interactive mode, DB-Access prompts you for a password, so the USING keyword and validation_var are not used.
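In an embedded program, a connection with the USER clause might look like the following sketch (the server, database, and user names are hypothetical; passwd_var is a host variable that already holds the password):

CONNECT TO 'payroll@remote_srv' USER 'acct_mgr' USING :passwd_var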
Related Information
Related Statements: DISCONNECT, SET CONNECTION, DATABASE, and CREATE DATABASE. For more information about sqlhosts, refer to your IBM Informix Administrator's Guide.
CREATE ACCESS_METHOD
Use the CREATE ACCESS_METHOD statement to register a new primary or secondary access method in the sysams system catalog table. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE { PRIMARY | SECONDARY } ACCESS_METHOD access_method

Element         Description
access_method   Name declared here for the new access method
Usage
The CREATE ACCESS_METHOD statement adds a user-defined access method to a database. To create an access method, you specify purpose functions (or purpose methods), purpose flags, or purpose values as attributes of the access method, and you associate keywords (based on column names in the sysams system catalog table) with UDRs. You must have the DBA or Resource privilege to create an access method. For information on setting purpose options, including a list of all the purpose function keywords, refer to Purpose Options on page 5-46. The PRIMARY keyword specifies a user-defined primary-access method for a virtual table. The SECONDARY keyword specifies creating a user-defined secondary-access method for a virtual index. The SECONDARY keyword (and creating virtual indexes) is not supported in the Java Virtual-Table Interface. The following statement creates a secondary-access method named T_tree:
CREATE SECONDARY ACCESS_METHOD T_tree
   (am_getnext = ttree_getnext,
    . . .
    am_unique,
    am_cluster,
    am_sptype = 'S');
In the preceding example, the am_getnext keyword in the Purpose Options list is associated with the ttree_getnext( ) UDR as the name of a method to scan for the next item that satisfies a query. This example indicates that the T_tree secondary access method supports unique keys and clustering, and resides in an sbspace. Any UDR that the CREATE ACCESS_METHOD statement associates with the keyword for a purpose function task, such as the association of ttree_getnext( ) with am_getnext in the preceding example, must already have been registered in the database by the CREATE FUNCTION statement (or by a functionally equivalent statement, such as CREATE PROCEDURE FROM). The following statement creates a primary-access method named am_tabprops that resides in an extspace.
CREATE PRIMARY ACCESS_METHOD am_tabprops
   (am_open = FS_open,
    am_close = FS_close,
    am_beginscan = FS_beginScan,
    am_create = FS_create,
    am_scancost = FS_scanCost,
    am_endscan = FS_endScan,
    am_getnext = FS_getNext,
    am_getbyid = FS_getById,
    am_drop = FS_drop,
    am_truncate = FS_truncate,
    am_rowids,
    am_sptype = 'x');
Related Information
Related statements: ALTER ACCESS_METHOD and DROP ACCESS_METHOD. For the schema of the sysams table, see the IBM Informix Guide to SQL: Reference. For information about how to set purpose-option specifications, see Purpose Options on page 5-46. For more information on primary-access methods, see the IBM Informix Virtual-Table Interface Programmer's Guide. For more information on secondary-access methods, see the IBM Informix Virtual-Index Interface Programmer's Guide and the IBM Informix User-Defined Routines and Data Types Developer's Guide. For a discussion of privileges, see the GRANT or REVOKE statements or the IBM Informix Database Design and Implementation Guide.
CREATE AGGREGATE
Use the CREATE AGGREGATE statement to create a new aggregate function and register it in the sysaggregates system catalog table. User-defined aggregates extend the functionality of the database server by performing aggregate computations that the user implements. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE AGGREGATE [ Owner Name . ] aggregate WITH ( Modifiers [ , Modifiers ] ... )

Modifiers:
   INIT = init_func
   ITER = iter_func
   COMBINE = comb_func
   FINAL = final_func
   HANDLESNULLS

Element      Description                                                        Restrictions
aggregate    Name of the new aggregate
comb_func    Function that merges one partial result into the other and         Must specify the combine function, both for parallel queries and
             returns the updated partial result                                 for sequential queries
final_func   Function that converts a partial result into the result type       If this is omitted, then the returned value is the final result of
                                                                                 iter_func
init_func    Function that initializes the data structures required for the     Must be able to handle NULL arguments
             aggregate computation
iter_func    Function that merges a single value with a partial result and      Must specify an iterator function. If init_func is omitted, iter_func
             returns the updated partial result                                 must be able to handle NULL arguments
Usage
You can specify the INIT, ITER, COMBINE, FINAL, and HANDLESNULLS modifiers in any order.
Important: You must specify the ITER and COMBINE modifiers in a CREATE AGGREGATE statement. You do not have to specify the INIT, FINAL, and HANDLESNULLS modifiers in a CREATE AGGREGATE statement. The ITER, COMBINE, FINAL, and INIT modifiers specify the support functions for a user-defined aggregate. These support functions do not have to exist at the time you create the user-defined aggregate. If you omit the HANDLESNULLS modifier, rows with NULL aggregate argument values do not contribute to the aggregate computation. If you include the HANDLESNULLS modifier, you must define all the support functions to handle NULL values as well.
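For context, registering such an average aggregate might look like the following sketch (it uses the support-function names listed in the table below; the exact statement is an assumption):

CREATE AGGREGATE average WITH
   (INIT = average_init,
    ITER = average_iter,
    COMBINE = average_combine,
    FINAL = average_final);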
Before you use the average aggregate in a query, you must also use CREATE FUNCTION statements to create the support functions specified in the CREATE AGGREGATE statement. The following table gives an example of the task that each support function might perform for average.
Modifier   Support Function   Effect
INIT       average_init       Allocates and initializes an extended data type storing the current sum and the current row count
ITER       average_iter       For each row, adds the value of the expression to the current sum and increments the current row count by one
COMBINE    average_combine    Adds the current sum and the current row count of one partial result to the other and returns the updated result
FINAL      average_final      Returns the ratio of the current sum to the current row count and converts this ratio to the result type
Parallel Execution
The database server can break up an aggregate computation into several pieces and compute them in parallel. The database server uses the INIT and ITER support functions to compute each piece sequentially. Then the database server uses the COMBINE function to combine the partial results from all the pieces into a single result value. Whether an aggregate is parallel is an optimization decision that is transparent to the user.
Related Information
Related statements: CREATE FUNCTION and DROP AGGREGATE. For information about how to invoke a user-defined aggregate, see User-Defined Aggregates on page 4-117 in the Expression segment. For a description of the sysaggregates system catalog table that stores data about user-defined aggregates, see the IBM Informix Guide to SQL: Reference. For a discussion of user-defined aggregates, see IBM Informix User-Defined Routines and Data Types Developer's Guide.
CREATE CAST
Use the CREATE CAST statement to register a cast that converts data from one data type to another. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE [ EXPLICIT | IMPLICIT ] CAST ( source_type AS target_type [ WITH function ] )

Element       Description                                     Restrictions                                                          Syntax
function      UDR that you register to implement the cast                                                                           Database Object Name on page 5-17
source_type   Data type to be converted                       Must exist in the database at the time the cast is registered. See   Data Type on page 4-18
                                                              also Source and Target Data Types on page 2-87.
target_type   Data type that results from the cast            The same restrictions that apply for the source_type also apply      Data Type on page 4-18
                                                              for the target_type
Usage
A cast is a mechanism that the database server uses to convert one data type to another. The database server uses casts to perform the following tasks: v To compare two values in the WHERE clause of a SELECT, UPDATE, or DELETE statement v To pass values as arguments to user-defined routines v To return values from user-defined routines To create a cast, you must have the necessary privileges on both the source data type and the target data type. All users have access privileges to use the built-in data types. To create a cast to or from an OPAQUE, DISTINCT, or named ROW data type, however, requires the Usage privilege on that data type. The CREATE CAST statement registers a cast in the syscasts system catalog table. For more information on syscasts, see the chapter on system catalog tables in the IBM Informix Guide to SQL: Reference.
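As an illustration, registering an explicit cast between two user-defined types might look like the following sketch (the type names rate_of_return and percent come from the discussion below; the cast-function name rate_to_prcnt is an assumption):

CREATE EXPLICIT CAST (rate_of_return AS percent WITH rate_to_prcnt)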
The following SELECT statement explicitly invokes this explicit cast in its WHERE clause to compare the bond_rate column (of type rate_of_return) to the initial_APR column (of type percent):
SELECT bond_rate FROM bond WHERE bond_rate::percent > initial_APR
Implicit Casts: The database server invokes built-in casts to convert from one built-in data type to another built-in type that is not directly substitutable. For example, the database server performs conversion of a character type such as CHAR to a numeric type such as INTEGER through a built-in cast. An implicit cast is a cast that the database server can invoke automatically when it encounters data types that cannot be compared with built-in casts. This type of cast enables the database server to automatically handle conversions between other data types. To define an implicit cast, specify the IMPLICIT keyword in the CREATE CAST statement. For example, the following CREATE CAST statement specifies that the database server should automatically use the prcnt_to_char( ) function to convert from the CHAR data type to a distinct data type, percent:
CREATE IMPLICIT CAST (CHAR AS percent WITH char_to_prcnt)
This cast only supports automatic conversion from the CHAR data type to percent. For the database server to convert from percent to CHAR, you also need to define another implicit cast, as follows:
CREATE IMPLICIT CAST (percent AS CHAR WITH prcnt_to_char)
The database server automatically invokes the char_to_prcnt( ) function to evaluate the WHERE clause of the following SELECT statement:
SELECT commission FROM sales_rep WHERE commission > "25%"
Users can also invoke implicit casts explicitly. For more information on how to explicitly invoke a cast function, see Explicit Casts on page 2-88. When a built-in cast does not exist for conversion between data types, you can create user-defined casts to make the necessary conversion.
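As a sketch of such an explicit invocation, the implicit CHAR-to-percent cast defined above can also be applied with the cast operator; the following query simply restates the earlier example with an explicit cast:
SELECT commission FROM sales_rep WHERE commission > "25%"::percent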
WITH Clause
The WITH clause of the CREATE CAST statement specifies the name of the user-defined function to invoke to perform the cast. This function is called the cast function. You must specify a function name unless the source data type and the target data type have identical representations. Two data types have identical representations when the following conditions are met:
v Both data types have the same length and alignment.
v Both data types are passed by reference or both are passed by value.
The cast function must be registered in the same database as the cast at the time the cast is invoked, but need not exist when the cast is created. The CREATE CAST statement does not check privileges on the specified function name, or even verify that the cast function exists. Each time a user invokes the cast explicitly or implicitly, the database server verifies that the user has the Execute privilege on the cast function.
Related Information
Related statements: CREATE FUNCTION, CREATE DISTINCT TYPE, CREATE OPAQUE TYPE, CREATE ROW TYPE, and DROP CAST
For more information about data types, casting, and conversion, see the Data Types segment in this manual and the IBM Informix Guide to SQL: Reference. For examples that show how to create and use casts, see the IBM Informix Database Design and Implementation Guide.
CREATE DATABASE
Use the CREATE DATABASE statement to create a new database. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE DATABASE database [ IN dbspace ] [ WITH [ BUFFERED ] LOG | WITH LOG MODE ANSI ]
database: Name that you declare here for the new database that you are creating. Must be unique among names of databases of the database server.
dbspace: The dbspace to store the data for this database; the default is the root dbspace. Must already exist.
Usage
This statement is an extension to ANSI-standard syntax. (The ANSI/ISO standard for the SQL language does not specify any syntax for construction of a database, the process by which a database comes into existence.) The database that CREATE DATABASE specifies becomes the current database.
The database name that you declare must be unique within the database server environment in which you are working. The database server creates the system catalog tables that describe the structure of the new database.
When you create a database, you alone can access it. It remains inaccessible to other users until you, as DBA, grant database privileges. For information on how to grant database privileges, see GRANT on page 2-371.
If a previous CONNECT statement has established an explicit connection to a database, and that connection is still your current connection, you cannot use the CREATE DATABASE statement (nor any SQL statement that creates an implicit connection) until after you use DISCONNECT to close the explicit connection. In ESQL/C, the CREATE DATABASE statement cannot appear in a multistatement PREPARE operation.
If you do not specify the dbspace, the database server creates the system catalog tables in the root dbspace. The following statement creates the vehicles database in the root dbspace:
CREATE DATABASE vehicles
The following statement creates the vehicles database in the research dbspace:
CREATE DATABASE vehicles IN research
In Extended Parallel Server, you can create a database in the dbspace of the primary coserver (coserver 1) only.
Logging Options
The logging options of the CREATE DATABASE statement determine the type of logging that is done for the database. In the event of a failure, the database server uses the log to re-create all committed transactions in your database.
If you do not specify the WITH LOG option, you cannot use transactions nor the statements that support transaction logging (BEGIN WORK, COMMIT WORK, ROLLBACK WORK, SET IMPLICIT TRANSACTION, SET LOG, and SET ISOLATION).
If you are using Extended Parallel Server, the CREATE DATABASE statement always creates a database with logging. If you do not specify the WITH LOG option, the unbuffered log type is used by default.
If you use a buffered log, you marginally enhance the performance of logging at the risk of not being able to re-create the last few transactions after a failure. (See the discussion of buffered logging in the IBM Informix Database Design and Implementation Guide.)
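For example, to create a database that uses a buffered log (the database name is reused from the earlier examples):
CREATE DATABASE vehicles WITH BUFFERED LOG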
ANSI-Compliant Databases
When you use the LOG MODE ANSI option in the CREATE DATABASE statement, the database that you create is an ANSI-compliant database. The following example creates an ANSI-compliant database:
CREATE DATABASE employees WITH LOG MODE ANSI
ANSI-compliant databases are different from databases that are not ANSI compliant in several ways, including the following features:
v All statements are automatically contained in transactions.
v All databases use unbuffered logging.
v Owner naming is enforced. You must qualify with the owner name any table, view, synonym, index, or constraint, unless you are the owner. Unless you enclose the name between quotation marks, alphabetic characters in owner names default to uppercase.
v For databases, the default isolation level is REPEATABLE READ.
v Default privileges on objects differ from those in databases that are not ANSI compliant. Users do not receive PUBLIC privilege to tables and synonyms by default.
Other slight differences exist between databases that are ANSI compliant and those that are not. These differences are noted with the related SQL statement in this manual. For a detailed discussion of the differences between ANSI-compliant databases and databases that are not ANSI compliant, see the IBM Informix Database Design and Implementation Guide.
Creating an ANSI-compliant database does not mean that you automatically receive warnings for Informix extensions to the ANSI/ISO standard for SQL syntax
when you run the database. You must also use the -ansi flag or the DBANSIWARN environment variable to receive such warnings. For additional information about -ansi and DBANSIWARN, see the IBM Informix Guide to SQL: Reference.
Related Information
Related statements: CLOSE DATABASE, CONNECT, DATABASE, and DROP DATABASE
CREATE DISTINCT TYPE
Syntax
CREATE DISTINCT TYPE distinct_type AS source_type
distinct_type: Name that you declare here for the new distinct data type. In an ANSI-compliant database, the combination of the owner and data type must be unique within the database. In a database that is not ANSI compliant, the name must be unique among names of data types in the database.
source_type: Name of the existing type on which the new type is based. Must be either a built-in data type or one created with the CREATE DISTINCT TYPE, CREATE OPAQUE TYPE, or CREATE ROW TYPE statement.
Usage
A distinct type is a data type based on a built-in data type or on an existing opaque data type, a named ROW data type, or another distinct data type. Distinct data types are strongly typed. Although the distinct type has the same physical representation of data as its source type, values of the two types cannot be compared without an explicit cast from one type to the other.
To create a distinct data type, you must have the Resource privilege on the database. Any user with the Resource privilege can create a distinct type from one of the built-in data types, which user informix owns.
Important: You cannot create a distinct type on the SERIAL or SERIAL8 data types.
To create a distinct type from an opaque type, from a named ROW type, or from another distinct type, you must be the owner of the data type or have the Usage privilege on the data type. By default, after a distinct type is defined, only the owner of the distinct type and the DBA can use it. The owner of the distinct type, however, can grant to other users the Usage privilege on the distinct type.
A distinct type has the same storage structure as its source type. The following statement creates the distinct type birthday, based on the built-in DATE data type:
CREATE DISTINCT TYPE birthday AS DATE
Although Dynamic Server uses the same storage format for the distinct type as it does for its source type, a distinct type and its source type cannot be compared in an operation unless one type is explicitly cast to the other type.
To directly compare the distinct type and its source type or assign a value of the source type to a column of the distinct type, you must cast one type to the other, as the following examples show:
INSERT INTO tab (col1) VALUES (3.5::dist_type);
SELECT col1, col2 FROM t WHERE (col1::NUMERIC) > col2;
SELECT col1, col2, (col1 + col2::dist_type) sum_col FROM tab;
Related Information
Related statements: CREATE CAST, CREATE FUNCTION, CREATE OPAQUE TYPE, CREATE ROW TYPE, DROP TYPE, and DROP ROW TYPE
For information and examples that show how to use and cast distinct types, see the IBM Informix Guide to SQL: Tutorial. For more information on when you might create a distinct type, see the IBM Informix User-Defined Routines and Data Types Developer's Guide.
CREATE DUPLICATE
Use the CREATE DUPLICATE statement to create a duplicate copy of an existing table for read-only use in a specified dbslice or in specified dbspaces across coservers. Only Extended Parallel Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE DUPLICATE OF TABLE table IN { dbslice | ( dbspace , ... ) } [ , ... ]
dbslice: Name of a dbslice in which to duplicate one fragment of table. Must exist and must contain at most one dbspace on each target coserver. Syntax: Identifier on page 5-22.
dbspace: Name of a dbspace in which to duplicate one fragment of table. Must exist; must not contain an original or duplicate fragment of table. Syntax: Identifier on page 5-22.
table: Name of the original table from which to create a duplicate. Must exist in the database. See also Supported Operations on page 2-97. Syntax: Database Object Name on page 5-17.
Usage
If the original table resides entirely on a single coserver, you can create duplicate copies of small tables across coservers for read-only use. For each attached index of the original table, a similarly defined index is built on each table duplicate, using the same dbspaces as the table. Because query operators read the local copy of the table, duplicating small tables across coservers might improve the performance of some queries.
If a local copy of a duplicated table exists but is not available because the dbspace that stores it is offline (or for some similar reason), a query that requires access to the table fails. The database server does not attempt to access the original table.
The location of a duplicated table can be either a dbslice or a comma-separated list of dbspaces. You can combine dbslices and lists of dbspaces in a single CREATE DUPLICATE statement.
v If the original table is not fragmented, the dbspace list need provide only a single dbspace on each coserver. For example, suppose that the table tab1 is not fragmented and is stored in the dbspace db1 on coserver 1, while db2 is on coserver 2, db3 is on coserver 3, and db4 is on coserver 4. The following statement creates a duplicate on the remaining three of the four coservers:
CREATE DUPLICATE OF TABLE tab1 IN (db2, db3, db4)
v If the original table is fragmented with one fragment in the first dbspace of several dbslices that contain dbspaces on all coservers, you can create duplicate copies of the table in the remaining dbspaces of the dbslice. For example, you might create the tab3 table in the first dbspace of three dbslices, each of which contains a dbspace on each coserver, as follows:
CREATE TABLE tab3 (...) FRAGMENT BY HASH (...) IN dbsl1.1, dbsl2.1, dbsl3.1;
To duplicate the tab3 table across the remaining coservers, use the following statement:
CREATE DUPLICATE OF TABLE tab3 IN dbsl1, dbsl2, dbsl3
v You can mix dbslice names and dbspace lists in the same CREATE DUPLICATE statement. For example, instead of using the dbspaces in a dbslice as in the previous example, you might enter the following statement, in which dbsp2a is on coserver 2, dbsp3a is on coserver 3, and dbsp4a is on coserver 4:
CREATE DUPLICATE OF TABLE tab3 IN dbsl1, dbsl2, (dbsp2a, dbsp3a, dbsp4a)
The first fragment of the original table is duplicated into dbsl1, which contains a dbspace on each coserver; the second fragment into dbsl2, which also contains a dbspace on each coserver; and the third fragment into the list of dbspaces.
Only one fragment of a duplicated table can reside in any single dbspace. You cannot list an existing dbspace of the duplicated table in the list of dbspaces into which it is duplicated, but it is not an error for an existing dbspace to be a member of a dbslice that specifies duplication dbspaces. Matching dbspaces in the dbslice are ignored.
The CREATE DUPLICATE statement requires the ALTER privilege.
Supported Operations
The following operations are permitted on duplicated tables:
v SELECT
v UPDATE STATISTICS
v LOCK and UNLOCK
v SET Residency
v DROP TABLE
You cannot duplicate a table in certain circumstances. The table must not:
v Have GK or detached indexes
v Use range fragmentation
v Be a temporary table
v Be a violations or diagnostic table
v Contain BYTE, TEXT, SERIAL, or SERIAL8 columns
v Have referential constraints
The CREATE DUPLICATE statement does not support incremental duplication. It also does not support multiple duplicates of the same table on a single coserver, nor duplication of tables that are fragmented across coservers. If you need to take a dbspace offline and it contains a copy of a duplicated table, or if you need to update data in a duplicated table, first drop all duplicates of the table, as described in DROP DUPLICATE on page 2-299.
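As a sketch of that sequence for the earlier tab3 example (see DROP DUPLICATE on page 2-299 for the exact DROP DUPLICATE syntax; the column name in the UPDATE is illustrative):
DROP DUPLICATE OF TABLE tab3;
UPDATE tab3 SET qty = qty + 1;
CREATE DUPLICATE OF TABLE tab3 IN dbsl1, dbsl2, dbsl3;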
Related Information
Related statement: DROP DUPLICATE
CREATE EXTERNAL TABLE
Syntax
CREATE EXTERNAL TABLE table ( Column Definition (1) ) USING ( DATAFILES Clause (3) [ , Table Options (2) ] )
Notes:
1. See Column Definition
2. See Table Options on page 2-103
3. See DATAFILES Clause on page 2-102
table: Name declared here for a table to store external data. Must be unique among names of tables, views, and synonyms in the current database. Syntax: Database Object Name on page 5-17.
Usage
The first portion of the syntax diagram declares the name of the table and defines its columns and any column-level constraints. The portion that follows the USING keyword identifies external files that the database server opens when you use the external table, and specifies additional options for characteristics of the external table.
After executing the CREATE EXTERNAL TABLE statement, you can move data to and from the external source with an INSERT INTO ... SELECT statement. See the section INTO EXTERNAL Clause (XPS) on page 2-523 for more information about loading the results of a query into an external table.
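For example, a delimited external table might be defined and used as follows; the table, column, and file names are illustrative, and the FORMAT and DATAFILES options are described later in this section:
CREATE EXTERNAL TABLE emp_ext
   ( name CHAR(18),
     hiredate DATE )
   USING ( DATAFILES ("DISK:1:/work/emp.unl"), FORMAT 'DELIMITED' );

-- Unload query results into the external file:
INSERT INTO emp_ext SELECT name, hiredate FROM employee;
-- Load the external data into a database table:
INSERT INTO employee SELECT * FROM emp_ext;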
Column Definition
Column Definition:
{ SAMEAS template | ( column Data Type [ Other Optional Clauses ] , ... ) }
Data Type (1) [ 'PACKED(p,s)' | 'ZONED(p,s)' | 'BINARY(n)' | HEX | TEXT ] [ NULL 'null_string' ]
Notes:
1. See Data Type on page 4-18
2. See DEFAULT Clause on page 2-174
3. See Column-Level Constraints on page 2-101
column: One column name for each column of the external table. For each column, you must specify a built-in data type. Syntax: Identifier on page 5-22.
n: Number of 8-bit bytes to represent the integer. For FIXED-format binary integers; big-endian byte order. n=2 for 16-bit integers, n=4 for 32-bit integers. Syntax: Literal Number on page 4-137.
p: Precision (number of digits). For FIXED-format files only. Syntax: Literal Number on page 4-137.
s: Scale (digits in fractional part). For FIXED-format files only. Syntax: Literal Number on page 4-137.
null_string: Value to represent NULL. See Defining NULL Values on page 2-100. Syntax: Quoted String on page 4-142.
template: Existing table with the same schema as the external table. Cannot be a subset of columns nor differ in any column data type. Syntax: Database Object Name on page 5-17.
If the packed decimal or zoned decimal is stored with all bits cleared to represent a NULL value, the null_string can be defined as 0x0. The following rules apply to the value assigned to a null_string:
v The NULL representation must fit into the length of the external field.
v If a bit pattern is defined, the null_string is not case sensitive.
v If a bit pattern is defined, the null_string must begin with 0x.
v For numeric fields, the left-most fields are assigned zeros by the database server if the bit pattern does not fill the entire field.
v If the NULL representation is not a bit pattern, the NULL value must be a valid number for that field.
Warning: If a row that contains a NULL value is unloaded into an external table and the column that receives the NULL value has no NULL value defined, the database server inserts a zero into the column.
TEXT and HEX External Types: An Informix BYTE or TEXT column can be encoded in either the TEXT or HEX external type. You can use only delimited BYTE and TEXT formats with these external types. Fixed formats are not allowed. In addition, you cannot use these external types with any other type of delimited-format columns (such as character columns).
Column-Level Constraints
Use column-level constraints to limit the type of data that is allowed in a column. Constraints at the column level are limited to a single column.
DATAFILES Clause
The DATAFILES clause specifies external files that are opened when you use external tables. DATAFILES Clause:
DATAFILES ( "DISK:{ coserver_number | cogroup_name }:pathname" , ... )
coserver number: Numeric ID of the coserver containing the external data. Must exist. Syntax: Literal Number on page 4-137.
cogroup name: Name of a cogroup of coservers. Must exist. Syntax: Identifier on page 5-22.
pathname: Pathname for input or output files in the definition of the external table. Must conform to operating-system rules.
formatted pathname: Formatted pathname that uses pattern-matching characters. Must exist and conform to operating-system rules.
You can use cogroup names and coserver numbers when you describe the input or output files for the external table definition. You can identify the DATAFILES either by coserver number or by cogroup name. A coserver number contains only digits. A cogroup name is a valid identifier that begins with a letter but otherwise contains any combination of letters, digits, and underscore symbols.
If you use only some of the available coservers for reading or writing files, you can designate these coservers as a cogroup using onutil and then use the cogroup name, rather than explicitly specifying each coserver and file separately. Whenever you use all coservers to manage external files, you can use the predefined coserver_group.
For examples of the DATAFILES clause, see Examples on page 2-106.
Important: The formatted pathname option does not support the %o formatting string.
Table Options
These options specify additional characteristics that define the table. Table Options:
Table options are specified as a comma-separated list of the following:
FORMAT ' { DELIMITED | INFORMIX | FIXED } '
CODESET ' { ASCII | EBCDIC } '
DEFAULT
ESCAPE
EXPRESS
DELUXE
DELIMITER ' field_delimiter '
RECORDEND ' record_delimiter '
MAXERRORS num_errors
REJECTFILE ' filename '
SIZE num_rows
field_delimiter: Character to separate fields. The default is the pipe ( | ) character. Only a single character is valid; for nonprinting characters, use the octal representation. Syntax: Quoted String on page 4-142.
record_delimiter: Character to separate records. The default is the newline ( \n ) character. For nonprinting characters, use the octal representation. Syntax: Quoted String on page 4-142.
filename: Full pathname for conversion error messages from coservers. See Reject Files on page 2-104. Must conform to operating-system rules; the value is ignored unless MAXERRORS is set.
num_errors: Errors per coserver before load operations are terminated. Cannot be a negative number. Syntax: Literal Number on page 4-137.
num_rows: Approximate number of rows contained in the external table. Cannot be a negative number. Syntax: Literal Number on page 4-137.
escape character: ASCII character specified here as the escape character. Syntax: Quoted String on page 4-142.
The num_errors specification is ignored during unload tasks. If the MAXERRORS environment variable is not set, the database server processes all data in load operations, regardless of the number of errors or num_errors value. If the RECORDEND environment variable is not set, record_delimiter defaults to the Newline character ( \n ). To specify a nonprinting character as the record delimiter or field delimiter, you must encode it as the octal representation of the ASCII character. For example, \006 can represent CTRL-F.
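As a sketch that combines several of these options (the table, column, and file names are illustrative), the following definition uses a semicolon field delimiter, an octal record delimiter, and a reject file:
CREATE EXTERNAL TABLE cust_ext
   ( cust_code CHAR(10),
     cust_name CHAR(30) )
   USING ( DATAFILES ("DISK:1:/work/cust.unl"),
           FORMAT 'DELIMITED',
           DELIMITER ';',
           RECORDEND '\012',
           MAXERRORS 100,
           REJECTFILE '/tmp/cust.rej' );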
Important: Check constraints on external tables are designed to be evaluated only when loading data. The database server cannot enforce check constraints on external tables because the data can be freely altered outside the control of the database server. If you want to restrict rows that are written to an external table during unload, use a WHERE clause to filter the rows.
Reject Files
Rows that have conversion errors during a load or rows that violate check constraints on the external table are written to a reject file on the coserver that reads the input file.
The following table describes these elements of the reject file:
coserver-number: Number of the coserver from which the file is read
filename: Name of the input file
record: Record number in the input file where the error was detected
reason-code: Description of the error
field-name: External field name where the first error in the line occurred, or <none> if the rejection is not specific to a particular column
bad-line: Line that caused the error (delimited or fixed-position character files only), up to 80 characters
The reject file writes the coserver-number, filename, record, field-name, and reason-code in ASCII. The bad-line information varies with the type of input file:
v For delimited files or fixed-position character files, up to 80 characters of the bad line are copied directly into the reject file.
v For Informix internal data files, the bad-line information is not placed in the reject file because you cannot edit the binary representation in a file; but the coserver-number, filename, record, reason-code, and field-name are still reported in the reject file so you can isolate the problem.
Use the Table Options clause to specify the format of the external data. Errors that can cause a row to be rejected include the following:
CONSTRAINT constraint_name: This constraint was violated.
CONVERT_ERR: Any field encounters a conversion error.
MISSING_DELIMITER: No delimiter was found.
MISSING_RECORDEND: No record end was found.
NOT NULL: A NULL was found in field-name.
ROW_TOO_LONG: The input record is longer than 32 kilobytes.
Examples
The examples in this section show how to specify the DATAFILES field. Assume that the database server is running on four nodes, and one file is to be read from each node. All files have the same name. The DATAFILES specification can then be as follows:
DATAFILES ("DISK:cogroup_all:/work2/unload.dir/mytbl")
Now, consider a system with 16 coservers where only three coservers have tape drives attached (for example, coservers 2, 5, and 9). If you define a cogroup for these coservers before you run load and unload commands, you can use the cogroup name rather than a list of individual coservers when you execute the commands. To set up the cogroup, run onutil.
% onutil
1> create cogroup tape_group
2> from coserver.2, coserver.5, coserver.9;
Cogroup successfully created.
If, instead, you want to process three files on each of two coservers, define the files as follows:
DATAFILES ("DISK:1:/work2/extern.dir/mytbl.%r(1..3)", "DISK:2:/work2/extern.dir/mytbl.%r(4..6)")
Related Information
Related statements: INSERT and SET PLOAD FILE
See also the INTO Table Clauses of SELECT. For more information on external tables, refer to your IBM Informix Administrator's Reference.
CREATE FUNCTION
Use the CREATE FUNCTION statement to create a user-defined function: to register an external function or to write and register an SPL function. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE [ DBA ] FUNCTION function ( [ Routine Parameter List (1) ] ) ...
Notes:
1. See Routine Parameter List on page 5-61
2. See Return Clause on page 5-51
3. See Specific Name on page 5-68
4. See Routine Modifier on page 5-54
5. Stored Procedure Language only
6. See Statement Block on page 5-69
7. External routines only
8. See External Routine Reference on page 5-20
9. See Quoted String on page 4-142
function: Name of the new function that is defined here. You must have the appropriate language privileges; see GRANT on page 2-371 and Overloading the Name of a Function on page 2-109. Syntax: Database Object Name on page 5-17.
pathname: The specified pathname must exist on the computer where the database resides.
Tip: If you are trying to create a function from text of source code that is in a separate file, use the CREATE FUNCTION FROM statement.
Usage
Dynamic Server supports user-defined functions written in these languages:
v Stored Procedure Language (SPL)
v One of the external languages (C or Java) that Dynamic Server supports (external functions)
When the IFX_EXTEND_ROLE configuration parameter is set to ON, only users to whom the DBSA grants the built-in EXTEND role can create external functions.
How many values a function can return is language-dependent. Functions written in SPL can return one or more values. External functions written in the C or Java languages must return exactly one value. But a C function can return a collection type, and external functions in queries can return additional values indirectly from OUT parameters (and for the Java language, from INOUT parameters) that Dynamic Server can process as statement-local variables (SLVs).
For information on how this manual uses the terms UDR, function, and procedure as well as recommended usage, see Relationship Between Routines, Functions, and Procedures on page 2-146 and Using CREATE PROCEDURE Versus CREATE FUNCTION on page 2-146, respectively.
The entire length of a CREATE FUNCTION statement must be less than 64 kilobytes. This length is the literal length of the statement, including whitespace characters such as blank spaces and tabs.
In ESQL/C, you can use a CREATE FUNCTION statement only within a PREPARE statement. If you want to create a user-defined function for which the text is known at compile time, you must put the text in a file and specify this file with the CREATE FUNCTION FROM statement.
Functions use the collating order that was in effect when they were created. See SET COLLATION for information about using non-default collation.
By default, Usage privilege on SPL is granted to PUBLIC. You must also have at least Resource privilege on a database to create an SPL function in that database.
DOCUMENT Clause
The quoted string in the DOCUMENT clause provides a synopsis and description of the UDR. The string is stored in the sysprocbody system catalog table and is intended for the user of the UDR. Anyone with access to the database can query the sysprocbody system catalog table to obtain a description of one or all of the UDRs stored in the database. For example, the following query obtains a description of the SPL function update_by_pct, which SPL Functions on page 2-110 shows:
SELECT data FROM sysprocbody b, sysprocedures p
   WHERE b.procid = p.procid           -- join between the two catalog tables
   AND p.procname = 'update_by_pct'    -- look for procedure named update_by_pct
   AND b.datakey = 'D';                -- want user document
A UDR or application program can query the system catalog tables to fetch the DOCUMENT clause and display it for a user. For C and Java language functions, you can include a DOCUMENT clause at the end of the CREATE FUNCTION statement, whether or not you use the END FUNCTION keywords.
SPL Functions
SPL functions are UDRs written in SPL that return one or more values. To write and register an SPL function, use a CREATE FUNCTION statement. Embed appropriate SQL and SPL statements between the CREATE FUNCTION and END FUNCTION keywords. You can also follow the function with the DOCUMENT and WITH FILE IN options.
SPL functions are parsed, optimized (as far as possible), and stored in the system catalog tables in executable format. The body of an SPL function is stored in the sysprocbody system catalog table. Other information about the function is stored in other system catalog tables, including sysprocedures, sysprocplan, and sysprocauth. For more information about these system catalog tables, see the IBM Informix Guide to SQL: Reference.
The END FUNCTION keywords are required in every SPL function, and a semicolon ( ; ) must follow the clause that immediately precedes the statement block. The following code example creates an SPL function:
CREATE FUNCTION update_by_pct ( pct INT, pid CHAR(10))
   RETURNING INT;
   DEFINE n INT;
   UPDATE inventory SET price = price + price * (pct/100)
      WHERE part_id = pid;
   LET n = price;
   RETURN price;
END FUNCTION
   DOCUMENT "USAGE: Update a price by a percentage",
      "Enter an integer percentage from 1 - 100",
      "and a part id number"
   WITH LISTING IN '/tmp/warn_file'
For more information on how to write SPL functions, see the chapter about SPL routines in IBM Informix Guide to SQL: Tutorial. See also the section Transactions in SPL Routines on page 5-72. You can include valid SQL or SPL language statements in SPL functions. See, however, the following sections in Chapter 5 that describe restrictions on SQL and SPL statements within SPL routines: Subset of SPL Statements Valid in the Statement Block on page 5-69; SQL Statements Not Valid in an SPL Statement Block on page 5-70; and Restrictions on SPL Routines in Data-Manipulation Statements on page 5-71.
External Functions
External functions are functions you write in an external language (that is, a programming language other than SPL) that Dynamic Server supports.
To create a C user-defined function:
1. Write the C function.
2. Compile the function and store the compiled code in a shared library (the shared-object file for C).
3. Register the function in the database server with the CREATE FUNCTION statement.
To create a user-defined function written in the Java language:
1. Write a Java static method, which can use the JDBC functions to interact with the database server.
2. Compile the Java source file and create a .jar file (the shared-object file for Java).
3. Execute the install_jar( ) procedure with the EXECUTE PROCEDURE statement to install the jar file in the current database.
4. If the UDR uses user-defined types, create a map between SQL data types and Java classes. Use the setUDTExtName( ) procedure that is explained in EXECUTE PROCEDURE on page 2-336.
5. Register the UDR with the CREATE FUNCTION statement.
Rather than storing the body of an external routine directly in the database, the database server stores only the pathname of the shared-object file that contains the compiled version of the routine. When it executes the external routine, the database server invokes the external object code. The database server stores information about an external function in system catalog tables, including sysprocbody and sysprocauth. For more information on the system catalog, see the IBM Informix Guide to SQL: Reference.
The following example registers an external function named equal( ) that returns a single Boolean value. The external routine reference name specifies the path to the C shared library where the function object code is actually stored. This library contains a C function basetype1_equal( ), which is invoked during execution of the equal( ) function.
CREATE FUNCTION equal ( arg1 basetype1, arg2 basetype1)
   RETURNING BOOLEAN;
   EXTERNAL NAME "/usr/lib/basetype1/lib/libbtype1.so(basetype1_equal)"
   LANGUAGE C
END FUNCTION
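A Java UDR is registered in a similar way. The following sketch shows what the registration that the next paragraph describes might look like, assuming that the jar file has already been installed as course_jar with the install_jar( ) procedure; the argument type and jar pathname are illustrative:
EXECUTE PROCEDURE install_jar("file:/usr/jars/course.jar", "course_jar");

CREATE FUNCTION sql_explosive_reaction ( arg1 INT )
   RETURNING INT;
   EXTERNAL NAME 'course_jar:Chemistry.explosiveReaction'
   LANGUAGE JAVA
END FUNCTION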
This function returns a single INTEGER value. The EXTERNAL NAME clause specifies that the Java implementation of the sql_explosive_reaction( ) function is a method called explosiveReaction( ), which resides in the Chemistry Java class that resides in the course_jar jar file.
If user joan now executes function func1, user mike, not user joan, is the owner of the newly created table tab1. In the case of a DBA-privileged UDR, however, the user who executes a UDR (rather than the UDR owner) owns any database objects created by the UDR, unless another owner is specified for the database object within the UDR. For example, assume that user mike creates this user-defined function:
CREATE DBA FUNCTION func2 ()
   RETURNING INT;
   CREATE TABLE tab2 (coly INT);
   RETURN 1;
END FUNCTION;
If user joan now executes function func2, user joan, not user mike, is the owner of the newly created table tab2. See also the section Support for Roles and User Identity on page 5-72.
Related Information
Related statements: ALTER FUNCTION, ALTER ROUTINE, CREATE PROCEDURE, CREATE FUNCTION FROM, DROP FUNCTION, DROP ROUTINE, GRANT, EXECUTE FUNCTION, PREPARE, REVOKE, and UPDATE STATISTICS
Chapter 3 of this manual describes the SPL language. For a discussion of how to create and use SPL routines, see the IBM Informix Guide to SQL: Tutorial. For a discussion of how to create and use external routines, see IBM Informix User-Defined Routines and Data Types Developer's Guide. For information about how to create C UDRs, see the IBM Informix DataBlade API Programmer's Guide.
CREATE FUNCTION FROM
Syntax
CREATE FUNCTION FROM { 'file' | file_var }
file: Path and filename of a file that contains the full text of a CREATE FUNCTION statement. The default pathname is the current directory. Must exist and contain exactly one CREATE FUNCTION statement.
file_var: Variable that stores the value of file. Same restrictions as for file.
Usage
Functions written in the C or Java language are called external functions. When the IFX_EXTEND_ROLE configuration parameter is set to ON, only users who have been granted the built-in EXTEND role can create external functions.
An ESQL/C program cannot directly create a user-defined function. That is, it cannot contain the CREATE FUNCTION statement. To create these functions within an ESQL/C program:
1. Create a source file with the CREATE FUNCTION statement.
2. Use the CREATE FUNCTION FROM statement to send the contents of this source file to the database server for execution.
The file that you specify in the file parameter can contain only one CREATE FUNCTION statement. For example, suppose that the following CREATE FUNCTION statement is in a separate file, called del_ord.sql:
CREATE FUNCTION delete_order( p_order_num int)
   RETURNING int, int;
   DEFINE item_count int;
   SELECT count(*) INTO item_count FROM items
      WHERE order_num = p_order_num;
   DELETE FROM orders WHERE order_num = p_order_num;
   RETURN p_order_num, item_count;
END FUNCTION;
In the ESQL/C program, you can access the delete_order( ) SPL function with the following CREATE FUNCTION FROM statement:
EXEC SQL create function from 'del_ord.sql';
If you are not sure whether the UDR in the file is a user-defined function or a user-defined procedure, use the CREATE ROUTINE FROM statement.
Related Information
Related statements: CREATE FUNCTION, CREATE PROCEDURE, CREATE PROCEDURE FROM, and CREATE ROUTINE FROM
CREATE INDEX
Use the CREATE INDEX statement to create an index for one or more columns in a table, or on values returned by a UDR that uses column values as arguments. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE [ Index-Type Options (1) ] INDEX index ON { table | synonym } ( Index-Key Specs (2) )
   [ USING Access-Method Clause (7) ] [ FILLFACTOR Option (8) ] [ Storage Options (9) ]
   [ Index Modes (10) ] [ LOCK MODE Options (5) ] [ ONLINE (6) ]
Generalized-key form (3):
CREATE GK INDEX index ON static ( GK SELECT Clause (4) )
Notes:
1. See Index-Type Options on page 2-117
2. See Index-Key Specification on page 2-118
3. Extended Parallel Server only
4. See SELECT Clause for Generalized-Key Index on page 2-132
5. See LOCK MODE Options (XPS) on page 2-132
6. Dynamic Server only
7. See USING Access-Method Clause (IDS) on page 2-123
8. See FILLFACTOR Option on page 2-125
9. See Storage Options on page 2-125
10. See Index Modes (IDS) on page 2-130
index: Name declared here for a new index. Must be unique among names of indexes in the database. Syntax: Database Object Name on page 5-17.
static: Table on which a generalized-key index is created (XPS). The table must exist and be static; it cannot be a virtual table. Syntax: Database Object Name on page 5-17.
synonym, table: Name or synonym of a standard or temporary table to be indexed. The synonym and its table must exist in the current database. Syntax: Database Object Name on page 5-17.
Usage
When you issue the CREATE INDEX statement, the table is locked in exclusive mode. If another process is using the table, CREATE INDEX returns an error. (For an exception, however, see The ONLINE Keyword (IDS) on page 2-302.) Indexes use the collation that was in effect when CREATE INDEX executed.
A secondary-access method (sometimes referred to as an index-access method) is a set of database server functions that build, access, and manipulate an index structure such as a B-tree, R-tree, or an index structure that a DataBlade module provides, in order to speed up the retrieval of data.
In Dynamic Server, neither synonym nor table can refer to a virtual table.
If you are using Extended Parallel Server, use the USING BITMAP keywords to store the list of records in each key of the index as a compressed bitmap. The storage option is not compatible with a bitmap index because bitmap indexes must be fragmented in the same way as the table.
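For example, a bitmap index on a low-cardinality column of an Extended Parallel Server table might be created as follows (the table, column, and index names are illustrative):
CREATE INDEX cust_state_ix ON customer (state) USING BITMAP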
Index-Type Options
The index-type options let you specify attributes of the index. Index-Type Options:
[ UNIQUE | DISTINCT ] [ CLUSTER ]
For example, the following statement creates a unique index on the customer_num column of the customer table:
CREATE UNIQUE INDEX c_num_ix ON customer (customer_num)
A unique index prevents duplicate values in the customer_num column. A column with a unique index can have, at most, one NULL value. The DISTINCT and UNIQUE keywords are synonyms in this context, so the following statement has the same effect as the previous example:
CREATE DISTINCT INDEX c_num_ix ON customer (customer_num)
The index in both examples is maintained in ascending order, which is the default order.
You can also prevent duplicates in a column or set of columns by creating a unique constraint with the CREATE TABLE or ALTER TABLE statement. You cannot specify an R-tree secondary-access method for a UNIQUE index key. For more information on how to create unique constraints, see the CREATE TABLE or ALTER TABLE statements. See also the section Differences Between a Unique Constraint and a Unique Index on page 2-178.
How Indexes Affect Primary-Key, Unique, and Referential Constraints: The database server creates internal B-tree indexes for primary-key, unique, and referential constraints. If a primary-key, unique, or referential constraint is added after the table is created, any user-created indexes on the constrained columns are used, if appropriate. An appropriate index is one that indexes the same columns that are used in the primary-key, referential, or unique constraint. If an appropriate user-created index is not available, the database server creates a nonfragmented internal index on the constrained column or columns.
CLUSTER Option
Use the CLUSTER keyword to reorder the rows of the table in the order that the index designates. The CREATE CLUSTER INDEX statement fails if a CLUSTER index already exists on the same table.
CREATE CLUSTER INDEX c_clust_ix ON customer (zipcode)
This statement creates an index on the customer table and physically orders the rows according to their postal code values, in (by default) ascending order.
If the CLUSTER option is specified for a fragmented table, the data values are clustered only within each fragment, and not globally across the entire table.
In Dynamic Server, you cannot specify the CLUSTER option and the ONLINE keyword in the same statement. In addition, some secondary-access methods (such as R-tree) do not support clustering. Before you specify CLUSTER for your index, be sure that the index uses an access method that supports clustering.
If you are using Extended Parallel Server, you cannot use the CLUSTER option on STANDARD tables. In addition, you cannot specify the CLUSTER option and storage options in the same CREATE INDEX statement (see Storage Options on page 2-125).
When you create a clustered index, the constrid of any unique or referential constraints on the associated table changes. The constrid is stored in the sysconstraints system catalog table.
Index-Key Specification
Use the Index-Key Specification of the CREATE INDEX statement to define the key value for the index, to specify whether to sort the index in ascending or descending order, and to identify a default operator class if the secondary access method in the USING clause has no default operator class, or to override its default operator class. Index-Key Specification:
( { column | function ( func_col , ... ) } [ op_class ] [ ASC | DESC ] , ... )
column: Column or columns used as a key to this index. See Using a Column or Column List as the Index Key on page 2-119.
function: User-defined function used as a key to this index. Must be a nonvariant function that does not return a large-object data type; cannot be a built-in algebraic, exponential, log, or hex function. See Using a Function as an Index Key (IDS) on page 2-120.
func_col: Columns whose values are arguments to function. Syntax: Identifier on page 5-22.
op_class: Operator class associated with column or function for this index. If the secondary-access method in the USING clause has no default operator class, you must specify one here. (See Using an Operator Class (IDS) on page 2-123.) Syntax: Identifier on page 5-22.
The index-key value can be one or more columns that contain built-in data types. If you specify multiple columns, the concatenation of values from the set of columns is treated as a single composite column for indexing.
In Dynamic Server, the index-key value also can be one of the following:
v A column of type LVARCHAR(size), if size is smaller than 387 bytes
v One or more columns that contain user-defined data types
v One or more values that a user-defined function returns (referred to as a functional index)
v A combination of columns and functions
The 387-byte LVARCHAR size limit is for dbspaces of the default (2-kilobyte) page size, but dbspaces of larger page sizes can support larger index key sizes, as listed in the following table.
Table 2-1. Maximum Index Key Size for Selected Page Sizes
Page size 2 kilobytes: maximum index key size 387 bytes
Page size 4 kilobytes: maximum index key size 796 bytes
the column or column list; you cannot create another unique index on this column or column list with the CREATE INDEX statement.
v The number of indexes that you can create on the same column or the same set of columns is restricted. See Restrictions on the Number of Indexes on a Set of Columns on page 2-122.
In Dynamic Server, these additional restrictions apply to indexes:
v You cannot create an index on a column of an external table.
v The column cannot be of a collection data type.
You can create functional indexes within an SPL routine. You can also create an index on a nonvariant user-defined function that does not return a large object. The functional index can be a B-tree index, an R-tree index, or a user-defined secondary-access method. The ONLINE keyword, however, is not valid when you create a functional index; see The ONLINE Keyword (IDS) on page 2-302.
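For example, a unique composite index such as the one that the next paragraph discusses might be created as follows (the index name is illustrative):
CREATE UNIQUE INDEX st_man_ix ON stock (stock_num, manu_code)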
The UNIQUE keyword prevents any duplicates of a given combination of stock_num and manu_code. The index is in ascending order by default.
You can include up to 16 columns in a composite index. The total width of all indexed columns in a single composite index cannot exceed 380 bytes.
In Dynamic Server, an index key part is either a column in a table, or the result of a user-defined function on one or more columns. A composite index can have up to 16 key parts that are columns, or up to 341 key parts that are values returned by a UDR. This limit is language-dependent and applies to UDRs written in SPL or Java; functional indexes based on C-language UDRs can have up to 102 key parts. A composite index can have any of the following items as an index key:
v One or more columns
v One or more values that a user-defined function returns (referred to as a functional index)
v A combination of columns and user-defined functions
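As a sketch of a functional index, assuming a nonvariant user-defined function area( ) that takes two column arguments (all names here are illustrative):
CREATE INDEX area_ix ON rect_tab (area(len, wid))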
For dbspaces of the default page size of 2 kilobytes, the total width of all indexed columns in a single CREATE INDEX statement cannot exceed 387 bytes, except for functional indexes of Dynamic Server, whose language-dependent limits are described earlier in this section.
Whether the index is based directly on column values in the table, or on functions that take column values as arguments, the maximum size of the index key depends only on page size. The maximum index key size for functional indexes in dbspaces larger than 2 kilobytes is the same as for non-functional indexes. The only difference between limits on column indexes and functional indexes is the number of key parts: a column-based index can have no more than 16 key parts, while a functional index has different language-dependent limits on key parts. For a given page size, the maximum index key size is the same for both column-based and functional indexes.
In this example, the customer_num column has a unique constraint placed on it. The first CREATE INDEX statement places an index sorted in descending order on the customer_num column. The second CREATE INDEX includes the customer_num column as part of a composite index. For more information on composite indexes, see Creating Composite Indexes on page 2-120.
Bidirectional Traversal of Indexes: If you do not specify the ASC or DESC keywords when you create an index on a single column, key values are stored in ascending order by default; but the bidirectional-traversal capability of the database server lets you create just one index on a column and use that index for queries that specify sorting of results in either ascending or descending order of the sort column.
Because of this capability, it does not matter whether you create a single-column index as an ascending or descending index. Whichever storage order you choose for an index, the database server can traverse that index in ascending or descending order when it processes queries.
If you create a composite index on a table, however, the ASC and DESC keywords might be required. For example, if you want to enter a SELECT statement whose ORDER BY clause sorts on multiple columns and sorts each column in a different order, and you want to use an index for this query, you need to create a composite index that corresponds to the ORDER BY columns. For example, suppose that you want to enter the following query:
SELECT stock_num, manu_code, description, unit_price FROM stock ORDER BY manu_code ASC, unit_price DESC
This query sorts first in ascending order by the value of the manu_code column and then in descending order by the value of the unit_price column. To use an index for this query, you need to issue a CREATE INDEX statement that corresponds to the requirements of the ORDER BY clause. For example, you can enter either of the following statements to create the index:
CREATE INDEX stock_idx1 ON stock (manu_code ASC, unit_price DESC);
CREATE INDEX stock_idx2 ON stock (manu_code DESC, unit_price ASC);
The composite index that was used for this query (stock_idx1 or stock_idx2) cannot be used for queries in which you specify the same sort direction for the two columns in the ORDER BY clause. For example, suppose that you want to enter the following queries:
SELECT stock_num, manu_code, description, unit_price FROM stock
   ORDER BY manu_code ASC, unit_price ASC;
SELECT stock_num, manu_code, description, unit_price FROM stock
   ORDER BY manu_code DESC, unit_price DESC;
If you want to use a composite index to improve the performance of these queries, you need to enter one of the following CREATE INDEX statements. You can use either one of the created indexes (stock_idx3 or stock_idx4) to improve the performance of the preceding queries.
CREATE INDEX stock_idx3 ON stock (manu_code ASC, unit_price ASC);
CREATE INDEX stock_idx4 ON stock (manu_code DESC, unit_price DESC);
You can create no more than one ascending index and one descending index on a column. Because of the bidirectional-traversal capability of the database server, you only need to create one of the indexes. Creating both would achieve exactly the same results for an ascending or descending sort on the stock_num column.
After INSERT or DELETE operations are performed on an indexed table, the number of index entries can vary within a page, and the number of index pages that a table requires can depend on whether the index specifies ascending or descending order. For some load and DML operations, a descending single-column or multi-column index might cause the database server to allocate more index pages than an ascending index requires.
Restrictions on the Number of Indexes on a Set of Columns: You can create multiple indexes on a set of columns, provided that each index has a unique
combination of ascending and descending columns. For example, to create all possible indexes on the stock_num and manu_code columns of the stock table, you could create four indexes:
v The ix1 index on both columns in ascending order
v The ix2 index on both columns in descending order
v The ix3 index on stock_num in ascending order and on manu_code in descending order
v The ix4 index on stock_num in descending order and on manu_code in ascending order
Because of the bidirectional-traversal capability of the database server, you do not need to create these four indexes. You only need to create two indexes:
v The ix1 and ix2 indexes achieve the same results for sorts in which the user specifies the same sort direction (ascending or descending) for both columns, so you only need one index of this pair.
v The ix3 and ix4 indexes achieve the same results for sorts in which the user specifies different sort directions for the two columns (ascending on the first column and descending on the second column or vice versa). Thus, you only need to create one index of this pair. (See also Bidirectional Traversal of Indexes on page 2-121.)
Dynamic Server can also support multiple indexes on the same combination of ascending and descending columns, if each index has a different collating order; see SET COLLATION on page 2-531.
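For example, the two indexes that suffice for the stock table could be created as follows:
CREATE INDEX ix1 ON stock (stock_num ASC, manu_code ASC);
CREATE INDEX ix3 ON stock (stock_num ASC, manu_code DESC);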
USING Access-Method Clause
parameter: Secondary-access-method parameter for this index. See the user documentation for your user-defined access method. Syntax: Quoted String on page 4-142.
sec_acc_method: Secondary-access method for this index. The method can be a B-tree, R-tree, or user-defined access method, such as one that a DataBlade module defines. Syntax: Identifier on page 5-22.
value: Value of the specified parameter. Must be a valid literal value for parameter in this secondary-access method. Syntax: Quoted String on page 4-142 or Literal Number on page 4-137.
A secondary-access method is a set of routines that perform all of the operations needed for an index, such as create, drop, insert, delete, update, and scan. The database server provides the following secondary-access methods:
v The generic B-tree index is the built-in secondary-access method. A B-tree index is good for a query that retrieves a range of data values. The database server implements this secondary-access method and registers it as btree in the system catalog tables.
v The R-tree method is a registered secondary-access method. An R-tree index is good for searches on multidimensional data. The database server registers this secondary-access method as rtree in the system catalog tables of a database. An R-tree secondary-access method is not valid for a UNIQUE index key. An R-tree index cannot be clustered, and cannot be stored in a dbspace that has a non-default page size. For more information on R-tree indexes, see the IBM Informix R-Tree Index User's Guide.
The access method that you specify must be registered in the sysams system catalog table. The default secondary-access method is B-tree. If the access method is B-tree, you can create only one index for each unique combination of ascending and descending columnar or functional keys with operator classes. (This restriction does not apply to other secondary-access methods.)
By default, CREATE INDEX creates a generic B-tree index. If you want to create an index with a secondary-access method other than B-tree, you must specify the name of the secondary-access method in the USING clause. Some user-defined access methods are packaged as DataBlade modules. Some DataBlade modules provide indexes that require specific parameters when you create them. For more information about user-defined access methods, refer to the documentation of your secondary access method or DataBlade module.
The following example (for a database that implements R-tree indexes) creates an R-tree index on the location column that contains an opaque data type, point, and performs a query with a filter on the location column.
CREATE INDEX loc_ix ON emp (location) USING rtree;
SELECT name FROM emp WHERE location N_equator_equals point(500, 0);
The following CREATE INDEX statement creates an index that uses the fulltext secondary-access method, which takes two parameters: WORD_SUPPORT and PHRASE_SUPPORT. It indexes a table t, which has two columns: i, an integer column, and data, a TEXT column.
CREATE INDEX tx ON t(data)
   USING fulltext (WORD_SUPPORT='PATTERN', PHRASE_SUPPORT='MAXIMUM');
FILLFACTOR Option
The FILLFACTOR option takes effect only in the following cases:
v When you build an index on a table that contains more than 5,000 rows and that uses more than 100 table pages
v When you create an index on a fragmented table
v When you create a fragmented index on a nonfragmented table
Use the FILLFACTOR option to provide for expansion of an index at a later date or to create compacted indexes.
FILLFACTOR Option:
FILLFACTOR percent
percent: Percentage of each index page that is filled by index data when the index is created. The default is 90.
When the index is created, the database server initially fills only that percentage of the nodes specified with the FILLFACTOR value. The FILLFACTOR can also be set as a parameter in the ONCONFIG file. The FILLFACTOR clause on the CREATE INDEX statement overrides the setting in the ONCONFIG file. For more information about the ONCONFIG file and the parameters you can use, see your IBM Informix Administrator's Guide.
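For example, to leave room for growth by filling each index page only 60 percent full when the index is built (the index and table names are illustrative):
CREATE INDEX ord_num_ix ON orders (order_num) FILLFACTOR 60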
Storage Options
The storage options specify the distribution scheme of an index. You can use the IN clause to specify a storage space for the entire index, or you can use the FRAGMENT BY clause to fragment the index across multiple storage spaces.
Storage Options:
IN { dbspace | dbslice (1) | extspace (2) | TABLE } | FRAGMENT BY Clause for Indexes (3)
Notes:
1. Extended Parallel Server only
2. Dynamic Server only
3. See FRAGMENT BY Clause for Indexes on page 2-127
dbslice: The dbslice that contains all of the index fragments. Must exist. Syntax: Identifier on page 5-22.
dbspace: The dbspace in which to store the index. Must exist. Syntax: Identifier on page 5-22.
extspace: Name assigned by the onspaces command to a storage area outside the database server. Must exist. Syntax: See the documentation for your access method.
If you specify any storage option (except IN TABLE), you create a detached index. Detached indexes are indexes that are created with a specified distribution scheme. Even if the distribution scheme specified for the index is identical to that specified for the table, the index is still considered to be detached. If the distribution scheme of a table changes, all detached indexes continue to use the distribution scheme that the Storage Option clause specified. For information on locally detached and globally detached indexes, see FRAGMENT BY Clause for Indexes on page 2-127. If you are using Extended Parallel Server, you cannot use the CLUSTER option and storage options in the same CREATE INDEX statement. See CLUSTER Option on page 2-118.
IN Clause
Use the IN clause to specify a storage space to hold the entire index. The storage space that you specify must already exist.

Storing an Index in a dbspace: Use the IN dbspace clause to specify the dbspace where you want your index to reside. When you use this clause with any option except the TABLE keyword, you create a detached index. The IN dbspace clause allows you to isolate an index. For example, if the customer table is created in the custdata dbspace, but you want to create an index in a separate dbspace called custind, use the following statements:
CREATE TABLE customer
   . . .
   IN custdata EXTENT SIZE 16

CREATE INDEX idx_cust ON customer (customer_num)
   IN custind
Storing an Index Fragment in a Named Partition (IDS): Besides the option of storing a fragment of the index in a dbspace, Dynamic Server supports storing fragments of the index in a named subset of the dbspace, called a partition. Unless you explicitly declare names for the fragments in the PARTITION BY or
FRAGMENT BY clause, each fragment, by default, has the same name as the dbspace where it resides. This includes all fragmented tables and indexes migrated from earlier releases of Dynamic Server.

Storing an Index in a dbslice (XPS): In Extended Parallel Server, the IN dbslice clause allows you to fragment an index across multiple dbspaces. The database server fragments the index by round-robin in the dbspaces that make up the dbslice when the index is created.

Storing Data in an extspace (IDS): In general, use the extspace storage option in conjunction with the USING Access-Method Clause (IDS) on page 2-123. For more information, refer to the user documentation for your custom access method.

Creating an Attached Index with the IN TABLE Keywords (IDS): In some earlier releases of Dynamic Server, if you did not use the storage options to specify a distribution scheme, then, by default, the index used the same distribution scheme as the table on which it was built. Such an index is called an attached index. If you omit the Storage Options clause, Dynamic Server creates new indexes as detached indexes by default, but supports existing attached indexes created by earlier release versions. (The DEFAULT_ATTACH environment variable, however, can override the default behavior so that the new index is attached.) You can also specify IN TABLE as the storage option to create an attached index, even if the DEFAULT_ATTACH environment variable is not set.

An attached index is created in the same dbspace (or dbspaces, if the table is fragmented) as the table on which it is built. If the distribution scheme of a table changes, all attached indexes start using the new distribution scheme. Only B-tree indexes that are nonfragmented and that are on nonfragmented tables can be attached. All other indexes, including extensibility-related indexes, such as R-trees and functional indexes, must be detached. You cannot create an attached index using a collating order different from that of the table, nor different from what DB_LOCALE specifies. For information about the DB_LOCALE and DEFAULT_ATTACH environment variables, see the IBM Informix Guide to SQL: Reference.

Important: Attached indexes are supported in this release for backward compatibility with Dynamic Server 7.x, but IBM does not recommend use of the DEFAULT_ATTACH environment variable nor of the IN TABLE storage option in new applications. Attached indexes are a deprecated feature that might not be supported in some future release of Dynamic Server.
FRAGMENT BY Clause for Indexes:

   FRAGMENT BY EXPRESSION Expression List [ , REMAINDER Clause ]
   PARTITION BY EXPRESSION Expression List [ , REMAINDER Clause ]   (IDS)
   FRAGMENT BY HASH ( column ) IN { dbslice | ( dbspace , ... ) }   (XPS)
   FRAGMENT BY HYBRID ( column ) EXPRESSION Clause                  (XPS)

Expression List:

   ( expr ) IN dbspace
   ( expr ) PARTITION part IN dbspace
   (comma-separated list)

REMAINDER Clause:

   REMAINDER IN dbspace
   REMAINDER PARTITION part IN dbspace

EXPRESSION Clause:

   EXPRESSION expr IN { dbslice | dbspace | ( dbspace , ... ) } [ , ... ]
      [ REMAINDER IN { dbslice | dbspace | ( dbspace , ... ) } ]
Element: expr
Description: Expression defining which index keys each fragment stores
Restrictions: See Restrictions on Fragmentation Expressions on page 2-129.

Element: part
Description: Name that you declare here for a partition of a dbspace
Restrictions: Required for any partition in the same dbspace as another partition of the same index
Here the IN keyword introduces the name of a storage space where an index fragment is to be stored. If you list multiple dbspace names after the IN keyword, use parentheses to delimit the dbspace list. The parentheses around the list of fragment definitions that follow the EXPRESSION keyword are optional.
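For example, the following statement (the dbspace names and the customer_num ranges are illustrative) creates a detached index that is fragmented by expression:

CREATE INDEX cust_ix ON customer (customer_num)
   FRAGMENT BY EXPRESSION
      (customer_num < 10000) IN dbsp1,
      (customer_num >= 10000 AND customer_num < 20000) IN dbsp2,
      REMAINDER IN dbsp3;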
server creates the index in the first dbspace that the DBSPACETEMP environment variable specifies. For more information on the DBSPACETEMP environment variable, see the IBM Informix Guide to SQL: Reference. For more information on the default storage characteristics of temporary tables, see Where Temporary Tables are Stored on page 2-214.
The following table explains the index modes.

Mode: DISABLED
Effect: The database server does not update the index after insert, delete, and update operations that modify the base table. The optimizer does not use the index during the execution of queries.

Mode: ENABLED
Effect: The database server updates the index after insert, delete, and update operations that modify the base table. The optimizer uses the index during query execution. If an insert or update operation causes a duplicate key value to be added to a unique index, the statement fails.

Mode: FILTERING
Effect: The database server updates a unique index after insert, delete, and update operations that modify the base table. (This option is not available with duplicate indexes.) The optimizer uses the index during query execution. If an insert or update operation causes a duplicate key value to be added to a unique index in filtering mode, the statement continues processing, but the bad row is written to the violations table associated with the base table. Diagnostic information about the unique-index violation is written to the diagnostics table associated with the base table. If you specify filtering for a unique index, you can also specify one of the following error options.

Error Option: WITHOUT ERROR
Effect: A unique-index violation during an insert or update operation returns no integrity-violation error to the user.

Error Option: WITH ERROR
Effect: Any unique-index violation during an insert or update operation returns an integrity-violation error to the user.
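For example, the following statement (the table and column names are illustrative, and a violations table is assumed to have been started for the table) creates a unique index in filtering mode that does not return integrity-violation errors to the user:

CREATE UNIQUE INDEX u_ord_ix ON orders (order_num) FILTERING WITHOUT ERROR;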
- You can set the mode of a unique index to enabled, disabled, or filtering.
- If you do not specify a mode, then by default the index is enabled.
- For an index set to filtering mode, if you do not specify an error option, the default is WITHOUT ERROR.
- When you add a new unique index to an existing base table and specify the disabled mode for the index, your CREATE INDEX statement succeeds even if duplicate values in the indexed column would cause a unique-index violation.
- When you add a new unique index to an existing base table and specify the enabled or filtering mode for the index, your CREATE INDEX statement succeeds provided that no duplicate values exist in the indexed column that would cause a unique-index violation. However, if any duplicate values exist in the indexed column, your CREATE INDEX statement fails and returns an error.
- When you add a new unique index to an existing base table in the enabled or filtering mode, and duplicate values exist in the indexed column, erroneous rows in the base table are not filtered to the violations table. Thus, you cannot use a violations table to detect the erroneous rows in the base table.

Adding a Unique Index When Duplicate Values Exist in the Column: If you attempt to add a unique index in the enabled mode but receive an error message because duplicate values are in the indexed column, take the following steps to add the index successfully (a sketch of this statement sequence follows the steps):
1. Add the index in the disabled mode. Issue the CREATE INDEX statement again, but this time specify the DISABLED keyword.
2. Start a violations and diagnostics table for the target table with the START VIOLATIONS TABLE statement.
3. Issue a SET Database Object Mode statement to change the mode of the index to enabled. When you issue this statement, existing rows in the target table that violate the unique-index requirement are duplicated in the violations table. You receive an integrity-violation error message, however, and the index remains disabled.
4. Issue a SELECT statement on the violations table to retrieve the nonconforming rows that are duplicated from the target table. You might need to join the violations and diagnostics tables to get all the necessary information.
5. Take corrective action on the rows in the target table that violate the unique-index requirement.
6. After you fix all the nonconforming rows in the target table, issue the SET Database Object Mode statement again to switch the disabled index to the enabled mode. This time the index is enabled, and no integrity violation error message is returned because all rows in the target table now satisfy the new unique-index requirement.
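The following sketch shows that sequence of statements (the table, column, and index names are illustrative, and customer_vio is assumed to be the default name of the violations table that START VIOLATIONS TABLE creates):

CREATE UNIQUE INDEX u_cust_ix ON customer (cust_code) DISABLED;
START VIOLATIONS TABLE FOR customer;
SET INDEXES u_cust_ix ENABLED;   -- returns an integrity-violation error; nonconforming rows are copied to customer_vio
SELECT * FROM customer_vio;      -- examine, then correct, the duplicate values in customer
SET INDEXES u_cust_ix ENABLED;   -- succeeds after all duplicates are removed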
When an index is disabled, the database server stops updating it and stops using it during queries, but the catalog information about the disabled index is retained. You cannot create a new index on a column or set of columns if a disabled index on that column or set of columns already exists. Similarly, you cannot create an active (enabled) unique, foreign-key, or primary-key constraint on a column or on a set of columns if the indexes on which the active constraint depends are disabled.
LOCK MODE
In COARSE lock mode, index-level locks are acquired on the index instead of item-level or page-level locks. This mode reduces the number of lock calls on the index. Use the coarse-lock mode when you know the index is not going to change, as when only read-only operations are performed on the index. If you specify no lock mode, the default is NORMAL. That is, the database server places item-level or page-level locks on the index as necessary.
GK SELECT Clause:

   SELECT [ ALL | DISTINCT | UNIQUE (1) ]
      { Expression (2) | [ table. | synonym. | alias. ] * } [ , ... ]
   FROM Clause for Generalized-Key Index (3)
   WHERE Clause for Generalized-Key Index (4)

Notes:
1   Informix extension
2   See Expression on page 4-34
3   See FROM Clause for Generalized-Key Index
4   See WHERE Clause for Generalized-Key Index on page 2-134

Element: alias
Description: Temporary name assigned to the table in the FROM clause
Restrictions: You cannot use an alias for the table on which the index is built
Syntax: Identifier, page 5-22

Element: synonym, table
Restrictions: The synonym and the table to which it points must exist
Syntax: Database Object Name, page 5-17
The following restrictions apply to expressions in the GK SELECT clause:
- It cannot refer to any SPL routine.
- It cannot include the USER, TODAY, CURRENT, or DBINFO built-in functions, nor any function that refers to a point in time or interval.

FROM Clause for Generalized-Key Index:

GK FROM Clause:
   FROM indexed_table [ AS alias ] [ , table [ AS alias ] ] ...

Element: alias
Description: Temporary name for a table

Element: indexed_table
Description: Table on which the index is being built
Restrictions: The FROM clause must include the indexed table
Syntax: Database Object Name, page 5-17

Element: table
Description: Synonym or identifier of table from which to retrieve data
Restrictions: The synonym and the table to which it points must exist
Syntax: Database Object Name, page 5-17
All tables that appear in the FROM clause must be local static tables; that is, views, non-static tables, and remote tables are not valid in the FROM clause. Tables that you specify in the FROM clause must be transitively joined on key to the indexed table. Table A is transitively joined on key to table B if A and B are joined with equal joins on the unique-key columns of A. Suppose that tables A, B, and C each have col1 as a primary key. In the following example, B is joined on key to A and C is joined on key to B. C is transitively joined on key to A.
CREATE GK INDEX gki ON A
   (SELECT A.col1, A.col2
    FROM A, B, C
    WHERE A.col1 = B.col1 AND B.col1 = C.col1)
Notes:
1   See Condition on page 4-5
2   See Specifying a Join in the WHERE Clause on page 2-511
The WHERE clause for a GK index has the following restrictions:
- It cannot include USER, TODAY, CURRENT, or DBINFO built-in functions, nor any function that references a time value or a time-interval value.
- It cannot refer to any SPL routine.
- It cannot include any subquery.
- It cannot include any aggregate function.
- It cannot include any IN, LIKE, or MATCHES expression.
this time, the database server briefly locks the table while updating the system catalog with information about the new index.

The indexed table in a CREATE INDEX . . . ONLINE statement can be permanent or temporary, logged or unlogged, and fragmented or non-fragmented. You cannot specify the ONLINE keyword, however, when you create a functional index, a clustered index, a virtual index, or an R-tree index.

The following statement instructs the database server to create a unique online index called idx_1 on the lname column of the customer table:
CREATE UNIQUE INDEX idx_1 ON customer(lname) ONLINE;
If, while this index is being constructed, other users insert into the customer table new rows in which lname is not unique, the database server issues an error after it has created the new idx_1 index and registered it in the system catalog. The term online index refers to the locking strategy that the database follows in creating or dropping an index with the ONLINE keyword, rather than to properties of the index that persist after its creation (or its destruction) has completed. This term appears in some error messages, however, and in recovery or restore operations, the database server re-creates as an online index any index that you created as an online index. No more than one CREATE INDEX . . . ONLINE or DROP INDEX . . . ONLINE statement can concurrently reference online indexes on the same table, or online indexes that have the same identifier.
Related Information
Related statements: ALTER INDEX, CREATE OPCLASS, CREATE TABLE, DROP INDEX, RENAME INDEX, and SET Database Object Mode For a discussion of the structure of indexes, see your IBM Informix Administrator's Reference. For a discussion of the different types of indexes and information about performance issues with indexes, see your IBM Informix Performance Guide. For a discussion of the GLS aspects of the CREATE INDEX statement, see the IBM Informix GLS User's Guide. For information about operator classes, refer to the CREATE OPCLASS statement and IBM Informix User-Defined Routines and Data Types Developer's Guide. For information about the indexes that DataBlade modules provide, refer to your DataBlade module documentation.
CREATE OPAQUE TYPE

Use the CREATE OPAQUE TYPE statement to register a new opaque data type in the database. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE OPAQUE TYPE type ( INTERNALLENGTH = { length | VARIABLE } [ , Opaque-Type Modifier (1) ] ... )

Notes:
1   See Opaque-Type Modifier on page 2-137

Element: length
Description: Number of bytes needed to store a value of this data type

Element: type
Description: Name that you declare here for the new opaque data type
Usage
The CREATE OPAQUE TYPE statement registers a new opaque data type in the sysxtdtypes system catalog table. To create an opaque type, you must have the Resource privilege on the database. When you create the opaque type, only you, the owner, have the Usage privilege on the new opaque data type. You can use the GRANT or REVOKE statements to grant or revoke the Usage privilege of other users of the database. To view the privileges on a data type, check the sysxtdtypes system catalog table for the owner name, and check the sysxtdtypeauth system catalog table for additional type privileges that might have been granted. For details of system catalog tables, see the IBM Informix Guide to SQL: Reference. The DBAccess utility can also display privileges on opaque data types.
INTERNALLENGTH Modifier
The INTERNALLENGTH modifier specifies the storage size that is required for the opaque data type as fixed length or varying length. Fixed-Length Opaque Types: A fixed-length opaque type has an internal structure of fixed size. To create a fixed-length opaque type, specify the size of the internal structure, in bytes, for the INTERNALLENGTH modifier. The next example creates a fixed-length opaque type called fixlen_typ and allocates 8 bytes for storing this data type.
CREATE OPAQUE TYPE fixlen_typ(INTERNALLENGTH=8, CANNOTHASH)
Varying-Length Opaque Types: A varying-length opaque type has an internal structure whose size might vary from one value to another. For example, the internal structure of an opaque data type might hold the actual value of a string up to a certain size, but beyond this size it might use an LO-pointer to a CLOB to hold the value. To create a varying-length opaque data type, use the VARIABLE keyword with the INTERNALLENGTH modifier. The following statement creates a variable-length opaque data type called varlen_typ:
CREATE OPAQUE TYPE varlen_typ (INTERNALLENGTH=VARIABLE, MAXLEN=1024)
Opaque-Type Modifier
Opaque-Type Modifier:

   CANNOTHASH
   PASSEDBYVALUE
   ALIGNMENT = align_value
   MAXLEN = length

Element: align_value
Description: Byte boundary on which to align an opaque type that is passed to a user-defined routine. Default is 4 bytes.
Restrictions: Must be 1, 2, 4, or 8, depending on the C definition of the opaque data type and the hardware and compiler used to build the object file for the data type

Element: length
Description: Maximum length to allocate for instances of varying-length opaque types. Default is 2 kilobytes.
Restrictions: Must be a positive integer no greater than 32 kilobytes. Do not specify for fixed-length data types. Values that exceed this length return errors.
Syntax: Literal Number, page 4-137
Modifiers can specify the following optional information for opaque types:
- MAXLEN specifies the maximum length for varying-length types.
- CANNOTHASH specifies that the database server cannot use the built-in hash function on the opaque type.
- ALIGNMENT specifies the byte boundary on which the database server aligns the opaque type.
- PASSEDBYVALUE specifies that an opaque type that requires 4 bytes or fewer of storage is passed by value. By default, opaque types are passed to user-defined routines by reference.
Support functions for an opaque type include output( ), receive( ), send( ), db_receive( ), db_send( ), server_receive( ), import( ), export( ), importbinary( ), exportbinary( ), assign( ), destroy( ), lohandles( ), and compare( ).

Function: destroy( )
Purpose: Performs any processing necessary before removing a row that contains the opaque type. This support function must be named destroy( ).
When invoked: When the database server executes DELETE or DROP TABLE, before it removes the opaque type from disk

Function: lohandles( )
Purpose: Returns a list of the LO-pointer structures (pointers to smart large objects) in an opaque type
When invoked: When the database server must search opaque types for references to smart large objects; when oncheck runs, or an archive is performed

Function: compare( )
Purpose: Compares two values of the opaque type and returns an integer value to indicate whether the first value is less than, equal to, or greater than the second value
When invoked: When the database server encounters an ORDER BY, UNIQUE, DISTINCT, or UNION clause in a SELECT statement, or when CREATE INDEX creates a B-tree index
After you write the necessary support functions for the opaque type, use the CREATE FUNCTION statement to register these support functions in the same database as the opaque type. Certain support functions convert other data types to or from the new opaque type. After you create and register these support functions, use the CREATE CAST statement to associate each function with a particular cast. The cast must be registered in the same database as the support function. After you have written the necessary C language or Java language source code to define an opaque data type, you then use the CREATE OPAQUE TYPE statement to register the opaque data type in the database.
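For example, the following sketch registers a hypothetical input-conversion function for the fixlen_typ opaque type shown earlier and binds it to an implicit cast (the shared-library path and the C function name are assumptions, not part of any product):

CREATE FUNCTION fixlen_in (lvarchar) RETURNING fixlen_typ
   EXTERNAL NAME '/usr/lib/fixlen.so(fixlen_typ_input)'
   LANGUAGE C;

CREATE IMPLICIT CAST (lvarchar AS fixlen_typ WITH fixlen_in);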
Related Information
Related statements: CREATE CAST, CREATE DISTINCT TYPE, CREATE FUNCTION, CREATE ROW TYPE, CREATE TABLE, and DROP TYPE For a description of an opaque type, see the IBM Informix Guide to SQL: Reference.
CREATE OPCLASS
Use the CREATE OPCLASS statement to create an operator class for a secondary-access method. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE OPCLASS opclass FOR sec_acc_method
   STRATEGIES ( Strategy Specification , ... )
   SUPPORT ( support_function , ... )
Element: opclass
Description: Name that you declare here for a new operator class
Restrictions: Must be unique among operator classes within the database
Syntax: Identifier, page 5-22

Element: sec_acc_method
Description: Secondary-access method with which the new operator class is associated
Restrictions: Must already exist and must be registered in the sysams table
Syntax: Identifier, page 5-22

Element: support_function
Description: Support function that the secondary-access method requires
Restrictions: Must be listed in the order expected by the access method
Syntax: Identifier, page 5-22
Usage
An operator class is the set of operators that support a secondary-access method for query optimization and building the index. A secondary-access method (sometimes referred to as an index access method) is a set of database server functions that build, access, and manipulate an index structure such as a B-tree, an R-tree, or an index structure that a DataBlade module provides. The database server provides the B-tree and R-tree secondary-access methods. For more information on the btree secondary-access method, see Default Operator Classes on page 2-144.

Define a new operator class when you want one of the following:
- An index to use a different order for the data than the sequence that the default operator class provides
- A set of operators that is different from any existing operator classes that are associated with a particular secondary-access method

You must have the Resource privilege or be the DBA to create an operator class. The actual name of an operator class is an SQL identifier. When you create an operator class, the opclass name must be unique within the database.
When you create an operator class in an ANSI-compliant database, the owner.opclass combination must be unique within the database. The owner name is case sensitive. If you do not put quotes around the owner name (or else set the ANSIOWNER environment variable), the name of the operator-class owner is stored in uppercase letters. The following CREATE OPCLASS statement creates a new operator class called abs_btree_ops for the btree secondary-access method:
CREATE OPCLASS abs_btree_ops FOR btree
   STRATEGIES (abs_lt, abs_lte, abs_eq, abs_gte, abs_gt)
   SUPPORT (abs_cmp)
An operator class has two kinds of operator-class functions:
- Strategy functions. Specify strategy functions of an operator class in the STRATEGIES clause of the CREATE OPCLASS statement. In the preceding CREATE OPCLASS code example, the abs_btree_ops operator class has five strategy functions.
- Support functions. Specify support functions of an operator class in the SUPPORT clause. In the preceding CREATE OPCLASS code example, the abs_btree_ops operator class has one support function.
STRATEGIES Clause
Strategy functions are functions that users can invoke within a DML statement to operate on a specific data type. The query optimizer uses the strategy functions to determine whether a given index can be used to process a query. If a query includes a UDF or a column on which an index exists, and if the qualifying operator in the query matches any function in the STRATEGIES clause, then the query optimizer considers using the index for the query. For more information on query plans, see your IBM Informix Performance Guide. When you create a new operator class, the STRATEGIES clause identifies the strategy functions for the secondary-access method. Each strategy specification lists the name of a strategy function (and optionally, the data types of its parameters). You must list these functions in the order that the secondary-access method expects. For the specific order of strategy operators for the default operator classes for a B-tree index and for an R-tree index, see IBM Informix User-Defined Routines and Data Types Developer's Guide.
Strategy Specification
The STRATEGIES keyword introduces a comma-separated list of function names or function signatures for the new operator class. Each element of this list is called a strategy specification and has the following syntax: Strategy Specification:
strategy_function ( input_type , input_type , output_type )
Element: input_type
Description: Data type of an input parameter to the strategy function for which you intend to use a specific secondary-access method
Restrictions: A strategy function accepts two input parameters and can have one optional output parameter
Syntax: Data Type, page 4-18

Element: output_type
Description: Data type of the optional output parameter of the strategy function
Restrictions: Optional output parameter for side-effect indexes
Syntax: Data Type, page 4-18

Element: strategy_function
Description: Strategy function to associate with the specified operator class
Restrictions: Must be listed in the order that the specified secondary-access method expects
Syntax: Database Object Name, page 5-17
Each strategy_function is an external function. The CREATE OPCLASS statement does not verify that a user-defined function of the name you specify exists. However, for the secondary-access method to use the strategy function, the external function must be:
- Compiled in a shared library
- Registered in the database with the CREATE FUNCTION statement

Optionally, you can specify the signature of a strategy function in addition to its name. A strategy function requires two input parameters and an optional output parameter. To specify the function signature, specify:
- An input data type for each of the two input parameters of the strategy function, in the order that the strategy function uses them
- Optionally, one output data type for an output parameter of the strategy function

You can specify UDTs as well as built-in data types. If you do not specify the function signature, the database server assumes that each strategy function takes two arguments of the same data type and returns a BOOLEAN value.
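For example, the following sketch extends the earlier abs_btree_ops example with explicit signatures, assuming a user-defined data type named abs_type (the type name and the function list are illustrative):

CREATE OPCLASS abs_btree_ops2 FOR btree
   STRATEGIES (abs_lt(abs_type, abs_type),
               abs_lte(abs_type, abs_type),
               abs_eq(abs_type, abs_type),
               abs_gte(abs_type, abs_type),
               abs_gt(abs_type, abs_type))
   SUPPORT (abs_cmp);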
SUPPORT Clause
Support functions are functions that the secondary-access method uses internally to build and search the index. Specify these functions for the secondary-access method in the SUPPORT clause of the CREATE OPCLASS statement. You must list the names of the support functions in the order that the secondary-access method expects. For the specific order of support operators for the default operator classes for a B-tree index and an R-tree index, refer to Default Operator Classes on page 2-144.

The support function is an external function. CREATE OPCLASS does not verify that a specified support function exists. For the secondary-access method to use a support function, however, the support function must meet these criteria:
- Be compiled in a shared library
- Be registered in the database with the CREATE FUNCTION statement
Default Operator Classes

For each of the secondary-access methods that Dynamic Server provides, it provides a default operator class, as follows:
- The default B-tree operator class is a built-in operator class. The database server implements the operator-class functions for this operator class and registers it as btree_ops in the system catalog tables of a database.
- The default R-tree operator class is a registered operator class. The database server registers this operator class as rtree_ops in the system catalog tables. The database server does not implement the operator-class functions for the default R-tree operator class.

Important: To use an R-tree index, you must install a spatial DataBlade module, such as the Geodetic DataBlade module, or any other third-party DataBlade module that implements the R-tree index. These modules implement the R-tree operator-class functions.

DataBlade modules can provide other types of secondary-access methods. If a DataBlade module provides a secondary-access method, it might also provide a default operator class. For more information, refer to your DataBlade module user's guide.
Related Information
Related statements: CREATE FUNCTION, CREATE INDEX, and DROP OPCLASS For information on support functions and how to create and extend an operator class, see IBM Informix User-Defined Routines and Data Types Developer's Guide. For more about R-tree indexes, see the IBM Informix R-Tree Index User's Guide. For information about the GLS aspects of the CREATE OPCLASS statement, refer to the IBM Informix GLS User's Guide.
CREATE PROCEDURE
Use the CREATE PROCEDURE statement to create a user-defined procedure. (To create a procedure from text of source code that is in a separate file, use the CREATE PROCEDURE FROM statement.) This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE [ DBA ] PROCEDURE { procedure | function (1) } ( [ Routine Parameter List (2) ] )
   [ Return Clause (3) ]
   [ SPECIFIC Specific Name (4) ]  (5)
   { Statement Block (1) (7) END PROCEDURE
   | External Routine Reference (8) (9) END PROCEDURE (5) }
   [ DOCUMENT Quoted String (10) [ , Quoted String ] ... ]
   [ WITH LISTING IN pathname ]
Notes:
1   Stored Procedure Language only
2   See Routine Parameter List on page 5-61
3   See Return Clause on page 5-51
4   See Specific Name on page 5-68
5   Dynamic Server only
6   See Routine Modifier on page 5-54
7   See Statement Block on page 5-69
8   External routines only
9   See External Routine Reference on page 5-20
10  See Quoted String on page 4-142
Element: function, procedure
Description: Name declared here for a new SPL procedure or function
Restrictions: (XPS) The name must be unique among all SPL routines in the database. (IDS) See Procedure Names in Dynamic Server on page 2-147.
Syntax: Database Object Name, page 5-17

Element: pathname
Description: File to store compile-time warnings
Restrictions: Must exist on the computer where the database resides
Syntax: Operating-system specific
Usage
The entire length of a CREATE PROCEDURE statement must be less than 64 kilobytes. This length is the literal length of the CREATE PROCEDURE statement, including blank spaces, tabs, and other whitespace characters. In ESQL/C, you can use CREATE PROCEDURE only as text within a PREPARE statement. If you want to create a procedure for which the text is known at compile time, you must use a CREATE PROCEDURE FROM statement. Routines use the collating order that was in effect when they were created. See SET COLLATION statement of Dynamic Server for information about using non-default collation.
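For example, if the statement text is assembled at runtime, an ESQL/C program might prepare and execute it as follows (a minimal sketch; the statement identifier and procedure name are illustrative):

EXEC SQL prepare crt_proc FROM
   'create procedure no_op() end procedure';
EXEC SQL execute crt_proc;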
the term stored procedure. When it is necessary to distinguish between an SPL function and an SPL procedure, this manual does so. The term external routine applies to an external procedure or (for Dynamic Server) an external function, both constructs designating UDRs that are written in a programming language other than SPL. When it is necessary to distinguish between an external function and an external procedure, this manual does so. Extended Parallel Server does not support external routines, but the term user-defined routine (UDR) encompasses both SPL routines and external routines. Wherever the term UDR appears, it is applicable to SPL routines.
For a brief description of the routine signature that uniquely identifies each UDR, see Routine Overloading and Naming UDRs with a Routine Signature (IDS) on page 5-19. Using the SPECIFIC Clause to Specify a Specific Name: You can declare a specific name that is unique in the database for a user-defined procedure. A specific name is useful when you are overloading a procedure.
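For example, the following sketch (the procedure bodies, the eventlog table, and its columns are hypothetical) declares two overloaded procedures that are distinguished by their specific names:

CREATE PROCEDURE log_event (code INT)
   SPECIFIC log_event_int;
   INSERT INTO eventlog (ev_code, ev_text) VALUES (code, NULL);
END PROCEDURE;

CREATE PROCEDURE log_event (msg VARCHAR(80))
   SPECIFIC log_event_text;
   INSERT INTO eventlog (ev_code, ev_text) VALUES (0, msg);
END PROCEDURE;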
DOCUMENT Clause
The quoted string in the DOCUMENT clause provides a synopsis and description of a UDR. The string is stored in the sysprocbody system catalog table and is intended for the user of the UDR. Anyone with access to the database can query the sysprocbody system catalog table to obtain a description of one or all the UDRs stored in the database. A UDR or application program can query the system catalog tables to fetch the DOCUMENT clause and display it for a user. For example, to find the description of the SPL procedure raise_prices, shown in SPL Procedures on page 2-148, enter a query such as this example:
SELECT data FROM sysprocbody b, sysprocedures p
   WHERE b.procid = p.procid           -- join between the two catalog tables
   AND p.procname = 'raise_prices'     -- look for procedure named raise_prices
   AND b.datakey = 'D';                -- want user document
For external procedures, you can use a DOCUMENT clause at the end of the CREATE PROCEDURE statement, whether or not you use the END PROCEDURE keywords.
SPL Procedures
SPL procedures are UDRs written in Stored Procedure Language (SPL) that do not return a value. To write and register an SPL routine, use the CREATE PROCEDURE statement. Embed appropriate SQL and SPL statements between the
CREATE PROCEDURE and END PROCEDURE keywords. You can also follow the UDR definition with the DOCUMENT and WITH LISTING IN options.

SPL routines are parsed, optimized (as far as possible), and stored in the system catalog tables in executable format. The body of an SPL routine is stored in the sysprocbody system catalog table. Other information about the routine is stored in other system catalog tables, including sysprocedures, sysprocplan, and sysprocauth.

If the Statement Block portion of the CREATE PROCEDURE statement is empty, no operation takes place when you call the procedure. You might use such a dummy procedure in the development stage when you intend to establish the existence of a procedure but have not yet coded it.

If you specify an optional clause after the parameter list, you must place a semicolon after the clause that immediately precedes the Statement Block. The following example creates an SPL procedure:
CREATE PROCEDURE raise_prices ( per_cent INT )
   UPDATE stock SET unit_price = unit_price + (unit_price * (per_cent/100) );
END PROCEDURE
   DOCUMENT "USAGE: EXECUTE PROCEDURE raise_prices( xxx )",
            "xxx = percentage from 1 - 100 "
   WITH LISTING IN '/tmp/warn_file'
When the IFX_EXTEND_ROLE configuration parameter is set to ON, only users who have the built-in EXTEND role can create external procedures.
This example registers a user-defined procedure named showusers( ) that is written in the Java language:
CREATE PROCEDURE showusers() WITH (CLASS = "jvp")
   EXTERNAL NAME 'admin_jar:admin.showusers'
   LANGUAGE JAVA
The EXTERNAL NAME clause specifies that the Java implementation of the showusers( ) procedure is a method called showusers( ) in the admin Java class, which resides in the admin_jar jar file.
Important: The sysdbopen( ) and sysdbclose( ) procedures are exceptions to the scope rule for stored procedures. In ordinary UDR procedures, the scope of variables and statements is local. SET PDQPRIORITY and SET ENVIRONMENT statement settings do not persist when these SPL procedures exit. In sysdbopen( ) and sysdbclose( ) procedures, however, statements that set the session environment remain in effect until another statement resets the options.

For example, you might create the following procedure, which enables the engineer role for any user who connects to the database. A similar procedure could instead set the isolation level to Dirty Read or turn on the IMPLICIT_PDQ environment variable:
create procedure public.sysdbopen()
   set role engineer;
end procedure;
These procedures do not accept arguments or return values. The sysdbopen( ) and sysdbclose( ) procedures must be executed from the connection coserver and must be installed in each database where you want to execute them. You can create the following four SPL procedures.

Procedure Name: user.sysdbopen( )
Description: This procedure is executed when the specified user opens the database as the current database.

Procedure Name: public.sysdbopen( )
Description: If no user.sysdbopen( ) procedure applies, this procedure is executed when any user opens the database as the current database. To avoid duplicating SPL code, you can call this procedure from a user-specific procedure.

Procedure Name: user.sysdbclose( )
Description: This procedure is executed when the specified user closes the database, disconnects from the database server, or the user session ends. If the sysdbclose( ) procedure did not exist when a session opened the database, however, it is not executed when the session closes the database.

Procedure Name: public.sysdbclose( )
Description: If no user.sysdbclose( ) procedure applies, this procedure is executed when the specified user closes the database, disconnects from the database server, or the user session ends. If the sysdbclose( ) procedure did not exist when a session opened the database, however, it is not executed when the session closes the database.
See also the section Transactions in SPL Routines on page 5-72. Make sure that you set permissions appropriately to allow intended users to execute the SPL procedure statements. For example, if the SPL procedure executes a command that writes output to a local directory, permissions must be set to allow users to write to this directory. If you want the procedure to continue if permission failures occur, include an ON EXCEPTION error handler for this condition. See also the section Support for Roles and User Identity on page 5-72. Warning: If a sysdbopen( ) procedure fails, the database cannot be opened. If a sysdbclose( ) procedure fails, the failure is ignored. While you are
writing and debugging a sysdbopen( ) procedure, set the DEBUG environment variable to NODBPROC before you connect to the database. When DEBUG is set to NODBPROC, the procedure is not executed, and failures cannot prevent the database from opening. Failures from these procedures can be generated by the system or simulated by the procedures with the RAISE EXCEPTION statement of SPL. For more information, refer to the description of RAISE EXCEPTION in Chapter 3. Only a user with DBA privileges can install these procedures. For security reasons, non-DBAs cannot prevent execution of these procedures. For some applications, however, such as ad hoc query applications, users can execute commands and SQL statements that subsequently change the environment. For general information about how to write and install SPL procedures, refer to the chapter about SPL routines in IBM Informix Guide to SQL: Tutorial.
Related Information
Related statements: ALTER FUNCTION, ALTER PROCEDURE, ALTER ROUTINE, CREATE FUNCTION, CREATE FUNCTION FROM, CREATE PROCEDURE FROM, DROP FUNCTION, DROP PROCEDURE, DROP ROUTINE, EXECUTE FUNCTION, EXECUTE PROCEDURE, GRANT, PREPARE, REVOKE, and UPDATE STATISTICS. For a discussion of how to create and use SPL routines, see the IBM Informix Guide to SQL: Tutorial. For a discussion of external routines, see IBM Informix User-Defined Routines and Data Types Developer's Guide. For information about how to create C UDRs, see the IBM Informix DataBlade API Programmer's Guide. For more information on the NODEFDAC environment variable and the related system catalog tables (sysprocedures, sysprocplan, sysprocbody and sysprocauth), see the IBM Informix Guide to SQL: Reference.
CREATE PROCEDURE FROM

Use the CREATE PROCEDURE FROM statement to create a user-defined procedure from the text of a CREATE PROCEDURE statement that is stored in a file.
Syntax
CREATE PROCEDURE FROM file file_var
Element: file
Description: Pathname and filename of the file that contains the full text of a CREATE PROCEDURE statement. Default pathname is the current directory.
Restrictions: Must exist, and can contain only one CREATE PROCEDURE statement. See also Default Directory That Holds the File on page 2-154.

Element: file_var
Description: Name of a program variable that contains the file specification
Restrictions: Must be of a character data type; its contents have the same restrictions as file
Syntax: Language specific
Usage
You cannot create a user-defined procedure directly in an ESQL/C program. That is, the program cannot contain the CREATE PROCEDURE statement. To use a user-defined procedure in an ESQL/C program:
1. Create a source file with the CREATE PROCEDURE statement.
2. Use the CREATE PROCEDURE FROM statement to send the contents of this source file to the database server for execution.

The file can contain only one CREATE PROCEDURE statement. For example, suppose that the following CREATE PROCEDURE statement is in a separate file, called raise_pr.sql:
CREATE PROCEDURE raise_prices( per_cent int )
   UPDATE stock    -- increase by percentage
      SET unit_price = unit_price + ( unit_price * (per_cent / 100) );
END PROCEDURE;
In the ESQL/C program, you can access the raise_prices( ) SPL procedure with the following CREATE PROCEDURE FROM statement:
EXEC SQL create procedure from 'raise_pr.sql';
In Dynamic Server, if you are not sure whether the UDR in the file returns a value, use the CREATE ROUTINE FROM statement. When the IFX_EXTEND_ROLE configuration parameter is set to ON, only users who have the built-in EXTEND role can create external routines.
Related Information
Related statements: CREATE PROCEDURE, CREATE FUNCTION FROM, and CREATE ROUTINE FROM
CREATE ROLE
Use the CREATE ROLE statement to declare and register a new role. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE ROLE role
Element: role
Description: Name declared here for a role that the DBA creates
Restrictions: Must be unique among role and user names in the database. Maximum number of bytes is 32.
Usage
CREATE ROLE declares a new role and registers it in the system catalog. A role can associate a set of authorization identifiers with a set of access privileges on database objects. The system catalog maintains information about the roles (and their corresponding privileges) that are granted to users or to other roles.

Only the database administrator (DBA) can use CREATE ROLE to create a new role. The DBA can assign the privileges required for some work task to a role, such as engineer, and then use the GRANT statement to assign that role to specific users, instead of granting that set of privileges to each user individually.

The role name is an authorization identifier. It cannot be a user name that is known to the database server or to the operating system of the database server. The role name cannot already be listed in the username column of the sysusers system catalog table, nor in the grantor or grantee columns of the systabauth, syscolauth, sysfragauth, sysprocauth, or sysroleauth system catalog tables. In Dynamic Server, the role name also cannot match the name of any user or role that is already listed in the grantor or grantee columns of the sysxtdtypeauth system catalog table, nor any built-in role, such as EXTEND or NONE.

After a role is created, the DBA can use the GRANT statement to assign the role to PUBLIC, to users, or to other roles, and to grant specific privileges to the role. (A role cannot, however, hold database-level privileges.) After a role is granted successfully to a user or to PUBLIC, the user must use the SET ROLE statement to enable the role. Only then can the user exercise the privileges of the role. To create the role engineer, for example, enter the following statement:
CREATE ROLE engineer
To grant access privileges to the role engineer, the DBA can issue GRANT statements that include engineer in the list of grantees:
GRANT USAGE ON LANGUAGE SPL TO engineer
To assign the role engineer to user kaycee, the DBA could issue this statement:
GRANT ROLE engineer TO kaycee
To activate the role engineer, user kaycee must issue the following statement:
SET ROLE engineer
If this SET ROLE statement is successful, user kaycee acquires whatever privileges have been granted to the role engineer, in addition to any other privileges that kaycee already holds as an individual or as PUBLIC. A user can be granted several roles, but no more than one non-default role, as specified by SET ROLE, can be enabled for any user at a given time.

An exception to requiring SET ROLE to explicitly enable a role is any default role that the DBA specifies in the GRANT DEFAULT ROLE role TO user statement. If that statement succeeds, the default role is automatically enabled when user connects to the database. Any role can be a default role. (Similarly, users to whom the Dynamic Server DBSA grants the EXTEND role need not execute SET ROLE before they can create and drop external routines and shared libraries.)

CREATE ROLE, when used with the GRANT and SET ROLE statements, enables a DBA to create one set of privileges for a role and then grant the role to many users, instead of granting the same set of privileges individually to many users. With the GRANT DEFAULT ROLE and SET ROLE DEFAULT statements, default roles enable a DBA to assign privileges to a role that is activated automatically when any user who holds that default role connects to the database. This feature is useful when an application performs operations that require specific access privileges, but the application does not include SET ROLE statements.

The REVOKE statement can cancel access privileges of a role, remove users from a role, or cancel the default status of a role for one or more users. A role exists until either the DBA or a user to whom the role was granted with the WITH GRANT OPTION keywords uses the DROP ROLE statement to drop the role.
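For example, the DBA can make engineer the default role for user kaycee, and kaycee can later return to that default role explicitly after enabling some other role:

GRANT DEFAULT ROLE engineer TO kaycee;
SET ROLE DEFAULT;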
Related Information
Related statements: DROP ROLE, GRANT, REVOKE, and SET ROLE For a discussion of how to use roles, see the IBM Informix Database Design and Implementation Guide.
CREATE ROUTINE FROM

Use the CREATE ROUTINE FROM statement to create a user-defined routine from the text of a CREATE PROCEDURE or CREATE FUNCTION statement that is stored in a file.
Syntax
CREATE ROUTINE FROM file file_var
Element: file
Description: Pathname and filename for the text of a CREATE PROCEDURE or CREATE FUNCTION statement. Default path is the current directory.
Restrictions: Must exist and can contain only one CREATE FUNCTION or CREATE PROCEDURE statement.

Element: file_var
Description: Name of a program variable that contains the file specification
Restrictions: Must be a character data type; contents must satisfy the file restrictions
Usage
ESQL/C programs cannot use the CREATE FUNCTION or CREATE PROCEDURE statement directly to define a UDR. You must instead do this:
1. Create a source file with the CREATE FUNCTION or CREATE PROCEDURE statement.
2. Execute the CREATE ROUTINE FROM statement from an ESQL/C program to send the contents of this source file to the database server for execution.

The file that you specify can contain only one CREATE FUNCTION or CREATE PROCEDURE statement. The file specification that you provide is relative. If you include no pathname, the client application looks for the file in the current directory.

If you do not know at compile time whether the UDR in the file is a function or a procedure, use the CREATE ROUTINE FROM statement in the ESQL/C program. If you know whether the UDR is a function or a procedure, you can improve the readability of your code by using the matching SQL statement to access the source file:
- To access user-defined functions, use CREATE FUNCTION FROM.
- To access user-defined procedures, use CREATE PROCEDURE FROM.

When the IFX_EXTEND_ROLE configuration parameter is set to ON, only users to whom the Database System Administrator (DBSA) has granted the built-in EXTEND role can create external routines. Routines use the collating order that was in effect when they were created. See SET COLLATION for information about using non-default collation.
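For example, if the raise_pr.sql file from the CREATE PROCEDURE FROM example contains the routine definition, an ESQL/C program can register it without knowing whether it defines a function or a procedure:

EXEC SQL create routine from 'raise_pr.sql';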
Related Information
Related statements: CREATE FUNCTION, CREATE FUNCTION FROM, CREATE PROCEDURE, and CREATE PROCEDURE FROM
CREATE ROW TYPE

Use the CREATE ROW TYPE statement to create a named ROW data type. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE ROW TYPE row_type ( Field Definition , ... )
   [ UNDER supertype ]
Element: row_type
Description: Name that you declare here for a new named ROW data type

Element: supertype
Description: Name of the supertype within a data type inheritance hierarchy
Usage
The CREATE ROW TYPE statement declares a named ROW data type and registers it in the system catalog. You can assign a named ROW data type to a table or view to create a typed table or typed view. You can also define a column as a named ROW type. Although you can assign a ROW type to a table to define the schema of the table, ROW data types are not the same as table rows. Table rows consist of one or more columns; ROW data types consist of one or more fields, defined using the Field Definition syntax. A named ROW data type is valid in most contexts where you can specify a data type. Named ROW types are said to be strongly typed. No two named ROW types are equivalent, even if they are structurally equivalent. ROW types without identifiers are called unnamed ROW types. Any two unnamed ROW types are considered equivalent if they are structurally equivalent. For more information, see ROW Data Types on page 4-29. Privileges on named ROW type columns are the same as privileges on any column. For more information, see Table-Level Privileges on page 2-374. (To see what privileges you have on a column, check the syscolauth system catalog table, which is described in the IBM Informix Guide to SQL: Reference.)
For information about Resource and Under privileges and the ALL keyword in the context of privileges, see the GRANT statement. To find out what privileges exist on a ROW type, check the sysxtdtypes system catalog table for the owner name and the sysxtdtypeauth system catalog table for privileges on the ROW type that might have been granted to users or to roles. To find out what privileges you have on a given table, check the systabauth system catalog table. For more information on system catalog tables, see the IBM Informix Guide to SQL: Reference.
Creating a Subtype
In most cases, you add new fields when you create a named ROW type as a subtype of another named ROW type (its supertype). To create the fields of a named ROW type, use the field definition clause, as described in Field Definition on page 2-160. When you create a subtype, you must use the UNDER keyword to associate the supertype with the named ROW type that you want to create. The next example creates the employee_t type under the person_t type:
CREATE ROW TYPE employee_t (salary NUMERIC(10,2), bonus NUMERIC(10,2)) UNDER person_t;
The employee_t type inherits all the fields of person_t and has two additional fields: salary and bonus; but the person_t type is not altered.
Type Hierarchies
When you create a subtype, you create a type hierarchy. In a type hierarchy, each subtype that you create inherits its properties from a single supertype. If you create a named ROW type customer_t under person_t, customer_t inherits all the fields
Field Definition
Use the Field Definition clause to define a new field in a named ROW type. Field Definition:
field data_type NOT NULL
Element: data_type
Restrictions: See Restrictions on Serial and Simple-Large-Object Data Types on page 2-161.

Element: field
Restrictions: Must be unique among field names of this ROW type and of its supertype
The NOT NULL constraint on the named ROW type field applies to the corresponding columns when a typed table of the named ROW type is created.
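For example, the following sketch (the field names are illustrative) defines a named ROW type with a NOT NULL field and a typed table based on it; the name column of the person table inherits the NOT NULL constraint:

CREATE ROW TYPE person_t
   (name     VARCHAR(30) NOT NULL,
    address  VARCHAR(60),
    zipcode  CHAR(10));

CREATE TABLE person OF TYPE person_t;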
You cannot create a ROW type that has a BYTE or TEXT value that is stored in a separate storage space. That is, you cannot use the IN clause to specify the storage location. For example, the following example produces an error:
CREATE ROW TYPE row1 (field1 byte IN blobspace1) --INVALID CODE
A table hierarchy can include no more than one SERIAL and no more than one SERIAL8 column. If a supertable has a SERIAL column, none of its subtables can contain a SERIAL column (but a subtable can have a SERIAL8 column if no other subtable contains a SERIAL8 column). Consequently, when you create the named ROW types on which the table hierarchy is to be based, they can contain at most one SERIAL field and one SERIAL8 field among them.

You cannot set the starting SERIAL or SERIAL8 value in the CREATE ROW TYPE statement. To modify the value for a serial field, you must use either the MODIFY clause of the ALTER TABLE statement, or else use the INSERT statement to insert a value that is larger than the current maximum (or default) serial value.

Serial fields in ROW types have performance implications across a table hierarchy. To insert data into a subtable whose supertable (or its supertable) contains the serial counter, the database server must also open the supertable, update the serial value, and close the supertable, thus adding extra overhead. In contexts where these restrictions or performance issues for SERIAL and SERIAL8 data types conflict with your design goals, you might consider using sequence objects to emulate the functionality of serial fields or serial columns.
Related Information
Related statements: DROP ROW TYPE, CREATE TABLE, CREATE CAST, GRANT, and REVOKE For a discussion of named ROW types, see the IBM Informix Database Design and Implementation Guide and the IBM Informix Guide to SQL: Reference.
CREATE SCHEMA
Use the CREATE SCHEMA statement to issue a block of data definition language (DDL) and GRANT statements as a unit. Use this statement with DBAccess.
Syntax
CREATE SCHEMA AUTHORIZATION user
   CREATE TABLE Statement (1)
   CREATE VIEW Statement (2)
   GRANT Statement (3)
   CREATE INDEX Statement (4) (5)
   CREATE SYNONYM Statement (4) (6)
   CREATE TRIGGER Statement (4) (7)
   CREATE OPTICAL CLUSTER Statement (4) (8) (9)
   CREATE SEQUENCE Statement (4) (8) (10)
   CREATE ROW TYPE Statement (4) (8) (11)
   CREATE OPAQUE TYPE Statement (4) (8) (12)
   CREATE DISTINCT TYPE Statement (4) (8) (13)
   CREATE CAST Statement (4) (8) (14)
   ;
Notes:
1   See CREATE TABLE on page 2-171
2   See CREATE VIEW on page 2-247
3   See GRANT on page 2-371
4   Informix extension
5   See CREATE INDEX on page 2-116
6   See CREATE SYNONYM on page 2-168
7   See CREATE TRIGGER on page 2-216
8   Dynamic Server only
9   Optical Subsystem only. See the IBM Informix: Optical Subsystem Guide.
10  See CREATE SEQUENCE on page 2-165
11  See CREATE ROW TYPE on page 2-158
12  See CREATE OPAQUE TYPE on page 2-136
13  See CREATE DISTINCT TYPE on page 2-93
14 See CREATE CAST on page 2-87
Element: user
Description: User who owns the database objects that this statement creates
Restrictions: If you have DBA privileges, you can specify the name of any user. Otherwise, you must have the Resource privilege, and you must specify your own user name.
Syntax: Owner Name, page 5-43
Usage
The CREATE SCHEMA statement allows the DBA to specify an owner for all database objects that the CREATE SCHEMA statement creates. You cannot issue CREATE SCHEMA until you have created the database that stores the objects. Users with the Resource privilege can create a schema for themselves. In this case, user must be the name of the person with the Resource privilege who is running the CREATE SCHEMA statement. Anyone with the DBA privilege can also create a schema for someone else. In this case, user can specify a user other than the person who is running the CREATE SCHEMA statement. You can put CREATE and GRANT statements in any logical order, as the following example shows. Statements are considered part of the CREATE SCHEMA statement until a semicolon ( ; ) or an end-of-file symbol is reached.
CREATE SCHEMA AUTHORIZATION sarah
   CREATE TABLE mytable (mytime DATE, mytext TEXT)
   GRANT SELECT, UPDATE, DELETE ON mytable TO rick
   CREATE VIEW myview AS
      SELECT * FROM mytable WHERE mytime > '12/31/2004'
   CREATE INDEX idxtime ON mytable (mytime);
Related Information
Related statements: CREATE CAST, CREATE DISTINCT TYPE, CREATE INDEX, CREATE OPAQUE TYPE, CREATE OPCLASS, CREATE ROW TYPE, CREATE SEQUENCE, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, and GRANT For a discussion of how to create a database, see the IBM Informix Database Design and Implementation Guide.
CREATE SCRATCH TABLE

Use the CREATE SCRATCH TABLE statement to create a nonlogging temporary table in the current database. Only Extended Parallel Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE SCRATCH TABLE table ( Scratch Table Definition (1) (2) )
   [ IN { dbspace | dbslice } ]
   [ FRAGMENT BY Clause (3) ]
   [ LOCK MODE clause ]

Notes:
1   See Column Definition on page 2-173
2   See Multiple-Column Constraint Format on page 2-211
3   See FRAGMENT BY Clause on page 2-190

Element: dbslice
Description: Name of dbslice to store table
Restrictions: Must already exist
Syntax: Identifier, page 5-22

Element: dbspace
Description: Name of dbspace to store table. Default is the dbspace that stores the current database.
Restrictions: Must already exist
Syntax: Identifier, page 5-22

Element: table
Description: Name that you declare here for a nonlogging temporary table
Restrictions: Must be unique in the current session
Syntax: Database Object Name, page 5-17
Usage
The CREATE SCRATCH TABLE statement is a special case of the CREATE Temporary TABLE statement. For the complete syntax and semantics of the CREATE SCRATCH TABLE statement, see CREATE Temporary TABLE on page 2-209.
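For example, the following sketch (the table and column names are illustrative) creates a scratch table to hold intermediate results for the current session:

CREATE SCRATCH TABLE tmp_results (cust_num INT, total MONEY(8,2));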
CREATE SEQUENCE
Use the CREATE SEQUENCE statement to create a sequence database object from which multiple users can generate unique integers. Only Dynamic Server supports this statement, which is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE SEQUENCE owner . sequence
   [ INCREMENT [ BY ] step ]
   [ START [ WITH ] origin ]
   [ MAXVALUE max | NOMAXVALUE ]
   [ MINVALUE min | NOMINVALUE ]
   [ CYCLE | NOCYCLE ]
   [ CACHE size | NOCACHE ]
   [ ORDER | NOORDER ]
Element: max
Description: Upper limit of values

Element: min
Description: Lower limit of values

Element: origin
Description: First number in the sequence

Element: owner
Description: Owner of the sequence

Element: sequence
Description: Name that you declare here for the new sequence

Element: size
Description: Number of values that are preallocated in memory

Element: step
Description: Interval between successive numbers in the sequence
Usage
A sequence (sometimes called a sequence generator or sequence object) returns a monotonically ascending or descending series of unique integers, one at a time. The CREATE SEQUENCE statement defines a new sequence object, declares its identifier, and registers it in the syssequences system catalog table.
Authorized users of a sequence can request a new value by including the sequence.NEXTVAL expression in DML statements. The sequence.CURRVAL expression returns the current value of the specified sequence. NEXTVAL and CURRVAL expressions are valid only within SELECT, DELETE, INSERT, and UPDATE statements; Dynamic Server returns an error if you attempt to invoke the built-in NEXTVAL or CURRVAL functions in any other context.

Generated values logically resemble the SERIAL8 data type, but can be negative, and are unique within the sequence. Because the database server generates the values, sequences support a much higher level of concurrency than a serial column can. The values are independent of transactions; a generated value cannot be rolled back, even if the transaction in which it was generated fails. You can use a sequence to generate primary key values automatically, using one sequence for many tables, or each table can have its own sequence.

CREATE SEQUENCE can specify the following characteristics of a sequence:
v Initial value
v Size and sign of the increment between values
v Maximum and minimum values
v Whether the sequence recycles values after reaching its limit
v How many values are preallocated in memory for rapid access

A database can support multiple sequences concurrently, but the name of a sequence (or in an ANSI-compliant database, the owner.sequence combination) must be unique within the current database among the names of tables, temporary tables, views, synonyms, and sequences. An error occurs if you include contradictory options, such as specifying both the MINVALUE and NOMINVALUE options, or both CACHE and NOCACHE.
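For illustration only, the following sketch shows a sequence being used in DML statements; the orders table and its columns are hypothetical and are not taken from this manual:

CREATE SEQUENCE order_seq;

INSERT INTO orders (order_num, customer_num)
   VALUES (order_seq.NEXTVAL, 104);

SELECT order_seq.CURRVAL FROM systables WHERE tabid = 1;

The SELECT statement reads the current value through any convenient single-row query; here the systables row with tabid 1 simply supplies one row.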
INCREMENT BY Option
Use the INCREMENT BY option to specify the interval between successive numbers in the sequence. The BY keyword is optional. The interval, or step value, can be a positive whole number (for an ascending sequence) or a negative whole number (for a descending sequence) in the INT8 range. If you do not specify any step value, the default interval between successive generated values is 1, and the sequence is an ascending sequence.
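As a sketch of how the options in this section can be combined (the sequence name and option values are illustrative only), the following statement defines an ascending sequence:

CREATE SEQUENCE ticket_seq
   INCREMENT BY 10
   START WITH 1000
   MAXVALUE 99999
   NOCYCLE
   CACHE 20
   NOORDER;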
If you do not specify a max value, the default is NOMAXVALUE. For ascending sequences, this default supports values up to the upper limit of the INT8 range (2^63 - 1); for descending sequences, it supports values that are less than or equal to -1.
Related Information
Related statements: ALTER SEQUENCE, DROP SEQUENCE, RENAME SEQUENCE, CREATE SYNONYM, DROP SYNONYM, GRANT, and REVOKE For information about the syssequences system catalog table in which sequence objects are registered, see the IBM Informix Guide to SQL: Reference. For information about initializing a sequence and generating or reading values from a sequence, see NEXTVAL and CURRVAL Operators (IDS) on page 4-61.
CREATE SYNONYM
Use the CREATE SYNONYM statement to declare and register an alternative name for an existing table, view, or sequence object. This statement is an extension to the ANSI/ISO standard for SQL.
Syntax
CREATE [ PUBLIC | PRIVATE ] SYNONYM synonym FOR { table | view | sequence }
Element: sequence
   Description: Name of a local sequence

Element: table, view
   Description: Name of table or view for which synonym is being created

Element: synonym
   Description: Synonym declared here for the name of a table, view, or sequence
Usage
Users have the same privileges for a synonym that they have for the database object that the synonym references. The syssynonyms, syssyntable, and systables system catalog tables maintain information about synonyms. You cannot create a synonym for a synonym in the same database. The identifier of the synonym must be unique among the names of tables, temporary tables, views, and sequence objects in the same database. (See, however, the section Synonyms with the Same Name on page 2-169.) Once a synonym is created, it persists until the owner executes the DROP SYNONYM statement. (This persistence distinguishes a synonym from an alias that you can declare in the FROM clause of a SELECT statement; the alias is in scope only during execution of that SELECT statement.) If a synonym refers to a table, view, or sequence in the same database, the synonym is automatically dropped if the referenced table, view, or sequence object is dropped.
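For example, assuming that a customer table exists in the current database (the synonym name here is illustrative), a one-word alternative name can be declared as follows:

CREATE SYNONYM cust FOR customer

SQL statements can then name cust wherever they would otherwise name customer.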
You can also create a synonym for a table or view that exists in a database of a database server that is not your current database server. Both database servers must be online when you create the synonym. In a network, the remote database
server verifies that the table or view referenced by the synonym exists when you create the synonym. The next example creates a synonym for a table supported by a remote database server:
CREATE SYNONYM mysum FOR payables@phoenix:jean.summary
The identifier mysum now refers to the table jean.summary, which is in the payables database on the phoenix database server. If the summary table is dropped from the payables database, the mysum synonym is left intact. Subsequent attempts to use mysum return the error: Table not found.

Dynamic Server, however, does not support synonyms for these external objects:
v Typed tables (including any table that is part of a table hierarchy)
v Tables or views that contain any extended data types
v Sequence objects outside the local database
In a database that is not ANSI-compliant, no two public synonyms can have the same identifier, and the identifier of a synonym must also be unique among the names of tables, views, and sequences in the same database. The owner.synonym combination of a private synonym must be unique among all the synonyms in the database. That is, more than one private synonym with the same name can exist in the same database, but a different user must own each of these synonyms. The same user cannot create both a private and a public synonym that have the same name. For example, the following code generates an error:
CREATE SYNONYM our_custs FOR customer;
CREATE PRIVATE SYNONYM our_custs FOR cust_calls;   -- ERROR!!!
A private synonym can be declared with the same name as a public synonym only if the two synonyms have different owners. If you own a private synonym, and a public synonym exists with the same name, the database server resolves the unqualified name as the private synonym. (In this case, you must specify owner.synonym to reference the public synonym.) If you use DROP SYNONYM with the unqualified synonym identifier when your private synonym and the public synonym of another user both have the same identifier, only your private
synonym is dropped. If you repeat the same DROP SYNONYM statement, the database server drops the public synonym.
Chaining Synonyms
If you create a synonym for a table or view that is not in the current database, and this table or view is dropped, the synonym stays in place. You can create a new synonym for the dropped table or view with the name of the dropped table or view as the synonym, which points to another external or remote table or view. (Synonyms for external sequence objects are not supported.) In this way, you can move a table or view to a new location and chain synonyms together so that the original synonyms remain valid. (You can chain up to 16 synonyms in this manner.)

The following steps chain two synonyms together for the customer table, which will ultimately reside on the zoo database server (the CREATE TABLE statements are not complete):
1. In the stores_demo database on the database server that is called training, issue the following statement:
CREATE TABLE customer (lname CHAR(15)...)
The synonym cust on the accntg database server now points to the customer table on the zoo database server.

The following steps show an example of chaining two synonyms together and changing the table to which a synonym points:
1. On the database server called training, issue the following statement:
CREATE TABLE customer (lname CHAR(15)...)
The synonym cust on the accntg database server now points to a new version of the customer table on the training database server.
Related Information
Related statement: DROP SYNONYM. For a discussion of concepts related to synonyms, see the IBM Informix Database Design and Implementation Guide.
CREATE TABLE
Use the CREATE TABLE statement to create a new permanent table in the current database, to place data-integrity constraints on columns, to designate where the table is stored, to define a fragmentation strategy, to indicate the size of its initial and subsequent extents, and to specify how to lock the new table. You can use the CREATE TABLE statement to create relational-database tables or to create typed tables (object-relational tables). For information on how to create temporary tables, see CREATE Temporary TABLE on page 2-209.
Syntax
CREATE [ STANDARD | RAW | STATIC (1) | OPERATIONAL (1) ] TABLE table Table Definition
Table Definition:
   ( Column Definition (2) [ , Column Definition (2) | , Multiple-Column Constraint (3) ] ... )  [ Options (4) ]
   | OF TYPE Clause (7)
Notes:
1  Extended Parallel Server only
2  See page 2-173
3  See page 2-184
4  See page 2-187
5  Informix extension
6  Dynamic Server only
7  See page 2-204
Element: table
   Description: Name that you declare here for the new table
   Restrictions: Must be unique among the names of tables, synonyms, views, and sequences in the database
   Syntax: Identifier, p. 5-22
Usage
The CREATE TABLE statement can include various clauses, some of which are identified in the following list. Clauses whose names are followed by (IDS) are not supported by Extended Parallel Server. Similarly, clauses whose names are followed by (XPS) are not supported by Dynamic Server.
Clause                         Page    What the Clause Specifies
Logging Options                2-172   Logging characteristics of the new table
Column Definition              2-173   Name and other attributes of an individual column
DEFAULT                        2-174   Default value for an individual column
Single-Column Constraint       2-176   Data-integrity constraints on an individual column
REFERENCES                     2-179   Referential-integrity constraints with other columns
CHECK                          2-181   Check constraints with other columns
Constraint Definition (IDS)    2-182   Name and other attributes of a constraint
Multiple-Column Constraint     2-184   Data-integrity constraints on a set of columns
WITH CRDCOLS (IDS)             2-188   Two shadow columns for Enterprise Replication
Storage Options                2-188   Attributes of where the table is physically stored
IN                             2-189   Storage object to hold the new table (or part of it)
FRAGMENT BY or PARTITION BY    2-190   Distribution scheme of a fragmented table
WITH ROWIDS (IDS)              2-191   Hidden column in a fragmented table
RANGE METHOD (XPS)             2-195   Fragmentation strategy based on column values
PUT (IDS)                      2-199   Storage location for BLOB or CLOB column values
EXTENT SIZE                    2-201   Size of the first and subsequent extents
USING ACCESS METHOD (IDS)      2-202   How to access the new table
LOCK MODE                      2-203   Locking granularity of the new table
OF TYPE (IDS)                  2-204   Named ROW type of a new typed table
UNDER (IDS)                    2-205   Supertable of a new subtable in a table hierarchy
When you create a new table, every column must have a data type associated with it. The table name must be unique among all the names of tables, views, sequences, and synonyms within the same database, but the names of columns need only be unique among the column names of the same table. In an ANSI-compliant database, the combination owner.table must be unique among tables, synonyms, views, and sequence objects within the database. In DBAccess, using CREATE TABLE outside the CREATE SCHEMA statement generates warnings if you use the -ansi flag or if you set DBANSIWARN. In ESQL/C, using CREATE TABLE generates warnings if you compile with the -ansi flag or set DBANSIWARN. For information about the DBANSIWARN environment variable, see the IBM Informix Guide to SQL: Reference.
Logging Options
Use the Logging Type options to specify logging characteristics that can improve performance in various bulk operations on the table. Other than the default option (STANDARD) that is used for OLTP databases, these logging options are used primarily to improve performance in data warehousing databases. A table can have either of the following logging characteristics.

STANDARD
   Logging table that allows rollback, recovery, and restoration from archives. This type is the default.
   Use this type of table for all the recovery and constraints functionality that OLTP databases require.

RAW
   Nonlogging table that cannot have indexes or referential constraints but can be updated. Use this type of table for quickly loading data.
By using raw tables with Extended Parallel Server, you can take advantage of light appends and avoid the overhead of logging, checking constraints, and building indexes.

Warning: Use raw tables for fast loading of data, but set the logging type to STANDARD and perform a level-0 backup before you use the table in a transaction or modify the data within the table. If you must use a raw table within a transaction, either set the isolation level to Repeatable Read or lock the table in exclusive mode to prevent concurrency problems.

Extended Parallel Server supports two additional logging type options.

OPERATIONAL
   Logging table that uses light appends; it cannot be restored from archive. Use this type on tables that are refreshed frequently, because light appends allow the quick addition of many rows.

STATIC
   Nonlogging table that can contain index and referential constraints but cannot be updated. Use this type for read-only operations, because no logging or locking overhead occurs.
For more information on these logging types of tables, refer to your IBM Informix Administrator's Guide.
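For illustration, a minimal sketch of the load pattern that the warning above describes might look like the following; the table, column, and dbspace names are hypothetical, and the ALTER TABLE statement relies on the TYPE option described under ALTER TABLE on page 2-43:

CREATE RAW TABLE daily_sales (
   sale_date DATE,
   store_num INTEGER,
   amount    MONEY) IN dbsp_load;

-- Load the rows (for example, with a DB-Access LOAD statement or an
-- external load utility), perform a level-0 backup, and then restore
-- standard logging before using the table in transactions:

ALTER TABLE daily_sales TYPE (STANDARD);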
Column Definition
Use the column definition portion of CREATE TABLE to list the name, data type, default values, and constraints of a single column.

Column Definition:

   column  Data Type (1)  [ DEFAULT Clause (2) ]  [ Single-Column Constraint (3) ]
Notes:
1  See page 4-18
2  See page 2-174
3  See page 2-176
Element: column
   Description: Name of a column in the table
   Restrictions: Must be unique in this table
   Syntax: Identifier, p. 5-22
Because the maximum row size is 32,767 bytes, no more than approximately 195 columns in the table can be of the data types BYTE, TEXT, ROW, LVARCHAR, NVARCHAR, VARCHAR, and varying-length UDTs. (The upper limit on columns of these data types also depends on other data describing the table that the database server stores in the same partition.) No more than approximately 97 columns can be of COLLECTION data types (SET, LIST, and MULTISET).

As with any SQL identifier, syntactic ambiguities (and sometimes error messages or unexpected behavior) can occur if the column name is a keyword, if it is the same as the table name, or if it is the same as the name of another table that you subsequently join with the table. For information about the keywords of Dynamic Server, see Appendix A, Reserved Words for IBM Informix Dynamic Server, on page A-1. For more information on the keywords of Extended Parallel Server, see Appendix B, Reserved Words for IBM Informix Extended Parallel Server, on page B-1. For more information on the ambiguities that can occur, see Use of Keywords as Identifiers on page 5-23.

In Dynamic Server, if you define a column of a table to be of a named ROW type, the table does not adopt any constraints of the named ROW type.
DEFAULT Clause
Use the DEFAULT clause to specify the default value for the database server to insert into a column when no explicit value for the column is specified.

DEFAULT Clause:

   DEFAULT { NULL | literal | USER | CURRENT [ DATETIME Field Qualifier ] | TODAY | SITENAME | DBSERVERNAME }
Element: literal
   Description: String of alphabetic or numeric characters
DATE literals must be of the format that the DBDATE (or else GL_DATE) environment variable specifies. In the default locale, if neither DBDATE nor GL_DATE is set, date literals must be of the mm/dd/yyyy format.
These column sizes are recommended because, if the column length is too small to store the default value during INSERT or ALTER TABLE operations, the database server returns an error. In Dynamic Server, you cannot designate a built-in function (that is, CURRENT, USER, TODAY, SITENAME, or DBSERVERNAME) as the default value for a column that holds an opaque or distinct data type. In addition, larger column sizes are required if the data values are encrypted, or if they are encoded in the Unicode character set of the UTF-8 locale. (See the description of the SET ENCRYPTION statement later in this chapter for more information about the storage size requirements for encrypted data.) For descriptions of these functions, see Constant Expressions on page 4-53. The following example creates a table called accounts. In accounts, the acc_num,acc_type, and acc_descr columns have literal default values. The acc_id column defaults to the login name of the user.
CREATE TABLE accounts (
   acc_num   INTEGER  DEFAULT 1,
   acc_type  CHAR(1)  DEFAULT 'A',
   acc_descr CHAR(20) DEFAULT 'New Account',
   acc_id    CHAR(32) DEFAULT USER)
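As a further sketch (the table and column names are hypothetical), built-in functions can serve as defaults for columns of the appropriate data types:

CREATE TABLE audit_log (
   entry_date  DATE DEFAULT TODAY,
   entry_time  DATETIME YEAR TO SECOND DEFAULT CURRENT YEAR TO SECOND,
   entered_by  CHAR(32) DEFAULT USER,
   server_name CHAR(128) DEFAULT DBSERVERNAME,
   remarks     VARCHAR(255))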
Single-Column Constraint Format:

   { DISTINCT | UNIQUE | PRIMARY KEY | REFERENCES Clause | CHECK Clause }  [ Constraint Definition ]
The following example creates a standard table with two constraints: num, a primary-key constraint on the acc_num column; and code, a unique constraint on the acc_code column:
CREATE TABLE accounts (
   acc_num   INTEGER PRIMARY KEY CONSTRAINT num,
   acc_code  INTEGER UNIQUE CONSTRAINT code,
   acc_descr CHAR(30))
The types of constraints used in this example are defined in sections that follow.
You cannot specify NULL as the explicit default value for a column if you also specify the NOT NULL constraint.
Opaque data types support a unique constraint only where a secondary-access method supports uniqueness for that type. The default secondary-access method is a generic B-tree, which supports the equal( ) operator function. Therefore, if the definition of the opaque type includes the equal( ) function, a column of that opaque type can have a unique constraint. The following example creates a simple table that has a unique constraint on one of its columns:
CREATE TABLE accounts (acc_name CHAR(12), acc_num SERIAL UNIQUE CONSTRAINT acc_num)
For an explanation of the constraint name, refer to Declaring a Constraint Name on page 2-182.
The next example creates a table with a primary-key constraint named acc_num on the acc_num column:

CREATE TABLE accounts (acc_name CHAR(12),
   acc_num SERIAL PRIMARY KEY CONSTRAINT acc_num)
REFERENCES Clause
Use the REFERENCES clause to establish a referential relationship:
v Within a table (that is, between two columns of the same table)
v Between two tables (in other words, create a foreign key)

REFERENCES Clause:

   REFERENCES table [ ( column, ... ) ]  [ ON DELETE CASCADE (1) ]
Notes:
1  Informix extension

Element: column
   Description: A referenced column
   Restrictions: See Restrictions on Referential Constraints on page 2-179
   Syntax: Identifier, p. 5-22

Element: table
   Description: The referenced table
   Restrictions: Must reside in the same database as the referencing table
   Syntax: Database Object Name, p. 5-17
The referencing column (the column being defined) is the column or set of columns that refers to the referenced column or set of columns. The referencing column can contain NULL and duplicate values, but values in the referenced column (or set of columns) must be unique. The relationship between referenced and referencing columns is called a parent-child relationship, where the parent is the referenced column (primary key) and the child is the referencing column (foreign key). The referential constraint establishes this parent-child relationship. When you create a referential constraint, the database server automatically creates an internal index on the constrained column or columns.
Restrictions on Referential Constraints:
v When you use the single-column constraint format, you can reference only one column.
v In Dynamic Server, when you use the multiple-column constraint format, the maximum number of columns in the REFERENCES clause is 16, and the total length of the columns cannot exceed 390 bytes if the page size is 2 kilobytes. (The maximum length increases with the page size.)
v In Extended Parallel Server, when you use the multiple-column constraint format, the maximum number of columns in the REFERENCES clause is 16, and the total length of the columns cannot exceed 380 bytes.
When you use the single-column constraint format, you do not explicitly specify the ref_num column as a foreign key. To use the FOREIGN KEY keyword, use the Multiple-Column Constraint Format on page 2-184.
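The ref_num column mentioned above comes from an example like the following sketch (reconstructed here for illustration; the table definitions are abridged):

CREATE TABLE accounts (
   acc_num   INTEGER PRIMARY KEY,
   acc_descr CHAR(20));

CREATE TABLE sub_accounts (
   sub_acc   INTEGER PRIMARY KEY,
   ref_num   INTEGER REFERENCES accounts (acc_num),
   sub_descr CHAR(20));

Here ref_num is a foreign key that references the acc_num primary key of the accounts table, even though the FOREIGN KEY keywords do not appear.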
If you do not specify cascading deletes, the default behavior of the database server prevents you from deleting data in a table if other tables reference it. If you specify this option, then when you later delete a row in the parent table, the database server also deletes any rows associated with that row (foreign keys) in a child table. The principal advantage to the cascading-deletes feature is that it allows you to reduce the quantity of SQL statements you need to perform delete actions.

For example, the all_candy table contains the candy_num column as a primary key. The hard_candy table refers to the candy_num column as a foreign key. The following CREATE TABLE statement creates the hard_candy table with the cascading-delete option on the foreign key:
CREATE TABLE all_candy (
   candy_num   SERIAL PRIMARY KEY,
   candy_maker CHAR(25));

CREATE TABLE hard_candy (
   candy_num    INT,
   candy_flavor CHAR(20),
   FOREIGN KEY (candy_num) REFERENCES all_candy
      ON DELETE CASCADE)
Because ON DELETE CASCADE is specified for the dependent table, when a row of the all_candy table is deleted, the corresponding rows of the hard_candy table are also deleted. For information about syntax restrictions and locking implications when you delete rows from tables that have cascading deletes, see Considerations When Tables Have Cascading Deletes on page 2-276.
CHECK Clause
Use the CHECK clause to designate conditions that must be met before data can be assigned to a column during an INSERT or UPDATE statement.

CHECK Clause:

   CHECK ( Condition )
In Dynamic Server, the condition cannot include a user-defined routine. During an insert or update, if the check constraint of a row evaluates to false, the database server returns an error. The database server does not return an error if a row evaluates to NULL for a check constraint. In some cases, you might want to use both a check constraint and a NOT NULL constraint.
When you specify a two-digit year in the condition, the DBCENTURY environment variable can produce unpredictable results if the condition depends on an abbreviated year value. For more information about DBCENTURY, see the IBM Informix Guide to SQL: Reference.

More generally, the database server saves the settings of environment variables from the time of creation of check constraints. If any of these settings are subsequently changed in a way that can affect the evaluation of a condition in a check constraint, the new settings are disregarded, and the original environment variable settings are used when the condition is evaluated.

With a BYTE or TEXT column, you can check for NULL or not-NULL values. This constraint is the only constraint allowed on a BYTE or TEXT column.
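The single-column check constraints that the next paragraph discusses can be sketched as follows (an illustrative reconstruction; the exact example in the printed manual may differ):

CREATE TABLE my_accounts (
   chk_id SERIAL PRIMARY KEY,
   acct1  MONEY CHECK (0 < acct1 AND acct1 < 99999),
   acct2  MONEY CHECK (0 < acct2 AND acct2 < 99999))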
Both acct1 and acct2 are columns of MONEY data type whose values must be between 0 and 99999. If, however, you want to test that acct1 has a larger balance than acct2, you cannot use the single-column constraint format. To create a constraint that checks values in more than one column, you must use the Multiple-Column Constraint Format on page 2-184.
Constraint Definition
Use the constraint definition portion of CREATE TABLE for these purposes:
v To declare a name for the constraint
v To set a constraint to disabled, enabled, or filtering mode (IDS)

Constraint Definition:
CONSTRAINT constraint

Element: constraint
   Description: Name of constraint
If you define a constraint on a column but do not declare a constraint name, the database server creates a constraint and adds a row for that constraint in the sysconstraints system catalog table. The database server also generates an identifier and adds a row to the sysindexes system catalog table for each new primary-key, unique, or referential constraint that does not share an index with an existing constraint. Even if you declare a name for a constraint, the database server generates the name that appears in the sysindexes table.

If you want, you can specify a meaningful name for the constraint. The name must be unique among the names of constraints and indexes in the database. Constraint names appear in error messages having to do with constraint violations. You can use this name when you use the DROP CONSTRAINT clause of the ALTER TABLE statement. In Dynamic Server, you also specify a constraint name when you change the mode of a constraint with the SET Database Object Mode statement or the SET Transaction Mode statement, and in the DROP INDEX statement for constraints that are implemented as indexes with user-defined names.

In an ANSI-compliant database, when you declare the name of a constraint of any type, the combination of the owner name and constraint name must be unique within the database. In Dynamic Server, the system catalog table that holds information about indexes is the sysindices table.

Constraint Names That the Database Server Generates: If you do not specify a constraint name, the database server generates a constraint name using the following template:
<constraint_type><tabid>_<constraintid>
In this template, constraint_type is the letter u for unique or primary-key constraints, r for referential constraints, c for check constraints, and n for NOT NULL constraints. In the template, tabid and constraintid are values from the tabid and constrid columns of the systables and sysconstraints system catalog tables, respectively. For example, the constraint name for a unique constraint might look like u111_14 (with a leading blank space). If the generated name conflicts with an existing identifier, the database server returns an error, and you must then supply an explicit constraint name. The generated index name in sysindexes (or sysindices) has this format:
[blankspace]<tabid>_<constraintid>
For example, the index name might be something like " 111_14" (quotation marks used here to show the blank space).
The constraint-mode options (IDS) have the following effects.

ENABLED
   Enforces the constraint during INSERT, DELETE, and UPDATE operations. If a target row causes a violation of the constraint, the statement fails. This mode is the default.

FILTERING
   Enforces the constraint during INSERT, DELETE, and UPDATE operations. If a target row causes a violation of the constraint, the statement continues processing. The database server writes the row in question to the violations table associated with the target table and writes diagnostic information to the associated diagnostics table.
If you choose filtering mode, you can specify the WITHOUT ERROR or WITH ERROR options. The following list explains these options.

WITHOUT ERROR
   Does not return an integrity-violation error when a filtering-mode constraint is violated during an insert, delete, or update operation. This is the default error option.

WITH ERROR
   Returns an integrity-violation error when a filtering-mode constraint is violated during an insert, delete, or update operation.
To reset the constraint mode of a table, see SET Database Object Mode on page 2-539. For information about where the database server stores rows that violate a constraint set to FILTERING, see START VIOLATIONS TABLE on page 2-609.
Multiple-Column Constraint Format:

Element: column
   Description: Columns on which to place constraint
You can include a maximum of 16 columns in a constraint list. For databases where the page size is two kilobytes, the total length of the list of columns cannot exceed 380 bytes. When you define a unique constraint (by using the UNIQUE or DISTINCT keyword), a column cannot appear in the constraint list more than once.

Using the multiple-column constraint format, you can perform these tasks:
v Create data-integrity constraints for a set of one or more columns
v Specify a mnemonic name for a constraint
v Specify the constraint-mode option that controls the behavior of a constraint during insert, delete, and update operations

When you use this format, you can create composite primary and foreign keys, or define check constraints that compare data in different columns. See also the section Differences Between a Unique Constraint and a Unique Index on page 2-178.
You can find detailed discussions of specific constraints in the following sections:
Constraint      For more information, see                               For an example, see
CHECK           CHECK Clause on page 2-181                              Defining Check Constraints Across Columns on page 2-186
DISTINCT        Using the UNIQUE or DISTINCT Constraints on page 2-177  Examples of the Multiple-Column Constraint Format on page 2-186
FOREIGN KEY     Using the FOREIGN KEY Constraint on page 2-186          Defining Composite Primary and Foreign Keys on page 2-187
PRIMARY KEY     Using the PRIMARY KEY Constraint on page 2-178          Defining Composite Primary and Foreign Keys on page 2-187
UNIQUE          Using the UNIQUE or DISTINCT Constraints on page 2-177  Examples of the Multiple-Column Constraint Format on page 2-186
For constraint names, see Declaring a Constraint Name on page 2-182.

Defining Check Constraints Across Columns: When you use the multiple-column constraint format to define check constraints, a check constraint can apply to more than one column in the same table. (You cannot, however, create a check constraint whose condition uses a value from a column in another table.) This example compares two columns, acct1 and acct2, in the new table:
CREATE TABLE my_accounts (
   chk_id SERIAL PRIMARY KEY,
   acct1  MONEY,
   acct2  MONEY,
   CHECK (0 < acct1 AND acct1 < 99999),
   CHECK (0 < acct2 AND acct2 < 99999),
   CHECK (acct1 > acct2))
In this example, the acct1 column must be greater than the acct2 column, or the insert or update fails.

Defining Composite Primary and Foreign Keys: When you use the multiple-column constraint format, you can create a composite key. A composite key specifies multiple columns for a primary-key or foreign-key constraint. The next example creates two tables. The first table has a composite key that acts as a primary key, and the second table has a composite key that acts as a foreign key.
CREATE TABLE accounts (
   acc_num   INTEGER,
   acc_type  INTEGER,
   acc_descr CHAR(20),
   PRIMARY KEY (acc_num, acc_type))

CREATE TABLE sub_accounts (
   sub_acc   INTEGER PRIMARY KEY,
   ref_num   INTEGER NOT NULL,
   ref_type  INTEGER NOT NULL,
   sub_descr CHAR(20),
   FOREIGN KEY (ref_num, ref_type) REFERENCES accounts (acc_num, acc_type))
In this example, the foreign key of the sub_accounts table, ref_num and ref_type, references the composite key, acc_num and acc_type, in the accounts table. If, during an insert or update, you tried to insert a row into the sub_accounts table whose value for ref_num and ref_type did not exactly correspond to the values for acc_num and acc_type in an existing row in the accounts table, the database server would return an error. A referential constraint must have a one-to-one relationship between referencing and referenced columns. In other words, if the primary key is a set of columns (a composite key), then the foreign key also must be a set of columns that corresponds to the composite key. Because of the default behavior of the database server, when you create the foreign-key reference, you do not have to reference the composite-key columns (acc_num and acc_type) explicitly. You can rewrite the references section of the previous example as follows:
FOREIGN KEY (ref_num, ref_type) REFERENCES accounts
Options Clauses
The Options clauses of the CREATE TABLE statement let you specify storage locations, extent size, locking modes, and user-defined access methods. Options:
Notes:
1  Dynamic Server only
2  Informix extension
3  See page 2-188
4  See page 2-203
5  See page 2-202
In Dynamic Server, the WITH CRDCOLS option adds two shadow columns, cdrserver and cdrtime, that Enterprise Replication uses. These columns have the following characteristics:
v They do not appear in DBAccess when you ask for information about the columns of the table.
v They are not included in the number of columns (ncols) in the systables system catalog table entry for tablename.

To view the contents of cdrserver and cdrtime, explicitly specify the columns in the projection list of a SELECT statement, as the following example shows:
SELECT cdrserver, cdrtime FROM tablename
For more information about how to use this option, refer to the IBM Informix Dynamic Server Enterprise Replication Guide.
Storage Options
Use these options to specify the storage location, distribution scheme, and extent size for the table. This is an extension to the ANSI/ISO standard for SQL syntax.

Storage Options:

   [ IN { dbspace | dbslice | extspace } ]
   [ FRAGMENT BY Clause ]
   [ EXTENT SIZE options ]
Notes:
2  Dynamic Server only
3  See page 2-190
4  See page 2-199
5  See page 2-201

Element: dbslice
   Description: Dbslice to store the table
   Restrictions: Must already exist
   Syntax: Identifier, p. 5-22

Element: dbspace
   Description: Dbspace to store the table
   Restrictions: Must already exist
   Syntax: Identifier, p. 5-22

Element: extspace
   Description: Name declared in the onspaces command to a storage area outside the database server
   Restrictions: Must already exist
   Syntax: See documentation for your access method.
If you use the USING Access-Method Clause (IDS) on page 2-202 to specify an access method, that method must support the storage space. You can specify a dbspace for the table that is different from the storage location for the database, or fragment the table among dbspaces, or among partitions of one or more dbspaces. If you specify neither an IN clause nor a fragmentation scheme, the new table resides in the same dbspace where the current database resides.

In Dynamic Server, you can use the PUT clause to specify storage options for smart large objects. For more information, see PUT Clause (IDS) on page 2-199.

Note: If your table contains simple large objects (TEXT or BYTE), you can specify a separate blobspace for each object. For information on storing simple large objects, refer to Large-Object Data Types on page 4-25.
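For example (the dbspace and blobspace names here are hypothetical), the following statement places the table in a specific dbspace and stores a TEXT column in its own blobspace:

CREATE TABLE catalog_items (
   item_num   SERIAL,
   item_descr TEXT IN blobsp1,
   list_price MONEY)
   IN dbsp_items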
For more information about how to store and manage your tables in separate dbspaces, see your IBM Informix Administrator's Guide.

Storing Data in a Partition of a dbspace (IDS): Besides the option of storing the table (or a fragment of it) in a dbspace, Dynamic Server supports storing fragments of a table in a subset of a dbspace, called a partition. Unless you explicitly declare names for the fragments in the PARTITION BY clause, each fragment, by default, has the same name as the dbspace where it resides. This includes all fragmented tables and indexes migrated from earlier releases of Dynamic Server.
You can store fragments of the same table in multiple partitions of the same dbspace, but each name that you declare after the PARTITION keyword must be unique among partitions of that dbspace. The PARTITION keyword is required when you store more than one fragment of the table in the same dbspace. You can also use the PARTITION keyword to declare a more meaningful name for a dbspace that has only one partition.

Storing Data in a dbslice (XPS): If you are using Extended Parallel Server, the IN dbslice clause allows you to fragment a table across a group of dbspaces that share the same naming convention. The database server fragments the table by round-robin in the dbspaces that make up the dbslice at the time the table is created. To fragment a table across a dbslice, you can use either the IN dbslice syntax or the FRAGMENT BY ROUND ROBIN IN dbslice syntax.

Storing Data in an extspace (IDS): In general, use the extspace storage option in conjunction with the USING Access-Method Clause (IDS) on page 2-202. For more information, refer to the documentation of your access method.
FRAGMENT BY Clause
Use the FRAGMENT BY clause to create fragmented tables and to specify their distribution scheme. (The keywords PARTITION BY are a synonym for FRAGMENT BY in Dynamic Server.) FRAGMENT BY Clause for Tables:
   { FRAGMENT | PARTITION (1) } BY
      EXPRESSION [ USING opclass (1) ]  { EXPRESSION Clause | Fragment List (1) }
    | HASH ( column, ... ) (2)  IN { dbspace, ... | dbslice }
    | HYBRID ( column, ... ) (2)  RANGE Method Clause (3)
Fragment List:
   { expr IN dbspace | PARTITION part ( expr ) IN dbspace (1) }  [ , ... ]

EXPRESSION Clause:

   EXPRESSION expr IN { dbspace | dbslice | ( dbspace, ... ) }
      [ , expr IN { dbspace | dbslice | ( dbspace, ... ) } ] ...
      [ , REMAINDER IN { dbspace | dbslice | ( dbspace, ... ) } ]
Notes:
1  Dynamic Server only
2  Extended Parallel Server only
3  See page 2-195
Element: column
   Description: Column to which to apply the fragmentation strategy
   Restrictions: Must be a column within the table
   Syntax: Identifier, p. 5-22

Element: dbslice, dbspace
   Description: Dbslice or dbspace to store the table fragment
   Restrictions: The dbslice must be defined. You can specify no more than 2,048 dbspaces.
   Syntax: Identifier, p. 5-22

Element: expr
   Description: Expression that defines a table fragment using a range, hash, or arbitrary rule
   Restrictions: Columns can be from the current table only, and data values can be from only a single row. Value returned must be Boolean (true or false).
   Syntax: Expression, p. 4-34

Element: opclass
   Description: Operator class for the fragmentation strategy; there is no default operator class
   Restrictions: Must be defined and must be associated with a B-tree index
   Syntax: Identifier, p. 5-22

Element: part
   Description: Name that you declare here for a partition of a dbspace
   Restrictions: Required for any partition in the same dbspace as another partition of the same table
   Syntax: Identifier, p. 5-22
When you fragment a table, the IN keyword is followed by the name of the storage space where a table fragment is to be stored.
The WITH ROWIDS clause adds a hidden column that uniquely identifies each row and that the database server can use to find the physical location of the row. Each row requires an additional four bytes to store the rowid.

Important: This is a deprecated feature. Use primary keys as an access method rather than the rowid column. You cannot use the WITH ROWIDS clause with typed tables.
Fragmenting by EXPRESSION
In an expression-based distribution scheme, each fragment expression in a rule specifies a storage space. Each fragment expression in the rule isolates data and aids the database server in searching for rows. To fragment a table by expression, specify one of the following rules:

v Range rule
   A range rule specifies fragment expressions that use a range to specify which rows are placed in a fragment, as the next example shows:
FRAGMENT BY EXPRESSION
   c1 < 100 IN dbsp1,
   c1 >= 100 AND c1 < 200 IN dbsp2,
   c1 >= 200 IN dbsp3
v Arbitrary rule
   An arbitrary rule specifies fragment expressions based on a predefined SQL expression that typically uses OR clauses to group data, as the following example shows:
FRAGMENT BY EXPRESSION
   zip_num = 95228 OR zip_num = 95443 IN dbsp2,
   zip_num = 91120 OR zip_num = 92310 IN dbsp4,
   REMAINDER IN dbsp5
Warning: See the note about the DBCENTURY environment variable and date values in fragment expressions in the section Logging Options on page 2-172.

The USING Opclass Option (IDS): With the USING operator class option, you can specify a nondefault operator class for the fragmentation strategy. The secondary-access method of the chosen operator class must have a B-tree index structure.
In the following example, the abs_btree_ops operator class specifies several user-defined strategy functions that order integers based on their absolute values:
CREATE OPCLASS abs_btree_ops FOR btree STRATEGIES (abs_lt, abs_lte, abs_eq, abs_gte, abs_gt) SUPPORT (abs_cmp)
For the fragmentation strategy, you can specify the abs_btree_ops operator class in the USING clause and use its strategy functions to fragment the table, as follows:
FRAGMENT BY EXPRESSION USING abs_btree_ops
   (abs_lt(x,345)) IN dbsp1,
   (abs_gte(x,345) AND abs_lte(x,500)) IN dbsp2,
   (abs_gt(x,500)) IN dbsp3
For information on how to create and extend an operator class, see IBM Informix User-Defined Routines and Data Types Developer's Guide. User-Defined Functions in Fragment Expressions (IDS): For rows that include user-defined data types, you can use comparison conditions or user-defined functions to define the range rules. In the following example, comparison conditions define the range rules for the long1 column, which contains an opaque data type:
FRAGMENT BY EXPRESSION
   long1 < 3001 IN dbsp1,
   long1 BETWEEN 3001 AND 6000 IN dbsp2,
   long1 > 6000 IN dbsp3
An implicit, user-defined cast converts 3001 and 6000 to the opaque type. Alternatively, you can use user-defined functions to define the range rules for the opaque data type of the long1 column:
FRAGMENT BY EXPRESSION
   (lessthan(long1,3001)) IN dbsp1,
   (greaterthanorequal(long1,3001) AND lessthanorequal(long1,6000)) IN dbsp2,
   (greaterthan(long1,6000)) IN dbsp3
Explicit user-defined functions require parentheses around the entire fragment expression before the IN clause, as the previous example shows. User-defined functions in a fragment expression can be written in SPL or in the C or Java language. These functions must satisfy four requirements:
v They must evaluate to a Boolean value.
v They must be nonvariant.
v They must reside within the same database as the table.
v They must not have OUT or INOUT parameters.

For information on how to create UDRs for fragment expressions, refer to IBM Informix User-Defined Routines and Data Types Developer's Guide.

Using the REMAINDER Keyword: Use the REMAINDER keyword to specify the storage space in which to store valid values that fall outside the specified expression or expressions. If you do not specify a remainder, and a row is inserted or updated with values that do not correspond to any fragment definition, the database server returns an error.
The following example uses an arbitrary rule to define five fragments for specific values of the c1 column, and a sixth fragment for all other values:
CREATE TABLE T1 (c1 INT)
   FRAGMENT BY EXPRESSION
   PARTITION PART_1 (c1 = 10) IN dbs1,
   PARTITION PART_2 (c1 = 20) IN dbs1,
   PARTITION PART_3 (c1 = 30) IN dbs1,
   PARTITION PART_4 (c1 = 40) IN dbs2,
   PARTITION PART_5 (c1 = 50) IN dbs2,
   PARTITION PART_6 REMAINDER IN dbs2;
Here the first three fragments are stored in partitions of the dbs1 dbspace, and the other fragments, including the remainder, are stored in partitions of the dbs2 dbspace. Explicit fragment names are required in this example, because each dbspace has multiple partitions.
You can also specify a dbslice. When you specify a dbslice, the database server fragments the table across the dbspaces that make up the dbslice.

Serial Columns in HASH-Distribution Schemes: If you base table fragmentation on a SERIAL or SERIAL8 column, only a hash-distribution scheme is valid. In addition, the serial column must be the only column in the hashing key. (These restrictions apply only to table distributions. Fragmentation schemes for indexes that are based on SERIAL or SERIAL8 columns are not subject to these restrictions.) The following excerpt is from a CREATE TABLE statement:
CREATE TABLE customer ( cust_id serial, . . . ) FRAGMENT BY HASH (cust_id) IN customer1_spc, customer2_spc
You might notice a difference between serial-column values in fragmented and nonfragmented tables. The database server assigns serial values round-robin across fragments, so a fragment might contain values from noncontiguous ranges. For example, if there are two fragments, the first serial value is placed in the first fragment, the second serial value is placed in the second fragment, the third value is placed in the first fragment, and so on.
For more information on an expression-based distribution scheme, see Fragmenting by EXPRESSION on page 2-192.
RANGE Method Clause (XPS):

   RANGE ( column Range Definition ) IN { dbspace, ... | dbslice } [ REMAINDER IN dbspace ]
 | HYBRID ( RANGE ( column Range Definition ) ) RANGE ( column [ Second Range Specification ] )
      IN { dbspace, ... | dbslice } [ REMAINDER IN dbspace ]
For hybrid strategies with two range definitions, the second column must have a different column name from the first. For hybrid strategies with exactly one range definition, both occurrences of column must specify the same column. If you list more than one dbslice, including a remainder dbslice, each dbslice must contain the same number of dbspaces. Unless you are specifying the dbspace in the REMAINDER option, you must specify at least two dbspaces.
Range Definition
Use the range definition to specify the minimum and maximum values of the entire range.

Range Definition:

   [ MIN min_val ]  MAX max_val
Element: max_val
   Description: Maximum value in the range
   Restrictions: Must be an INT or SMALLINT greater than or equal to min_val, if min_val is supplied
   Syntax: Literal Number, p. 4-137

Element: min_val
   Description: Minimum value in the range; the default is 0
   Restrictions: Must be an INT or SMALLINT less than or equal to max_val
   Syntax: Literal Number, p. 4-137
You do not need to specify a minimum value. The minimum and maximum values define the exact range of values to allocate for each storage space.
Range IN Clause
Use the IN clause to specify the storage spaces in which to distribute the data. Range IN Clause:
   IN { dbslice, ... | ( dbspace, ... ) }
      [ REMAINDER IN { dbslice | ( dbspace, ... ) } ]

Element: dbslice
   Description: Dbslice that contains the dbspaces to store table fragments

Element: dbspace
   Description: Dbspace to store the table fragment
If you specify more than one dbslice, including a remainder dbslice, each dbslice must contain the same number of dbspaces. Unless you are specifying the dbspace in the REMAINDER option, the minimum number of dbspaces that you can specify is two. The maximum number of dbspaces that you can specify is 2,048. When you use a range-fragmentation method, the number of integer values between the minimum and maximum specified values must be equal to or greater than the number of storage spaces specified, so that the database server can allocate non-overlapping contiguous ranges across the dbspaces. For example, the following code returns an error, because the allocations for the range cannot be distributed across all specified dbspaces:
CREATE TABLE Tab1 (Col1 INT...)
   FRAGMENT BY RANGE (Col1 MIN 5 MAX 7)
   IN db1, db2, db3, db4, db5, db6    -- returns an error
The error for this example occurs because the specified range contains three values (5, 6, and 7), but six dbspaces are specified; three values cannot be distributed across six dbspaces.
If you do not specify a remainder and a row is inserted or updated such that it no longer belongs to any storage space, the database server returns an error.
Restrictions
If you fragment a table with range fragmentation, you cannot perform the following operations on the table after it is created:
v You cannot change the fragmentation strategy (ALTER FRAGMENT).
v You cannot rename the columns of the table (RENAME COLUMN).
v You cannot alter the table in any way except to change the table type or to change the lock mode. That is, the TYPE options and the Lock Mode clause are the only valid options of ALTER TABLE for a table that has range fragmentation.
Examples
The following examples illustrate range fragmentation in its simple and hybrid forms.

Simple Range-Fragmentation Strategy: The following example shows a simple range-fragmentation strategy:
CREATE TABLE Tab1 (Col1 INT...)
   FRAGMENT BY RANGE (Col1 MIN 100 MAX 200)
   IN db1, db2, db3, db4
In this example, the database server fragments the table according to the following allocations.
Storage Space   Holds Values
db1             100 <= Col1 < 125
db2             125 <= Col1 < 150
db3             150 <= Col1 < 175
db4             175 <= Col1 < 200
The previous table shows allocations that can also be made with an expression-based fragmentation scheme:
CREATE TABLE ... FRAGMENT BY EXPRESSION
   Col1 >= 100 AND Col1 < 125 IN db1,
   Col1 >= 125 AND Col1 < 150 IN db2,
   Col1 >= 150 AND Col1 < 175 IN db3,
   Col1 >= 175 AND Col1 < 200 IN db4
As the examples show, the range-fragmentation example requires much less coding to achieve the same results. The same is true for the hybrid-range fragmentation compared to hybrid-expression fragmentation methods.

Column-Major-Range Allocation: The following example demonstrates the syntax for column-major-range allocation, a hybrid-range fragmentation strategy:
CREATE TABLE tab2 (col2 INT, colx char (5)) FRAGMENT BY HYBRID ( RANGE (col2 MIN 100 MAX 220)) RANGE (col2) IN dbsl1, dbsl2, dbsl3
This type of fragmentation creates a distribution across dbslices and provides a further subdivision within each dbslice (across the dbspaces in the dbslice) such that when a query specifies a value for col2 (for example, WHERE col2 = 127), the
query uniquely identifies a dbspace. To take advantage of the additional subdivision, you must specify more than one dbslice.

Row-Major-Range Allocation: The following example demonstrates the syntax for row-major-range allocation, a hybrid-range fragmentation strategy:
CREATE TABLE tab3 (col3 INT, colx char (5)) FRAGMENT BY HYBRID ( RANGE (col3) ) RANGE (col3 MIN 100 MAX 220) IN dbsl1, dbsl2, dbsl3
This fragmentation strategy is the counterpart to the column-major-range allocation. The advantages and restrictions are equivalent.

Independent-Range Allocation: The following example demonstrates the syntax for an independent-range allocation, a hybrid-range fragmentation strategy:
CREATE TABLE tab4 (col4 INT, colx char (5), col5 INT) FRAGMENT BY HYBRID ( RANGE (col4 MIN 100 MAX 200) ) RANGE (col5 MIN 500 MAX 800) IN dbsl1, dbsl2, dbsl3
In this type of range fragmentation, the two columns are independent, and therefore the range allocations are independent. The range allocation for a dbspace on both columns is the conjunctive combination of the range allocation on each of the two independent columns. This type of fragmentation does not provide subdivisions within either column. With this type of fragmentation, a query that specifies values for both columns (such as WHERE col4 = 128 AND col5 = 650) uniquely identifies the dbspace at the intersection of the two dbslices identified by the columns independently.
PUT Clause (IDS)

Element: column
   Description: Column to store in sbspace
   Restrictions: Must contain a BLOB, CLOB, user-defined, or complex data type
   Syntax: Identifier, p. 5-22

Element: kbytes
   Description: Number of kilobytes to allocate for the extent size
   Restrictions: Must be an integer value
   Syntax: Literal Number, p. 4-137

Element: sbspace
   Description: Name of an area of storage
   Restrictions: Must exist
   Syntax: Identifier, p. 5-22
The column cannot be in the form column.field. That is, the smart large object that you are storing cannot be one field of a row type. A smart large object is contained in a single sbspace. The SBSPACENAME configuration parameter specifies the system default in which smart large objects are created, unless you specify another area. Specifying more than one sbspace distributes the smart large objects in a round-robin distribution scheme, so that the number of smart large objects in each space is approximately equal. The syscolattribs system catalog table contains one row for each sbspace that you specify in the PUT clause.

When you fragment smart large objects across different sbspaces, you can work with smaller sbspaces. If you limit the size of an sbspace, backup and archive operations can perform more quickly. For an example that uses the PUT clause, see Alternative to Full Logging on page 2-201.

Six storage options are available to store BLOB and CLOB data:

EXTENT SIZE
   Specifies how many kilobytes in a smart-large-object extent. The database server might round the EXTENT SIZE up so that the extents are multiples of the sbspace page size.

HIGH INTEG
   Produces user-data pages that contain a page header and a page trailer to detect incomplete writes and data corruption. This is the default data-integrity behavior.

KEEP ACCESS TIME
   Records, in the smart-large-object metadata, the system time when the smart large object was last read or written.

LOG
   Follows the logging procedure used with the current database log for the corresponding smart large object. This option can generate large amounts of log traffic and increase the risk of filling the logical log. (See also Alternative to Full Logging on page 2-201.)

NO KEEP ACCESS TIME
   Does not record the system time when the smart large object was last read or written. This provides better performance than the KEEP ACCESS TIME option and is the default tracking behavior.

NO LOG
   Turns off logging. This option is the default behavior.
If a user-defined or complex data type contains more than one large object, the specified large-object storage options apply to all large objects in the type unless the storage options are overridden when the large object is created. Important: The PUT clause does not affect the storage of simple-large-object data types (BYTE and TEXT). For information on how to store BYTE and TEXT data, see Large-Object Data Types on page 4-25.
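A sketch of the PUT clause (the table, column, and sbspace names are hypothetical) might look like this:

CREATE TABLE photo_album (
   photo_id SERIAL,
   photo    BLOB,
   caption  CLOB)
   PUT photo IN (sbsp1) (EXTENT SIZE 200, NO LOG),
       caption IN (sbsp1, sbsp2) (KEEP ACCESS TIME)

Here the caption values are distributed round-robin across the two named sbspaces, as described above.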
EXTENT SIZE Options

Element: first_kilobytes
   Description: Length in kilobytes of the first extent for the table; default is 16
   Restrictions: Must return a positive number; maximum is the chunk size

Element: next_kilobytes
   Description: Length in kilobytes of each subsequent extent; default is 16
   Restrictions: Must return a positive number; maximum is the chunk size
The minimum length of first_kilobytes (and of next_kilobytes) is four times the disk-page size on your system. For example, if you have a 2-kilobyte page system, the minimum length is 8 kilobytes. The next example specifies a first extent of 20 kilobytes and allows the rest of the extents to use the default size:
CREATE TABLE emp_info (
   f_name CHAR(20),
   l_name CHAR(20),
   position CHAR(20),
   start_date DATETIME YEAR TO DAY,
   comments VARCHAR(255)
   ) EXTENT SIZE 20
If you need to revise the extent sizes of a table, you can modify the extent and next-extent sizes in the generated schema files of an unloaded table. For example, to make a database more efficient, you might unload a table, modify the extent sizes in the schema files, and then create and load a new table. For information about how to optimize extents, see your IBM Informix Administrator's Guide.
USING Access-Method Clause (IDS):

   USING access_method [ ( config_keyword = config_value [ , ... ] ) ]

Element: config_keyword
   Description: Configuration keyword associated with the specified access method

Element: config_value
   Description: Value of the specified configuration keyword
A primary-access method is a set of routines to perform DDL and DML operations, such as create, drop, insert, delete, update, and scan, to make a table available to the database server. Dynamic Server provides a built-in primary-access method. You store and manage a virtual table either outside of the database server in an extspace or inside the database server in an sbspace. (See Storage Options on page 2-188.) You can access a virtual table with SQL statements. Access to a virtual table requires a user-defined primary-access method. DataBlade modules can provide other primary-access methods to access virtual tables. When you access a virtual table, the database server calls the routines associated with that access method rather than the built-in table routines. For more information on these other primary-access methods, refer to your access-method documentation.
You can retrieve a list of configuration values for an access method from a table descriptor (mi_am_table_desc) using the MI_TAB_AMPARAM macro. Not all keywords require configuration values. The access method must already exist. For example, if an access method called textfile exists, you can specify it with the following syntax:
CREATE TABLE mybook (... ) IN myextspace USING textfile (DELIMITER=:)
You can subsequently change the lock mode of the table with the ALTER TABLE statement.
PAGE
   Obtains and releases one lock on a whole page of rows. This is the default locking granularity. Page-level locking is especially useful when you know that the rows are grouped into pages in the same order that you are using to process all the rows. For example, if you are processing the contents of a table in the same order as its cluster index, page locking is appropriate.

ROW
   Obtains and releases one lock per row. Row-level locking provides the highest level of concurrency. If you are using many rows at one time, however, the lock-management overhead can become significant. You might also exceed the maximum number of locks available, depending on the configuration of your database server.

TABLE (XPS)
   Places a lock on the entire table. This type of lock reduces update concurrency compared to row and page locks. A table lock reduces the lock-management overhead for the table. With table locking, multiple read-only transactions can still access the table.
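For example, the following statement (the names are illustrative) creates a table that uses row-level locking:

CREATE TABLE order_items (
   order_num INTEGER,
   item_num  SMALLINT,
   quantity  SMALLINT)
   LOCK MODE ROW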
You can set the IFX_DEF_TABLE_LOCKMODE environment variable to specify the lock mode of new tables during your current session.

v Database server (all sessions on the database server)
   If you are a DBA, you can set the DEF_TABLE_LOCKMODE configuration parameter in the ONCONFIG file to determine the lock mode of all new tables in the database server. If you are not a DBA, you can set the IFX_DEF_TABLE_LOCKMODE environment variable for the database server, before you run oninit, to specify the lock mode of all new tables of the database server.

The LOCK MODE setting in a CREATE TABLE statement takes precedence over the settings of the IFX_DEF_TABLE_LOCKMODE environment variable and the DEF_TABLE_LOCKMODE configuration parameter. If CREATE TABLE specifies no lock mode setting, the default mode depends on the setting of the IFX_DEF_TABLE_LOCKMODE environment variable or the DEF_TABLE_LOCKMODE configuration parameter. For information about IFX_DEF_TABLE_LOCKMODE, refer to the IBM Informix Guide to SQL: Reference. For information about the DEF_TABLE_LOCKMODE configuration parameter, refer to the IBM Informix Administrator's Reference.
If you use the UNDER clause, the row_type must be derived from the ROW type of the supertable. A type hierarchy must already exist in which the named ROW type of the new table is a subtype of the named ROW type of the supertable.
Jagged rows are any set of rows from a table hierarchy in which the number of columns is not fixed among the typed tables within the hierarchy. Some APIs, such as ESQL/C and JDBC, do not support queries that return jagged rows. When you create a typed table, CREATE TABLE cannot specify names for its columns, because the column names were declared when you created the ROW type. Columns of a typed table correspond to the fields of the named ROW type. The ALTER TABLE statement cannot add additional columns to a typed table. For example, suppose you create a named ROW type, student_t, as follows:
CREATE ROW TYPE student_t (name VARCHAR(30), average REAL, birthdate DATETIME YEAR TO DAY)
If a table is assigned the type student_t, the table is a typed table whose columns are of the same name and data type, and in the same order, as the fields of the type student_t. For example, the following CREATE TABLE statement creates a typed table named students whose type is student_t:
CREATE TABLE students OF TYPE student_t
For more information about ROW types, refer to the CREATE ROW TYPE statement on page 2-158.
When you use the UNDER clause, the subtable inherits these properties:
v All columns in the supertable
v All constraints defined on the supertable
v All indexes defined on the supertable
v All triggers defined on the supertable
v All referential integrity constraints
v The access method
v The storage option specification (including fragmentation strategy)
If a subtable defines no fragments, but its supertable has fragments defined, then the subtable inherits the fragments of the supertable.
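For example, a minimal sketch of a type and table hierarchy (all type, table, and column names are hypothetical):

CREATE ROW TYPE person_t (name VARCHAR(30), birthdate DATE);
CREATE ROW TYPE employee_t (salary MONEY) UNDER person_t;

-- The typed table person is the supertable
CREATE TABLE person OF TYPE person_t;
-- The subtable employee inherits columns, constraints, indexes,
-- and other properties from person
CREATE TABLE employee OF TYPE employee_t UNDER person;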
Tip: Any heritable attributes that are added to a supertable after subtables have been created are automatically inherited by existing subtables. You do not need to add all heritable attributes to a supertable before you create its subtables.
Restrictions on Table Hierarchies: Inheritance occurs in one direction only, namely from supertable to subtable. Properties of subtables are not inherited by supertables. The section System Catalog Information on page 2-207 lists the inherited database objects for which the system catalog maintains no information regarding subtables. No two tables in a table hierarchy can have the same data type. For example, the final line of the next code example is invalid, because the tables tab2 and tab3 cannot have the same row type (rowtype2):
create row type rowtype1 (...);
create row type rowtype2 (...) under rowtype1;
create table tab1 of type rowtype1;
create table tab2 of type rowtype2 under tab1;
--Invalid --> create table tab3 of type rowtype2 under tab1;
Privileges on Tables
The privileges on a table describe both who can access the information in the table and who can create new tables. For more information about access privileges, see GRANT on page 2-371. In an ANSI-compliant database, no default table-level privileges exist. You must grant these privileges explicitly. Setting the environment variable NODEFDAC to yes prevents default privileges from being granted to PUBLIC on new tables in a database that is not ANSI compliant, as described in the IBM Informix Guide to SQL: Reference. For more information about privileges, see the IBM Informix Guide to SQL: Tutorial.
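As a hedged illustration (the table and user names are hypothetical), explicit table-level grants might look like this:

-- Remove the default PUBLIC privileges, then grant selected privileges
REVOKE ALL ON customer FROM PUBLIC;
GRANT SELECT, UPDATE ON customer TO mary;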
table. Instead, use the CREATE INDEX statement to create a unique index with the desired fragmentation strategy. Then use the ALTER TABLE statement to add the constraint. The new constraint uses the previously defined index. Important: In a database without logging, detached checking is the only kind of constraint checking available. Detached checking means that constraint checking is performed on a row-by-row basis.
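For example, a minimal sketch of this approach (the table, column, index, and dbspace names are hypothetical):

-- Create the unique index with the desired fragmentation strategy
CREATE UNIQUE INDEX u_cust ON customer (customer_num)
   FRAGMENT BY EXPRESSION
      customer_num < 10000 IN dbs1,
      customer_num >= 10000 IN dbs2;

-- The new constraint uses the existing index
ALTER TABLE customer ADD CONSTRAINT PRIMARY KEY (customer_num);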
Related Information
Related statements: ALTER TABLE, CREATE INDEX, CREATE DATABASE, CREATE EXTERNAL TABLE (XPS), CREATE ROW TYPE, CREATE Temporary TABLE, DROP TABLE, SET Database Object Mode, and SET Transaction Mode For Extended Parallel Server, see also SET Default Table Type and SET Default Table Space. For discussions of database and table creation, including discussions on data types, data-integrity constraints, and tables in hierarchies, see the IBM Informix Database Design and Implementation Guide. For information about the system catalog tables that store information about objects in the database, see the IBM Informix Guide to SQL: Reference. For information about the syschunks table (in the sysmaster database) that contains information about the location of smart large objects, see your IBM Informix Administrator's Reference.
CREATE TEMP TABLE
Syntax
CREATE TEMP TABLE table Table-Definition

Table Definition:
A parenthesized, comma-separated list of Column Definitions (see page 2-209) and, optionally, Multiple-Column Constraints (see page 2-211), followed by optional Options (see page 2-212) and the optional WITH NO LOG keywords.
Element: table. Description: Name declared here for a temporary table. Restrictions: Must be unique among the names of temporary tables in the same session. Syntax: Identifier, p. 5-22.
Usage
The CREATE TEMP TABLE statement is a special case of the CREATE Temporary TABLE statement. The CREATE Temporary TABLE statement can also create a SCRATCH table in an Extended Parallel Server database. For the complete syntax and semantics of the CREATE TEMP TABLE statement, see CREATE Temporary TABLE on page 2-209.
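For example, a minimal sketch (the table and column names are hypothetical):

-- Create a non-logging temporary table for the current session
CREATE TEMP TABLE tmp_orders (order_num INT, order_date DATE) WITH NO LOG;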
CREATE Temporary TABLE
Syntax
CREATE TEMP TABLE table Table-Definition
CREATE SCRATCH TABLE table Table-Definition (Extended Parallel Server only)

Table Definition:
A parenthesized, comma-separated list of Column Definitions (see page 2-209) and, optionally, Multiple-Column Constraint Format specifications (see page 2-211), followed by optional Options (see page 2-212).

Element: table. Description: Name declared here for a table. Restrictions: Must be unique in session. Syntax: Database Object Name, p. 5-17.
Usage
You must have the Connect privilege on the database to create a temporary table. The temporary table is visible only to the user who created it. In DB-Access, using the CREATE Temporary TABLE statement outside the CREATE SCHEMA statement generates warnings if you set DBANSIWARN. In ESQL/C, the CREATE Temporary TABLE statement generates warnings if you use the -ansi flag or set the DBANSIWARN environment variable.
Column Definition
Use the Column Definition segment of CREATE Temporary TABLE to list the name, data type, default value, and constraints of a single column. Column Definition:
column, its Data Type, an optional DEFAULT Clause, and optional single-column constraints.
This portion of the CREATE Temporary TABLE statement is almost identical to the corresponding section in the CREATE TABLE statement. The difference is that fewer types of constraints are allowed in a temporary table.
This is a subset of the syntax of Single-Column Constraint Format on page 2-176 that the CREATE TABLE statement supports. You can find detailed discussions of specific constraints in these sections:
CHECK: CHECK Clause on page 2-181
DISTINCT: Using the UNIQUE or DISTINCT Constraints on page 2-177
NOT NULL: Using the NOT NULL Constraint on page 2-177
PRIMARY KEY: Using the PRIMARY KEY Constraint on page 2-178
UNIQUE: Using the UNIQUE or DISTINCT Constraints on page 2-177
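For example, a hedged sketch of a temporary table that uses several of these single-column constraints (all table and column names are hypothetical):

CREATE TEMP TABLE tmp_cust
   (cust_num  INT PRIMARY KEY,
    name      CHAR(30) NOT NULL,
    balance   DECIMAL(8,2) CHECK (balance >= 0)
   ) WITH NO LOG;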
Element: column. Description: Name of the column or columns on which the constraint is placed.
This is a subset of the syntax of Multiple-Column Constraint Format on page 2-184 that the CREATE TABLE statement supports. This alternative to the single-column constraint segment of CREATE Temporary TABLE can associate multiple columns with a constraint. Constraints that you define on temporary tables are always enabled. The following table indicates where you can find detailed discussions of specific constraints.
Constraint: For more information, see / For an example, see:
CHECK: CHECK Clause on page 2-181 / Defining Check Constraints Across Columns on page 2-186
DISTINCT: Using the UNIQUE or DISTINCT Constraints on page 2-177 / Examples of the Multiple-Column Constraint Format on page 2-186
PRIMARY KEY: Using the PRIMARY KEY Constraint on page 2-178 / Defining Composite Primary and Foreign Keys on page 2-187
UNIQUE: Using the UNIQUE or DISTINCT Constraints on page 2-177 / Examples of the Multiple-Column Constraint Format on page 2-186
See also the section Differences Between a Unique Constraint and a Unique Index on page 2-178.
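A hedged sketch of a temporary table that uses the multiple-column constraint format (all names are hypothetical):

CREATE TEMP TABLE tmp_items
   (order_num INT,
    item_num  INT,
    qty       INT,
    PRIMARY KEY (order_num, item_num),
    CHECK (qty > 0)
   ) WITH NO LOG;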
Options
(Diagram notes: Dynamic Server only; Informix extension; see pages 2-213, 2-203, and 2-202.)
This is a subset of the syntax of Options Clauses on page 2-187 that the CREATE TABLE statement supports.
Storage Options
Use the storage-option segment of the CREATE Temporary Table statement to specify the storage location and distribution scheme for the table. This is an extension to the ANSI/ISO standard for SQL syntax. Unlike the corresponding storage-options segment of the CREATE TABLE statement, no EXTENT SIZE options are supported for temporary tables. Storage Options:
An IN clause (IN dbspace, IN dbslice, or IN extspace), optionally followed by a FRAGMENT BY Clause.
Notes: 1 Extended Parallel Server only. 2 Dynamic Server only. 3 See page 2-190. 4 See page 2-199.
Element: dbspace. Description: Dbspace in which to store the table. Restrictions: Must already exist. Syntax: Identifier, p. 5-22.
Element: dbslice. Description: Name of the dbslice in which to store the table. Restrictions: Must already exist. Syntax: Identifier, p. 5-22.
Element: extspace. Description: Name that onspaces assigned to a storage area outside the database server. Restrictions: Must already exist. Syntax: See documentation for access method.
To create a fragmented, unique index on a temporary table, you must specify an explicit expression-based distribution scheme for a temporary table in the CREATE Temporary TABLE statement. If you are using Extended Parallel Server, you can fragment a temporary table across multiple dbspaces that different coservers manage.
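For example, a hedged sketch (the table, column, index, and dbspace names are hypothetical):

CREATE TEMP TABLE tmp_sales
   (region INT,
    amount MONEY)
   FRAGMENT BY EXPRESSION
      region < 100 IN dbs1,
      region >= 100 IN dbs2;

-- A unique index on the fragmented temporary table
CREATE UNIQUE INDEX u_tmp_sales ON tmp_sales (region);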
In Extended Parallel Server, to re-create this example, use the EXECUTE PROCEDURE statement instead of the EXECUTE FUNCTION statement.
Related Information
Related statements: ALTER TABLE, CREATE TABLE, CREATE DATABASE, DROP TABLE, and SELECT If you use Extended Parallel Server, see also SET Default Table Type and SET Default Table Space. For additional information about the DBANSIWARN and DBSPACETEMP environment variables, refer to the IBM Informix Guide to SQL: Reference. For additional information about the ONCONFIG parameter DBSPACETEMP, see your IBM Informix Administrator's Guide.
CREATE TRIGGER
Use the CREATE TRIGGER statement to define a trigger on a table. This is an extension to the ANSI/ISO standard for SQL. For Dynamic Server, you can also use CREATE TRIGGER to define an INSTEAD OF trigger on a view.
Syntax
CREATE TRIGGER, an optional Owner Name qualifier (see page 5-43), and trigger, followed by the trigger event and actions (see page 2-217) or, for Dynamic Server only, a trigger on a view (see page 2-243), and an optional ENABLED or DISABLED keyword.

Element: trigger. Description: Name that you declare here for a new trigger. Restrictions: Must be unique among the names of triggers in the current database. Syntax: Identifier, p. 5-22.
Usage
A trigger, unless disabled, automatically executes a specified set of SQL statements, called the trigger action, when a specified trigger event occurs. The trigger event that initiates the trigger action can be an INSERT, DELETE, UPDATE, or (for triggers on IDS tables only) a SELECT statement. The event must specify the table or view on which the trigger is defined. (SELECT or UPDATE events for triggers on tables can also specify one or more columns.) You can use the CREATE TRIGGER statement in two distinct ways:
v You can define a trigger on a table in the current database.
v For Dynamic Server, you can also define an INSTEAD OF trigger on a view in the current database.
Any SQL statement that is an instance of the trigger event is called a triggering statement. When the event occurs, triggers defined on tables and triggers defined on views differ in whether the triggering statement is executed:
v For tables, the trigger event and the trigger action both execute.
v For views, only the trigger action executes, instead of the event.
The CREATE TRIGGER statement can support the integrity of data in the database by defining rules by which specified DML operations (the triggering events) cause the database server to take specified actions. The following sections describe the syntax elements. Sections whose names are followed by (IDS) describe features that are not supported by Extended Parallel Server.
Clause (page): Effect
Defining a Trigger Event and Actions (2-217): Associates triggered actions with an event
Trigger Modes (IDS) (2-219): Enables or disables the trigger
Insert Events and Delete Events (2-222): Defines Insert events and Delete events
Update Events (2-222): Defines Update events
Select Events (IDS) (2-223): Defines Select events
Action Clause (2-226): Defines triggered actions
REFERENCING Clause for Delete (2-228): Declares qualifier for deleted values
REFERENCING Clause for Insert (2-229): Declares qualifier for inserted values
REFERENCING Clause for Update (2-230): Declares qualifiers for old and new values
REFERENCING Clause for Select (IDS) (2-231): Declares qualifier for result set values
Correlated Table Action (2-231): Defines triggered actions
Triggered Action List (2-232): Defines triggered actions
INSTEAD OF Trigger on Views (IDS) (2-243): Defines a trigger on views
Action Clause of INSTEAD OF Triggers (IDS) (2-244): Triggered actions on views
UPDATE Subclauses:
The UPDATE subclauses consist of optional OLD and NEW declarations together with an Action Clause (see page 2-226) or a Correlated Table Action (see page 2-231).
Trigger on a View:
An INSERT, DELETE, or UPDATE event ON view (Dynamic Server only), with an optional REFERENCING clause (NEW AS new for Insert events, OLD AS old for Delete events, and OLD AS old and/or NEW AS new for Update events), followed by the INSTEAD OF triggered action (see page 2-243) and the FOR EACH ROW keywords.
Element: column. Description: The name of a column in the triggering table. Restrictions: Must exist. Syntax: Identifier, p. 5-22.
Element: correlation. Description: Old or new correlation name that you declare here. Restrictions: Must be unique in this trigger. Syntax: Identifier, p. 5-22.
Element: table or view. Description: Name or synonym of the triggering table or view; the name can include an owner qualifier.
The left-hand portion of the main diagram (including the table or view) defines the trigger event (sometimes called the triggering event). The rest of the diagram declares correlation names and defines the trigger action (sometimes called the triggered action). (For triggers on tables, see Action Clause on page 2-226 and Correlated Table Action on page 2-231. For INSTEAD OF triggers on views, see The Action Clause of INSTEAD OF Triggers (IDS) on page 2-244.)
Restrictions on Triggers
To create a trigger on a table or a view, you must own the table or view, or have DBA status. For the relationship between the privileges of the trigger owner and those of other users, see Privileges to Execute Trigger Actions on page 2-239. The table on which you create a trigger must exist in the current database. You cannot create a trigger on any of the following types of tables:
v A diagnostics table, a violations table, or a table in another database
v A raw table, a temporary table, or a system catalog table
Extended Parallel Server does not support triggers on static or scratch tables, and does not support light appends on operational tables on which a trigger is defined. (Light appends are described in the IBM Informix Administrator's Guide.) For additional restrictions on INSTEAD OF triggers on views, see Restrictions on INSTEAD OF Triggers on Views (IDS) on page 2-245. In DB-Access, if you want to define a trigger as part of a schema, place the CREATE TRIGGER statement inside a CREATE SCHEMA statement. If you are embedding the CREATE TRIGGER statement in an ESQL/C program, you cannot use a host variable in the trigger definition. You can use the DROP TRIGGER statement to remove an existing trigger. If you use DROP TABLE or DROP VIEW to remove triggering tables or views from the database, all triggers on those tables or views are also dropped.
You can create triggers on tables or on views in ENABLED or DISABLED mode.
v When a trigger is created in ENABLED mode, the database server executes the trigger action when the trigger event is encountered. (If you specify no mode when you create a trigger, ENABLED is the default mode.)
v When a trigger is created in DISABLED mode, the trigger event does not cause execution of the trigger action. In effect, the database server ignores the trigger and its action, even though the systriggers system catalog table maintains information about the disabled trigger.
You can also use the SET TRIGGERS option of the Database Object Mode statement to set an existing trigger to the ENABLED or DISABLED mode. After a DISABLED trigger is enabled by the SET TRIGGERS statement, the database server can execute the trigger action when the trigger event is encountered, but the trigger does not perform retroactively. The database server does not attempt to execute the trigger for rows that were selected, inserted, deleted, or updated while the trigger was disabled and before it was enabled.
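For example, a hedged sketch (the table, trigger, and procedure names are hypothetical) that creates a disabled trigger and later enables it:

-- The trigger is defined but ignored until it is enabled
CREATE TRIGGER audit_upd UPDATE ON customer
   AFTER (EXECUTE PROCEDURE log_update()) DISABLED;

-- Enable the existing trigger with SET TRIGGERS
SET TRIGGERS audit_upd ENABLED;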
Warning: Because the behavior of a trigger varies according to its ENABLED or DISABLED mode, be cautious about disabling a trigger. If disabling a trigger will eventually destroy the semantic integrity of the database, do not disable the trigger.
SPL variables are not valid in CREATE TRIGGER statements. An SPL routine called by a trigger cannot perform INSERT, DELETE, or UPDATE operations on any table or view that is not local to the current database. See also Rules for SPL Routines on page 2-238 for additional restrictions on SPL routines that are invoked in triggered actions.
Trigger Events
The trigger event specifies what DML statements can initiate the trigger. The event can be an INSERT, DELETE, or UPDATE operation on the table or view, or (for IDS tables only) a SELECT operation that manipulates the table. You must specify exactly one trigger event. Any SQL statement that is an instance of the trigger event is called a triggering statement. For each table, you can define only one trigger that is activated by an INSERT statement and only one trigger that is activated by a DELETE statement. The same table, however, can have multiple triggers that are activated by UPDATE or SELECT statements, provided that each trigger specifies a disjunct set of columns in defining the UPDATE or SELECT event on the table.
You cannot specify a DELETE event if the triggering table has a referential constraint that specifies ON DELETE CASCADE. You are responsible for guaranteeing that the triggering statement returns the same result with and without the trigger action on a table. See also the sections Action Clause on page 2-226 and Triggered-Action List on page 2-232. A triggering statement from an external database server can activate the trigger. As the following example shows, an Insert trigger on newtab, managed by dbserver1, is activated by an INSERT statement from dbserver2. The trigger executes as if the INSERT originated on dbserver1.
-- Trigger on stores_demo@dbserver1:newtab
CREATE TRIGGER ins_tr INSERT ON newtab
   REFERENCING new AS post_ins
   FOR EACH ROW (EXECUTE PROCEDURE nt_pct (post_ins.mc));

-- Triggering statement from dbserver2
INSERT INTO stores_demo@dbserver1:newtab
   SELECT item_num, order_num, quantity, stock_num,
          manu_code, total_price FROM items;
Dynamic Server also supports INSTEAD OF triggers on views, which are initiated when a triggering DML operation references the specified view. The INSTEAD OF trigger replaces the trigger event with the specified trigger action on a view, rather than execute the triggering INSERT, DELETE, or UPDATE operation. A view can have no more than one INSTEAD OF trigger defined for each type of event (INSERT, DELETE, or UPDATE). You can, however, define a trigger on one or more other views, each with its own INSTEAD OF trigger.
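For example, a hedged sketch of an INSTEAD OF trigger (the view, table, column, and trigger names are hypothetical):

CREATE VIEW cust_view (num, name) AS
   SELECT customer_num, cust_name FROM customer;

-- INSERTs into the view execute the trigger action instead
CREATE TRIGGER ins_cust_view INSTEAD OF INSERT ON cust_view
   REFERENCING NEW AS n
   FOR EACH ROW
   (INSERT INTO customer (customer_num, cust_name) VALUES (n.num, n.name));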
The execution time for a trigger event depends on the complexity of the trigger action and whether it initiates other triggers. The time increases as the number of cascading triggers increases. For more information on triggers that initiate other triggers, see Cascading Triggers on page 2-240.
An Insert trigger is activated when an INSERT statement includes the specified table (or a synonym for table) in its INTO clause. Similarly, a Delete trigger is activated when a DELETE statement includes the specified table (or a synonym for table) in its FROM clause. For triggers on views, the INSTEAD OF keywords must immediately precede the INSERT, DELETE, or UPDATE keyword that specifies the type of trigger event, and the name or synonym of a view (rather than of a table) must follow the ON keyword. The section INSTEAD OF Triggers on Views (IDS) on page 2-243 describes the syntax for defining INSTEAD OF trigger events. No more than one Insert trigger, and no more than one Delete trigger, can be defined on one table. If you define a trigger on a subtable within a table hierarchy, and the subtable supports cascading deletes, then a DELETE operation on the supertable activates the Delete trigger on the subtable. See also the section Re-Entrancy of Triggers on page 2-236 for information about dependencies and restrictions on the actions of Insert triggers and Delete triggers.
UPDATE Event
UPDATE events (and SELECT events) can include an optional column list. UPDATE Event:
UPDATE [ OF column [, column ]... ] ON table
Element: column. Description: Column that activates the trigger. Restrictions: Must exist in the triggering table.
Element: table. Description: Name of the triggering table. Restrictions: Must exist in the database.
If you define more than one Update trigger on the same table, the column list is required, and the column lists for each trigger must be mutually exclusive. If you omit the OF column list, updating any column of table activates the trigger. The OF column clause is not valid for an INSTEAD OF trigger on a view. An UPDATE on the triggering table can activate the trigger in two cases:
v The UPDATE statement references any column in the column list.
v The UPDATE event definition has no OF column list specification.
Whether it updates one column or more than one column from the column list, a triggering UPDATE statement activates the Update trigger only once.
When an UPDATE statement updates multiple columns that have different triggers, the column numbers of the triggering columns determine the order of trigger execution. Execution begins with the smallest triggering column number and proceeds in order to the largest triggering column number. The following example shows that table taba has four columns (a, b, c, d):
CREATE TABLE taba (a int, b int, c int, d int)
Define trig1 as an update on columns a and c, and define trig2 as an update on columns b and d, as the following example shows:
CREATE TRIGGER trig1 UPDATE OF a, c ON taba
   AFTER (UPDATE tabb SET y = y + 1);
CREATE TRIGGER trig2 UPDATE OF b, d ON taba
   AFTER (UPDATE tabb SET z = z + 1);
The following example shows a triggering statement for the Update trigger:
UPDATE taba SET (b, c) = (b + 1, c + 1)
Then trig1 for columns a and c executes first, and trig2 for columns b and d executes next. In this case, the smallest column number in the two triggers is column 1 (a), and the next is column 2 (b).
SELECT Event:
SELECT [ OF column [, column ]... ] ON table
Element: column. Description: Column that activates the trigger. Restrictions: Must exist in the triggering table.
Element: table. Description: Name of the triggering table. Restrictions: Must exist in the database.
If you define more than one Select trigger on the same table, the column list is required, and the column lists for each trigger must be mutually exclusive. A SELECT on the triggering table can activate the trigger in two cases:
v The SELECT statement references any column in the column list.
v The SELECT event definition has no OF column list specification.
(Sections that follow, however, describe additional circumstances that can affect whether or not a SELECT statement activates a Select trigger.) Whether it specifies one column or more than one column from the column list, a triggering SELECT statement activates the Select trigger only once. The action of a Select trigger cannot include an UPDATE, INSERT, or DELETE on the triggering table. The action of a Select trigger can include UPDATE, INSERT, and DELETE actions on tables other than the triggering table. The following example defines a Select trigger on one column of a table:
CREATE TRIGGER mytrig SELECT OF cola ON mytab
   REFERENCING OLD AS pre
   FOR EACH ROW (INSERT INTO newtab(for each action))
For example, if a Select trigger is defined to execute whenever column col1 of table tab1 is selected, then both of the following stand-alone SELECT statements activate the Select trigger:
SELECT * FROM tab1; SELECT col1 FROM tab1;
Now suppose that the following SELECT statement invokes the my_rtn UDR in its select list:
SELECT my_rtn() FROM tab2
This SELECT statement activates the Select trigger defined on column col1 of table tab1 when the my_rtn UDR is executed.
Now suppose that the following statement invokes the my_rtn procedure:
EXECUTE PROCEDURE my_rtn()
This statement activates the Select trigger defined on column col1 of table tab1 when the SELECT statement within the statement block is executed.
If you add a Select trigger to a subtable, this Select trigger can override the Select trigger that the subtable inherits from its supertable. For example, if the Select trigger trig1 is defined on column col1 in supertable tab1, the subtable tab2 inherits this trigger. But if you define a Select trigger named trig2 on column col1 in subtable tab2, and a SELECT statement selects from col1 in supertable tab1, this SELECT statement activates trigger trig1 for the rows in table tab1 and trigger trig2 (not trigger trig1) for the rows in table tab2. In other words, the trigger that you add to the subtable overrides the trigger that the subtable inherits from the supertable.
The following restrictions apply to Select triggers:
v A SELECT statement that includes the UNION or UNION ALL operator does not activate a Select trigger.
v The SELECT clause of INSERT does not activate a Select trigger.
v If the Projection clause of a SELECT includes the DISTINCT or UNIQUE keywords, the SELECT statement does not activate a Select trigger.
v Select triggers are not supported on scroll cursors.
v If a SELECT statement refers to a remote triggering table, the Select trigger is not activated on the remote database server.
v Columns in the ORDER BY list of a query activate no Select triggers (nor any other triggers) unless also listed in the Projection clause.
Action Clause
Action Clause:
An Action Clause consists of one or more Triggered Action Lists (see page 2-232), introduced by the keywords BEFORE, FOR EACH ROW, or AFTER.
The action clause defines trigger actions and can specify when they occur. You must define at least one trigger action, using the keywords BEFORE, FOR EACH ROW, or AFTER to indicate when the action occurs relative to execution of the triggering statement. You can specify actions for any or all of these three options on a single trigger, but any BEFORE action list must be specified first, and any AFTER action list must be specified last. For more information on the action clause when a REFERENCING clause is also specified, see Correlated Table Action on page 2-231.
BEFORE Actions
The list of BEFORE trigger actions executes once before the triggering statement executes. Even if the triggering statement does not process any rows, the database server executes the BEFORE trigger actions.
AFTER Actions
The specified set of AFTER trigger actions executes once after the action of the triggering statement is complete. If the triggering statement does not process any rows, the AFTER trigger actions still execute.
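For example, a hedged sketch of a single trigger that specifies BEFORE, FOR EACH ROW, and AFTER action lists (the table, column, and procedure names are hypothetical):

-- log_start, log_change, and log_end are hypothetical SPL routines
CREATE TRIGGER upd_qty UPDATE OF quantity ON items
   BEFORE (EXECUTE PROCEDURE log_start())
   FOR EACH ROW (EXECUTE PROCEDURE log_change())
   AFTER (EXECUTE PROCEDURE log_end());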
Next, assume that you define trig1 on columns a and c, and trig2 on columns b and d. If both triggers specify BEFORE, FOR EACH ROW, and AFTER actions, then the trigger actions are executed in the following order:
1. BEFORE action list for trigger (a, c)
2. BEFORE action list for trigger (b, d)
3. FOR EACH ROW action list for trigger (a, c)
4. FOR EACH ROW action list for trigger (b, d)
5. AFTER action list for trigger (a, c)
6. AFTER action list for trigger (b, d)
The database server treats all the triggers that are activated by the same triggering statement as a single trigger, and the trigger action is the merged-action list. All the rules that govern a trigger action apply to the merged list as one list, and no distinction is made between the two original triggers.
REFERENCING Clauses
The REFERENCING clause for any event declares a correlation name that can be used to qualify column values in the triggering table. These names enable FOR EACH ROW actions to reference new values in the result of trigger events. They also enable FOR EACH ROW actions to reference old column values that existed in the triggering table prior to modification by trigger events. Correlation names are not valid if the triggered action includes both the INSERT statement and the BEFORE WHEN or AFTER WHEN keywords. This restriction does not affect triggered actions that specify the FOR EACH ROW keywords without the BEFORE or AFTER keywords, or that include no INSERT statement.
REFERENCING Clause for Delete
Element: correlation. Description: Name declared here to qualify the old column value for use within the trigger action.
The correlation is a qualifier for the column value in the triggering table before the triggering statement executed. The correlation is in scope in the FOR EACH ROW trigger action list. See Correlated Table Action on page 2-231. To use a correlation name in a trigger action to refer to an old column value, prefix the column name with the correlation name and a period ( . ) symbol. For example, if the OLD correlation name is pre, refer to the old value for the column fname as pre.fname. If the trigger event is a DELETE statement, using the new correlation name as a qualifier causes an error, because the column has no value after the row is deleted. For the rules that govern the use of correlation names, see Using Correlation Names in Triggered Actions on page 2-234. You can use the REFERENCING clause for Delete only if you define a FOR EACH ROW trigger action. In Extended Parallel Server, the OLD correlation value cannot be a BYTE or TEXT value. That is, it cannot refer to a BYTE or TEXT column.
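A hedged sketch of a Delete trigger that uses the old correlation name (the table, trigger, and column names are hypothetical):

-- Archive each deleted row, using the OLD correlation name pre
CREATE TRIGGER del_cust DELETE ON customer
   REFERENCING OLD AS pre
   FOR EACH ROW
   (INSERT INTO cust_archive (customer_num, name)
      VALUES (pre.customer_num, pre.name));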
REFERENCING Clause for Insert
Element: correlation. Description: Name that you declare here for a new column value for use within the trigger action.
The correlation is a name for the new column value after the triggering statement has executed. Its scope of reference is only the FOR EACH ROW trigger action list; see Correlated Table Action on page 2-231. To use the correlation name, precede the column name with the correlation name, followed by a period ( . ) symbol. Thus, if the NEW correlation name is post, refer to the new value for the column fname as post.fname. If the trigger event is an INSERT statement, using the old correlation name as a qualifier causes an error, because no value exists before the row is inserted. For the rules that govern how to use correlation names, see Using Correlation Names in Triggered Actions on page 2-234. You can use the INSERT REFERENCING clause only if you define a FOR EACH ROW trigger action. The following example illustrates use of the INSERT REFERENCING clause. This example inserts a row into backup_table1 for every row that is inserted into table1. The values that are inserted into col1 and col2 of backup_table1 are an exact copy of the values that were just inserted into table1.
CREATE TABLE table1 (col1 INT, col2 INT);
CREATE TABLE backup_table1 (col1 INT, col2 INT);

CREATE TRIGGER before_trig INSERT ON table1
   REFERENCING NEW AS new
   FOR EACH ROW
   (INSERT INTO backup_table1 (col1, col2) VALUES (new.col1, new.col2));
As the preceding example shows, the INSERT REFERENCING clause allows you to refer to the data values that the triggering INSERT statement produces.
REFERENCING Clause for Update
Element: correlation. Description: Name that you declare here for an old or new column value within the trigger action.
The OLD correlation is the name of the value of the column in the triggering table before execution of the triggering statement; the NEW correlation identifies the corresponding value after the triggering statement executes. The scope of reference of the correlation names that you declare here is only within the FOR EACH ROW trigger action list. See Correlated Table Action on page 2-231. To refer to an old or new column value, prefix the column name with the correlation name and a period ( . ) symbol. For example, if the new correlation name is post, you can refer to the new value in column fname as post.fname. If the trigger event is an UPDATE statement, you can define both old and new correlation names to refer to column values before and after the triggering UPDATE statement. For rules that govern the use of correlation names, see Using Correlation Names in Triggered Actions on page 2-234. You can use the UPDATE REFERENCING clause only if you define a FOR EACH ROW trigger action. In Extended Parallel Server, the OLD correlation value cannot be a BYTE or TEXT value. That is, it cannot refer to a BYTE or TEXT column.
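A hedged sketch that declares both old and new correlation names for an Update trigger (all names are hypothetical):

-- Log the old and new prices whenever unit_price is updated
CREATE TRIGGER upd_price UPDATE OF unit_price ON stock
   REFERENCING OLD AS pre NEW AS post
   FOR EACH ROW
   (INSERT INTO price_log (stock_num, old_price, new_price)
      VALUES (pre.stock_num, pre.unit_price, post.unit_price));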
REFERENCING Clause for Select (IDS)
Element: correlation. Description: Name that you declare here for an old column value for use within the trigger action.
This has the same syntax as the REFERENCING Clause for Delete on page 2-228. The scope of reference of the correlation name that you declare here is only within the FOR EACH ROW trigger action list. See Correlated Table Action on page 2-231. You use the correlation name to refer to an old column value by preceding the column name with the correlation name and a period ( . ) symbol. For example, if the old correlation name is pre, you can refer to the old value for the column fname as pre.fname. If the trigger event is a SELECT statement, using the new correlation name as a qualifier causes an error because the column does not have a new value after the column is selected. For the rules that govern the use of correlation names, see Using Correlation Names in Triggered Actions on page 2-234. You can use the SELECT REFERENCING clause only if you define a FOR EACH ROW trigger action.
If the CREATE TRIGGER statement contains an INSERT REFERENCING clause, a DELETE REFERENCING clause, an UPDATE REFERENCING clause, or (for Dynamic Server only) a SELECT REFERENCING clause, you must include a FOR EACH ROW triggered-action list in the action clause. You can also include BEFORE and AFTER triggered-action lists, but they are optional.
For information on the BEFORE, FOR EACH ROW, and AFTER triggered-action lists, see Action Clause on page 2-226. In Extended Parallel Server, you cannot specify FOR EACH ROW actions on tables that have globally-detached indexes.
Triggered-Action List
Triggered Action List:
A triggered action list is a comma-separated list of triggered actions. Each triggered action consists of an optional WHEN ( Condition ) clause (see page 4-5) followed by a parenthesized, comma-separated list of statements, where each statement is an INSERT, DELETE, or UPDATE statement, or an EXECUTE PROCEDURE or EXECUTE FUNCTION statement.