Solid Database Engine
SQL Guide
Version 4.2
June 2004
All rights reserved. No portion of this product may be used in any way except as expressly authorized in writing by
Solid Information Technology.
Solid logo with the text "SOLID" is a registered trademark of Solid Information Technology Inc.
Solid AcceleratorLib™, Solid Availability™, Solid Bonsai Tree™, Solid BoostEngine™, Solid CarrierGrade Option™, Solid Database Engine™, Solid Diskless™, Solid EmbeddedEngine™, Solid FlowControl™, Solid FlowEngine™, Solid High Availability™, Solid HotStandby™, Solid Information Technology™, Solid Intelligent Transaction™, Solid Remote Control™, Solid SmartFlow™, Solid SQL Editor™, Solid SynchroNet™, and Built Solid™ are trademarks of Solid Information Technology Inc. All other products, services, companies and publications are trademarks or registered trademarks of their respective owners.
Solid Intelligent Transaction is patented by Solid Information Technology (U.S. patent 6144941).
This product contains the skeleton output parser for bison ("Bison"). Copyright (c) 1984, 1989, 1990 Bob Corbett
and Richard Stallman.
For a period of three (3) years from the date of this license, Solid Information Technology Inc. will provide you, the licensee, with a copy of the Bison source code upon receipt of your written request and the payment of Solid's reasonable costs for providing such copy.
1 Database Concepts
Relational Databases ....................................................................................................................... 1-1
Client-Server Architecture............................................................................................................... 1-5
Multi-User Capability...................................................................................................................... 1-6
Transactions..................................................................................................................................... 1-6
Transaction Logging and Recovery................................................................................................. 1-6
Summary.......................................................................................................................................... 1-7
3 Stored Procedures, Events, Triggers, and Sequences
Stored Procedures ................................................................................................................................. 3-1
Basic procedure structure ................................................................................................................ 3-2
Naming Procedures.......................................................................................................................... 3-2
Parameter Section ............................................................................................................................ 3-2
Declare Section................................................................................................................................ 3-4
Procedure Body ............................................................................................................................... 3-5
Assignments..................................................................................................................................... 3-5
Expressions ...................................................................................................................................... 3-7
Control Structures.......................................................................................................................... 3-10
Remote Stored Procedures ................................................................................................................. 3-19
Using SQL in a Stored Procedure ..................................................................................................... 3-23
EXECDIRECT .............................................................................................................................. 3-23
Using A Cursor .............................................................................................................................. 3-23
Error Handling ............................................................................................................................... 3-27
Parameter Markers in Cursors ....................................................................................................... 3-30
Calling other Procedures.................................................................................................................... 3-32
Positioned Updates and Deletes..................................................................................................... 3-33
Transactions................................................................................................................................... 3-35
Default Cursor Management.......................................................................................................... 3-35
Notes on SQL ................................................................................................................................ 3-36
Functions for Procedure Stack Viewing ....................................................................................... 3-36
Procedure Privileges ........................................................................................................................... 3-37
Using Triggers ..................................................................................................................................... 3-37
How Triggers Work....................................................................................................................... 3-38
Creating Triggers ........................................................................................................................... 3-39
Keywords and Clauses................................................................................................................... 3-40
Triggers Comments and Restrictions............................................................................................. 3-44
Triggers and Procedures .................................................................................................................... 3-45
Triggers and Transactions.................................................................................................................. 3-50
Recursion and Concurrency Conflict Errors.................................................................................. 3-50
Triggers and Referential Integrity ................................................................................................. 3-58
Trigger Privileges and Security ..................................................................................................... 3-62
Raising Errors From Inside Triggers ............................................................................................. 3-62
Trigger Example ............................................................................................................................ 3-63
Row value constructors.................................................................................................................. 4-38
6 Performance Tuning
Tuning SQL Statements and Applications.......................................................................................... 6-1
Evaluating Application Performance............................................................................................... 6-2
Using Stored Procedure Language .................................................................................................. 6-2
Optimizing Single-table SQL Queries................................................................................................. 6-3
Using Indexes to Improve Query Performance.................................................................................. 6-4
Waiting On Events ................................................................................................................................ 6-6
Optimizing Batch Inserts and Updates ............................................................................................... 6-7
Increasing Speed of Batch Inserts and Updates............................................................................... 6-7
Using Optimizer Hints.......................................................................................................................... 6-8
Diagnosing Poor Performance ........................................................................................................... 6-10
A Data Types
Supported Data Types .......................................................................................................................... A-1
DELETE .............................................................................................................................................. B-73
DELETE (positioned) ......................................................................................................................... B-73
DROP CATALOG............................................................................................................................... B-73
DROP EVENT .................................................................................................................................... B-74
DROP INDEX ..................................................................................................................................... B-74
DROP MASTER ................................................................................................................................. B-75
DROP PROCEDURE ......................................................................................................................... B-76
DROP PUBLICATION ...................................................................................................................... B-76
DROP PUBLICATION REGISTRATION....................................................................................... B-77
DROP REPLICA ................................................................................................................................ B-79
DROP ROLE ....................................................................................................................................... B-80
DROP SCHEMA................................................................................................................................. B-80
DROP SEQUENCE ............................................................................................................................ B-81
DROP SUBSCRIPTION .................................................................................................................... B-81
DROP SYNC BOOKMARK.............................................................................................................. B-83
DROP TABLE ..................................................................................................................................... B-85
DROP TRIGGER ............................................................................................................................... B-85
DROP USER........................................................................................................................................ B-86
DROP VIEW ....................................................................................................................................... B-86
EXPLAIN PLAN FOR ....................................................................................................................... B-87
EXPORT SUBSCRIPTION ............................................................................................................... B-87
GET_PARAM()................................................................................................................................... B-90
GRANT ................................................................................................................................................ B-92
GRANT REFRESH ............................................................................................................................ B-94
HINT .................................................................................................................................................... B-95
IMPORT ........................................................................................................................................... B-102
INSERT............................................................................................................................................. B-104
INSERT (Using Query) ................................................................................................................... B-105
LOCK TABLE.................................................................................................................................. B-105
MESSAGE APPEND ....................................................................................................................... B-109
MESSAGE BEGIN .......................................................................................................................... B-112
MESSAGE DELETE ....................................................................................................................... B-114
MESSAGE DELETE CURRENT TRANSACTION.................................................................... B-116
MESSAGE END .............................................................................................................................. B-118
MESSAGE EXECUTE.................................................................................................................... B-120
WAIT EVENT .................................................................................................................................. B-171
Table_reference ................................................................................................................................ B-172
Query_specification ......................................................................................................................... B-172
Search_condition.............................................................................................................................. B-173
Check_condition............................................................................................................................... B-173
Expression......................................................................................................................................... B-174
String Functions ............................................................................................................................... B-176
Numeric Functions........................................................................................................................... B-178
Date Time Functions........................................................................................................................ B-179
System Functions ............................................................................................................................. B-181
Miscellaneous Functions.................................................................................................................. B-181
Data_type .......................................................................................................................................... B-182
Date and Time Literals.................................................................................................................... B-182
Pseudo Columns ............................................................................................................................... B-183
Wildcard characters ........................................................................................................................ B-184
C Reserved Words
SYS_SYNC_REPLICA_RECEIVED_MSGS ............................................................................. D-27
SYS_SYNC_REPLICA_STORED_MSGS ................................................................................. D-28
SYS_SYNC_REPLICA_STORED_MSGPARTS ....................................................................... D-28
SYS_SYNC_REPLICA_VERSIONS .......................................................................................... D-28
SYS_SYNC_REPLICAS ............................................................................................................. D-29
SYS_SYNC_SAVED_STMTS .................................................................................................... D-29
SYS_SYNC_USERMAPS ........................................................................................................... D-30
SYS_SYNC_USERS.................................................................................................................... D-30
SYSTEM VIEWS............................................................................................................................... D-31
COLUMNS................................................................................................................................... D-31
SERVER_INFO............................................................................................................................ D-32
TABLES ....................................................................................................................................... D-33
USERS.......................................................................................................................................... D-33
SYNCHRONIZATION-RELATED VIEWS ................................................................................... D-33
SYNC_FAILED_MESSAGES..................................................................................................... D-34
SYNC_FAILED_MASTER_MESSAGES .................................................................................. D-34
SYNC_ACTIVE_MESSAGES .................................................................................................... D-35
SYNC_ACTIVE_MASTER_MESSAGES.................................................................................. D-36
E System Stored Procedures
Synchronization-Related Stored Procedures...................................................................................... E-1
SYNC_SETUP_CATALOG ........................................................................................................... E-1
SYNC_REGISTER_REPLICA....................................................................................................... E-2
SYNC_UNREGISTER_REPLICA ................................................................................................. E-4
SYNC_REGISTER_PUBLICATION............................................................................................. E-5
SYNC_UNREGISTER_PUBLICATION ....................................................................................... E-6
SYNC_SHOW_SUBSCRIPTIONS ................................................................................................ E-8
SYNC_SHOW_REPLICA_SUBSCRIPTIONS ............................................................................. E-9
SYNC_DELETE_MESSAGES..................................................................................................... E-10
SYNC_DELETE_REPLICA_MESSAGES .................................................................................. E-11
Miscellaneous Stored Procedures ...................................................................................................... E-12
SYS_GETBACKGROUNDJOB_INFO........................................................................................ E-12
Glossary
Index
Welcome
Organization
This guide contains the following chapters:
■ Chapter 1, Database Concepts, familiarizes you with the basics of relational databases.
■ Chapter 2, Getting Started with SQL, familiarizes you with the basics of SQL (Structured Query Language).
■ Chapter 3, Stored Procedures, Events, Triggers, and Sequences, explains how to use
programming features, including Stored Procedures, Triggers, etc.
■ Chapter 4, Using Solid SQL for Data Management, explains the use of SQL for tasks
such as limiting access to particular users or roles, etc.
■ Chapter 5, Diagnostics and Troubleshooting, explains how to diagnose and cure some
types of problems.
■ Chapter 6, Performance Tuning, explains how to improve performance of SQL statements.
■ Appendix A, “Data Types”, lists the valid SQL data types.
■ Appendix B, “Solid SQL Syntax”, shows the syntax for each SQL statement accepted by
the Solid Database Engine.
■ Appendix C, “Reserved Words”, lists the reserved words in SQL statements.
■ Appendix D, “Database System Tables and System Views”, lists system tables and
views.
■ Appendix E, “System Stored Procedures”, lists stored procedures that are pre-defined
by the server.
■ Appendix F, “System Events”, lists events that are pre-defined by the server.
■ The Glossary provides definitions of terms.
Audience
This guide is for users who want to learn about SQL in general or who want to learn about
Solid-specific SQL.
Conventions
Product Name
In this guide:
■ "Solid Database Engine", "Solid engine", or "Solid server" refers to the database server used in Solid products, such as Solid BoostEngine, Solid FlowEngine, or Solid Embedded Engine.
■ "Solid" used alone refers to all Solid data management products. In addition, "Solid" is
the short company name for Solid Information Technology.
Typographic
This manual uses the following typographic conventions.
[]    Brackets indicate optional items; if in bold text, brackets must be included in the syntax.
|     A vertical bar separates two mutually exclusive choices in a syntax line.
{}    Braces delimit a set of mutually exclusive choices in a syntax line; if in bold text, braces must be included in the syntax.
...   An ellipsis indicates that arguments can be repeated several times.
.
.
.     A column of three dots indicates continuation of previous lines of code.
1
Database Concepts
If you are not already familiar with relational database servers such as the Solid Database
Engine family, you may want to read this chapter.
This chapter explains the following concepts:
■ Relational Databases
■ Tables, Rows, and Columns
■ Relating data in different tables
■ Multi-User Capability / Concurrency Control and Locking
■ Client-Server architecture
■ Transactions
■ Transaction Logging and Recovery
Relational Databases
Consider the following table, named composers:

ID NAME ADDRESS
1 Beethoven 23 Ludwig Lane
2 Dylan 46 Robert Road
3 Nelson 79 Willie Way

This table contains 3 rows of data. (The top "row", which has the labels "ID", "NAME", and "ADDRESS", is shown here for the convenience of the reader. The actual table in the database does not have such a row.) The table contains 3 columns (ID, NAME, and ADDRESS). SQL provides commands to create tables, insert rows into tables, update data in tables, delete rows from tables, and query the rows in tables.
Tables in SQL, unlike arrays in programming languages like C, are not homogeneous. In SQL
one column may have one data type (such as INTEGER), while an adjacent column may
have a very different data type (such as CHAR(20), which means an array of 20 characters).
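For example, a table holding one integer column and two character columns of different lengths can be created with a single statement. The sketch below uses the composers table from this chapter's examples; the column lengths are assumptions, and the syntax shown is generic SQL rather than anything Solid-specific:

```sql
-- A sketch of creating the composers table used in this chapter.
-- Each column has its own data type; the lengths chosen for the
-- CHAR columns are illustrative assumptions.
CREATE TABLE composers (
    id      INTEGER,
    name    CHAR(20),
    address CHAR(30)
);
```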
A table may have varying numbers of rows. Rows may be inserted and deleted at any time;
you do not need to pre-allocate space for a maximum number of rows. (All database servers
have some maximum number of rows that they can handle. For example, most database
servers that run on 32-bit operating systems have a limit of approximately 2 billion rows. In
most applications, the maximum is far more than you are likely to need.)
Each row ("record") must have at least one value, or combination of values, that is unique. If
we add two composers named David Jones to our table, and we need to update the address
of only one of them, then we need some way to tell them apart. In some cases, you can find a
combination of columns that is unique, even if you can’t find any single column that con-
tains unique values. For example, if the name column is not sufficient, then perhaps the com-
bination of name and address will be unique. However, without knowing all the data ahead
of time, it is difficult to absolutely guarantee that each value will be unique. Most database
designers add an "extra" column that has no purpose other than to uniquely and easily iden-
tify each record. In our table above, for example, the ID numbers are unique. As you may
have noticed, when we actually try to update or delete a record, we identify it by its unique
ID (e.g. "... WHERE id = 1") rather than by using another value, such as name, that might
not be unique.
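The database server can enforce this design decision rather than leave it to chance. In standard SQL, a PRIMARY KEY constraint tells the server to reject any row whose identifier duplicates an existing one. The sketch below applies it to the composers table (again, the column lengths are assumptions):

```sql
-- With id declared as the primary key, the server refuses to insert
-- a second row with the same id, so a clause such as "WHERE id = 1"
-- can never match more than one row.
CREATE TABLE composers (
    id      INTEGER PRIMARY KEY,
    name    CHAR(20),
    address CHAR(30)
);
```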
Client-Server Architecture
Solid’s database engines use the client-server model. In a client-server model, a single
"server" may process requests from 1 or more "clients". This is quite similar to the way that
a restaurant works -- a single waiter and cook may handle requests from many customers.
In a client-server database model, the server is a specialized computer program that knows
how to store and retrieve data efficiently. The server typically accepts 4 basic types of
requests:
■ Insert a new piece of information
■ Update an existing piece of information
■ Retrieve an existing piece of information
■ Delete an existing piece of information
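These four request types correspond directly to the four basic SQL statements. The sketch below shows one of each, using the composers table that appears in this chapter's examples (the values themselves are illustrative):

```sql
INSERT INTO composers (id, name, address)          -- insert new information
VALUES (4, 'Strauss', '12 Johann Street');

UPDATE composers SET address = '14 Johann Street'  -- update existing information
WHERE id = 4;

SELECT id, name, address FROM composers            -- retrieve existing information
WHERE id = 4;

DELETE FROM composers WHERE id = 4;                -- delete existing information
```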
The server can store almost any type of data, but generally doesn’t know the "meaning" of
the data. The server typically knows little or nothing about "business issues", such as
accounting, inventory, etc. It doesn’t know whether a particular piece of information is an
inventory record, a description of a bank deposit, or a digitized copy of the song "American
Pie".
The "clients" are responsible for knowing something about the particular business issues and
about the "meaning" of the data. For example, we might write a client program that knows
something about accounting. The client program might know how to calculate interest on
late payments, for example. Or, the client might recognize that a particular piece of data is a
song, and might convert the digital data to analog audio output.
Of course, it’s possible to write a single program that does both the "client" and the "server"
part of the work. A program that reads digitized music and plays it could also store that data
to disk and look it up on request. However, it’s not very efficient for every company to write
its own data storage and retrieval routines. It is usually more efficient to buy an off-the-shelf
data storage solution that is general enough to meet your needs, yet has relatively high per-
formance.
Transactions
SQL allows you to group multiple statements into a single "atomic" (indivisible) piece of
work called a transaction. For example, if you write a check to a grocery store, then the gro-
cery store’s bank account should be given the money at the same instant that the money is
withdrawn from your account. It wouldn’t make sense for you to pay the money without the
grocery store receiving it, and it wouldn’t make sense for the grocery store to be paid with-
out your account having the money subtracted. If either of these operations (adding to the
grocery store’s account or subtracting from yours) fails, then the other one ought to fail, too.
If both statements are in the same transaction, and either statement fails, then you can use
the ROLLBACK command to restore things as they were before the transaction started --
this prevents half-successful transactions from occurring. Naturally, if both halves of our
financial transaction are successful, then we’d like our database transaction to be successful,
too. Successful transactions are preserved with the command COMMIT WORK. Below is a
simplistic example.
COMMIT WORK; -- Finish the previous transaction.
UPDATE stores SET balance = balance + 199.95
WHERE store_name = 'Big Tyke Bikes';
UPDATE checking_accounts SET balance = balance - 199.95
WHERE name = 'Jay Smith';
COMMIT WORK;
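The failure case works the same way. If either UPDATE reports an error, the application issues ROLLBACK instead of COMMIT WORK, and both balances are restored to their values at the start of the transaction. A sketch, using the same tables as in the example above:

```sql
UPDATE stores SET balance = balance + 199.95
WHERE store_name = 'Big Tyke Bikes';
UPDATE checking_accounts SET balance = balance - 199.95
WHERE name = 'Jay Smith';
-- Suppose the application detects that the second UPDATE failed
-- (for example, no such account exists): undo the whole transaction.
ROLLBACK WORK; -- neither table keeps any half-finished changes
```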
Background
Suppose that you are writing data to a disk drive (or other permanent storage medium) and
suddenly the power fails. The data that you write might not be written completely. For exam-
ple, you might try to write the account balance "122.73", but because of the power failure
you just write "12". The person whose account is missing some money will be quite dis-
pleased. How do we ensure that we always write complete data? Part of the solution is to use
what is called a "transaction log". (Note that in the world of computers, many different
things are called "logs". For example, the Solid Database Engines write multiple log files,
including a transaction log file and an error message log file. For the moment, we are dis-
cussing only the transaction log file.)
As we mentioned previously, work is usually done in "transactions". An entire transaction is
either committed or rolled back. No partial transactions are allowed. In the situation
described here, where we started to write a person’s new account balance to disk but lost
power before we could finish, we’d like to roll back this transaction. Any transactions that
were already completed and were correctly written to disk should, of course, be preserved.
To help us track what data has been written successfully and what data has not been written
successfully, we actually write data to a "transaction log" as well as to the database tables.
The transaction log is essentially a linear sequence of the operations that have been per-
formed -- i.e. the transactions that have been committed. There are markers in the file to
indicate the end of each transaction. If the last transaction in the file does not have an "end-
of-transaction" marker, then we know that fractional transaction was not completed, and it
should be rolled back rather than committed.
When the server re-starts after a failure, it reads the transaction log and applies the com-
pleted transactions one by one. In other words, it updates the tables in the database, using the
information in the transaction log file. This is called "recovery". When done properly, recov-
ery can even protect against power failures during the recovery process itself.
This is not a complete description of how transaction logging protects against data corrup-
tion. We have explained how the server makes sure that it doesn’t lose transactions. But we
haven’t really explained how the server prevents the database file from becoming corrupted
if a write failure occurs while the server is in the middle of writing a record to a table in the
disk drive. That topic is more advanced and is not discussed here.
Summary
This brief introduction to relational databases has explained the concepts that you need to
start using a relational database. You should now be able to answer the following questions:
Table 2–1
ID NAME ADDRESS
1 Beethoven 23 Ludwig Lane
2 Dylan 46 Robert Road
3 Nelson 79 Willie Way
If Mr. Dylan moves to 61 Bob Street, you can update his data with the command:
UPDATE composers SET ADDRESS = '61 Bob Street' WHERE ID = 2;
Because the ID field is unique for each composer, and because the WHERE clause in this
command specifies only one ID, this update will be performed on only one composer.
If Mr. Beethoven dies and you need to delete his record, you can do so with the command:
DELETE FROM composers WHERE ID = 1;
Finally, if you would like to list all the composers in your table, you can use the command:
SELECT id, name, address FROM composers;
Note that the SELECT statement, unlike the UPDATE and DELETE statements listed
above, did not include a WHERE clause. Therefore, the command applied to ALL records in
the specified table. Thus the result of this SQL statement is to select (and list) all of the com-
posers listed in the table.
ID NAME ADDRESS
1 Beethoven 23 Ludwig Lane
2 Dylan 46 Robert Road
3 Nelson 79 Willie Way
Note that although you entered the strings with quotes, they are displayed without quotes.
Even this simple series of commands helps show some important points about SQL.
■ SQL is a relatively "high level" language. A single command can create a table with as
many columns as you wish. Similarly, a single command can execute an UPDATE of
almost any complexity. Although we didn’t show it here, you can update multiple col-
umns at a time, and you can even update more than one row at a time. Operations that
might take dozens, or hundreds, of lines of code in languages like C or Java can be exe-
cuted in a single SQL command.
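As a sketch of that point, a single UPDATE statement can change several columns in several rows at once (the replacement values below are purely illustrative):

```sql
-- Update both NAME and ADDRESS for every composer whose ID is 2 or higher.
UPDATE composers
SET name = 'Unknown', address = 'No fixed address'
WHERE id >= 2;
```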
Customer Smith has two accounts, and customer Jones has one account.
INSERT INTO accounts (id, balance, customer_id) VALUES (1001, 200.00, 1);
INSERT INTO accounts (id, balance, customer_id) VALUES (1002, 5000.00, 1);
INSERT INTO accounts (id, balance, customer_id) VALUES (1003, 222.00, 2);
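For completeness, the customers referred to above might have been inserted like this (the exact column list of the customers table is our assumption, based on the surrounding text):

```sql
INSERT INTO customers (id, name) VALUES (1, 'Smith');
INSERT INTO customers (id, name) VALUES (2, 'Jones');
```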
As you probably realized by reading the preceding code, when we create an account for a
customer, we store that customer’s ID number as part of the account information. Specifi-
cally, each row in the accounts table has a customer_id value, and that customer_id value
matches the id of the customer who owns that account. Smith has customer id 1, and each of
Smith’s accounts has a 1 in the customer_id field. That means that a user can find all of
Smith’s account records by doing the following:
1. Look up Smith’s record in the customers table.
2. When we find Smith’s record, look at the id number in that record. (In Smith’s case, the
id is 1.)
3. Now look up all accounts in the accounts table that have a value of 1 in the customer_id
field.
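The three manual steps above can be collapsed into a single join query, for example:

```sql
-- Find all of Smith's accounts in one statement.
SELECT accounts.id, accounts.balance
FROM customers, accounts
WHERE accounts.customer_id = customers.id
AND customers.name = 'Smith';
```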
Of course, if a person has multiple accounts, she might want to know the total amount of
money that she has in all accounts. The computer can provide this information by using the
following query:
SELECT customers.id, SUM(balance)
FROM customers, accounts
WHERE accounts.customer_id = customers.id
GROUP BY customers.id;
Note that this time, Smith appears only once, and she appears with the total amount of
money in all her accounts.
This query uses the GROUP BY clause and an aggregate function named SUM(). The topic
of GROUP BY clauses is more complex than we want to go into during this simple introduc-
tion to SQL. This query is just to give you a little taste of the type of useful work that SQL
can do in a single statement. Getting the same result in a language like C would take many
statements.
Note that join operations are not limited to 2 tables. It’s possible to create joins with an
almost arbitrary number of tables. As a realistic extension of our banking example, we might
have another table, "checks", which holds information about each check written. Thus we
would have not only a 1-to-many relationship from each customer to her accounts, but also a
1-to-many relationship from each checking account to all of the checks written on that
account. It’s quite possible to write a query that will list all the checks that a customer has
written, even if that customer has multiple checking accounts.
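A sketch of such a three-table join follows; the column names of the hypothetical checks table (account_id, check_number, amount) are assumptions:

```sql
-- List every check written by every customer, across all accounts.
SELECT customers.name, checks.check_number, checks.amount
FROM customers, accounts, checks
WHERE accounts.customer_id = customers.id
AND checks.account_id = accounts.id;
```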
Table Aliases
SQL allows you to use an "alias" in place of a table name in some queries. In some cases,
aliases are merely an optional convenience. In some queries, however, aliases are actually
required (for reasons we won’t explain here). We’ll introduce the topic of aliases here
because they are required for some examples later in this chapter. The query below is the
same as an earlier query, except that we’ve added the table alias "a" for the accounts table
and "c" for the customers table.
SELECT name, balance
FROM customers c, accounts a
WHERE a.customer_id = c.id;
As you can see, we defined an alias in the "FROM" clause and then used it elsewhere in the
query (in the WHERE clause in this case).
Subqueries
SQL allows one query to contain another query, called a "subquery".
Returning to our bank example, over time, some customers add accounts and other custom-
ers terminate accounts. In some cases, a customer might gradually terminate accounts until
he has no more accounts. Our bank may want to identify all customers that don’t have any
accounts left. As a first step, the following query uses an EXISTS subquery to find the
customers who do have at least one account:
SELECT id, name
FROM customers c
WHERE EXISTS (SELECT * FROM accounts a WHERE a.customer_id = c.id);
The subquery (also called the "inner query") is the query inside the parentheses. The inner
query is executed once for each record selected by the outer query. (This functions a lot like
nested loops would function in another programming language, except that with SQL we can
do nested loops in a single statement.) Naturally, if there are any accounts for the particular
customer that the outer loop is processing, then those account records are returned to the
outer query.
The "EXISTS" clause in the outer query says, effectively, "We don’t care what values are in
those records; all we care about is whether there are any records or not." Thus EXISTS
returns TRUE if the customer has any accounts. If the customer has no accounts, then the
EXISTS returns false. The EXISTS clause doesn’t care whether there are multiple accounts
or single accounts. It doesn’t care what values are in the accounts. All the EXISTS wants to
know is "Is there at least one record?"
Thus, the entire statement lists those customers who have at least one account. No matter
how many accounts the customer has (as long as it’s at least 1), the customer is listed only
once.
Now let’s list all those customers who don’t have any accounts:
SELECT id, name
FROM customers c
WHERE NOT EXISTS (SELECT * FROM accounts a WHERE a.customer_id = c.id);
Merely adding the keyword NOT reverses the sense of the query.
Subqueries may themselves have subqueries. In fact, subqueries may be nested almost arbi-
trarily deep.
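As a sketch of nesting, the following query finds customers who have at least one account on which at least one check has been written (the checks table and its account_id column are assumptions):

```sql
SELECT name
FROM customers c
WHERE EXISTS
   (SELECT * FROM accounts a
    WHERE a.customer_id = c.id
    AND EXISTS
       (SELECT * FROM checks k
        WHERE k.account_id = a.id));
```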
As you can see, timestamp values are entered in order from the "most significant" digit to
the "least significant" digit. Similarly, date and time values are also entered from the most
significant digit to the least significant digit. And all 3 of these data types (timestamp, date,
time) use punctuation to separate individual fields.
The reason for requiring particular formats is that some of the other possible formats are
ambiguous. For example, to someone in the U.S., ’07-04-1776’ is July 4, 1776, since Ameri-
cans usually write dates in the ’mm-dd-yyyy’ (or ’mm/dd/yyyy’) format. But to a person
from Europe, this date is obviously April 7, not July 4th, since most Europeans write dates
in the format ’dd-mm-yyyy’. Although it may seem that the problem of having too many
formats is not well solved by adding still another format, there are some advantages to
SQL’s approach of using a format that starts with the most significant digit and moves
steadily towards the least significant digit. First, it means that all 3 data types (date, time,
and timestamp) follow the same rule. Second, the date format and the time format are both
perfect subsets of the timestamp format. Third, although it’s yet another format to memo-
rize, the rule is reasonably simple and is consistent with the way that "western" languages
write numbers (most significant digit is furthest to the left). Finally, by being obviously
incompatible with the existing formats, there’s no chance that a person will accidentally
write one date (e.g. ’07-04-1776’) and have it interpreted by the machine as another date.
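For example, the three formats look like this (the table and column names are illustrative):

```sql
CREATE TABLE events (d DATE, t TIME, ts TIMESTAMP);
-- Most significant field first in all three literals:
INSERT INTO events (d, t, ts)
VALUES ('1776-07-04', '12:00:00', '1776-07-04 12:00:00');
```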
NULL IS NOT NULL (or "How to say ’None of the Above’ in SQL")
Sometimes you don’t have enough information to fill out a form completely. SQL uses the
keyword NULL to represent "Unknown" or "No Value". (This is different from the meaning
of NULL in programming languages such as C.) For example, if we are inserting a record
for Joni Mitchel into our table of composers, and we don’t know Ms. Mitchel’s address, then
we might execute the following:
INSERT INTO composers (id, name, address) VALUES (5, ’Mitchel’, NULL);
To give you some information about NULL, and also give you some practice reading SQL
code, we’ve written our explanation of NULL as a sample program (with comments, of
course!). You can read this now. When you’re ready to run it, simply cut and paste part or all
of it into a program that executes SQL, such as the solsql utility provided with the Solid
Development Kit. (For more information about solsql, see the Administrator’s Guide.)
-- This sample script shows some unusual characteristics
-- of the value NULL.
-- Since NULL doesn't equal NULL, what will the following query return?
SELECT * FROM table1 WHERE x != x;
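The script assumes a table named table1 with a column x; a minimal setup for it might look like this (our assumption, since the full script is not reproduced here):

```sql
CREATE TABLE table1 (x INTEGER);
INSERT INTO table1 (x) VALUES (NULL);
-- The query above returns no rows: NULL != NULL evaluates to
-- NULL rather than TRUE, so no row satisfies the WHERE clause.
```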
As another example, the following statement uses the built-in SQRT function to calculate
the square root of each value in the column named "variance".
SELECT SQRT(variance) FROM table1;
Our next example uses the "REPLACE" function to convert numbers from U.S. format to
European format. In U.S. format, numbers use the period character (’.’) as the decimal point,
but in Europe the comma (’,’) is used. For example, in the U.S. the approximation of pi is
written as "3.14", while in Europe it is written as "3,14". We can use the REPLACE func-
tion to replace the ’.’ character with the ’,’ character. The following series of statements
shows an example of this.
CREATE TABLE number_strings (n VARCHAR);
INSERT INTO number_strings (n) VALUES ('3.14'); -- input in US format.
SELECT REPLACE(n, '.', ',') FROM number_strings; -- output in European.
The output of course looks like
n
---------
3,14
Note that one function can call another. The following expression takes the square root of a
number and then takes the natural log of that square root:
SELECT LOG(SQRT(x)) FROM table1;
When you use expressions, you may want to specify a new name for a column. For exam-
ple, if you use the expression
SELECT monthly_average * 12 FROM table1;
you probably don’t want the output column to be called "monthly_average". Solid’s data-
base server will actually use the expression itself as the name of the column. In this case, the
name of the column would be "monthly_average * 12". That’s certainly descriptive, but for
a long expression this can get very messy. You can use the "AS" keyword to give an output
column a specific name. In the following example, the output will have the column
heading "yearly_average".
SELECT monthly_average * 12 AS yearly_average FROM table1;
Note that the AS clause works for any output column, not just for expressions. If you like,
you may do something like the following:
SELECT ssn AS SocialSecurityNumber FROM table2;
A CASE clause allows you to control the output based on the input. Below is a simple exam-
ple, which converts a number (1-12) to the name of a month:
CREATE TABLE dates (m INT);
INSERT INTO dates (m) VALUES (1);
-- ...etc.
INSERT INTO dates (m) VALUES (12);
INSERT INTO dates (m) VALUES (13);
SELECT
CASE m
WHEN 1 THEN 'January'
-- etc.
WHEN 12 THEN 'December'
ELSE 'Invalid month'
END
FROM dates;
In some situations, you may want to cast a value to a different data type. For example, when
inserting BLOB data, it is convenient to create a string that contains your data, and then
insert that string into a BINARY column. You may use a cast as shown below:
CREATE TABLE table1 (b BINARY(4));
INSERT INTO table1 VALUES (CAST('FF00AA55' AS BINARY));
This cast allows you to take data that is a series of hexadecimal digits and input it as though
it were a string. Each of the hexadecimal pairs in the quoted string represents a single byte of
data. There are 8 hexadecimal digits, and thus 4 bytes of input.
A cast can be used to change output as well as input. In the rather complex code sample
below, the expression in the CASE clause converts the output from the format
'2003-01-20 15:33:40' to
'2003-Jan-20 15:33:40'.
CREATE TABLE sample1(dt TIMESTAMP);
INSERT INTO sample1 (dt) VALUES ('2003-01-20 15:33:40');
COMMIT WORK;
SELECT
CASE MONTH(dt)
WHEN 1 THEN REPLACE(CAST(dt AS varchar), '-01-', '-Jan-')
WHEN 2 THEN REPLACE(CAST(dt AS varchar), '-02-', '-Feb-')
WHEN 3 THEN REPLACE(CAST(dt AS varchar), '-03-', '-Mar-')
WHEN 4 THEN REPLACE(CAST(dt AS varchar), '-04-', '-Apr-')
WHEN 5 THEN REPLACE(CAST(dt AS varchar), '-05-', '-May-')
WHEN 6 THEN REPLACE(CAST(dt AS varchar), '-06-', '-Jun-')
-- ... WHEN 7 through WHEN 12 follow the same pattern.
END AS formatted_dt
FROM sample1;
This takes a value from a column named "dt", converts that value from timestamp to VAR-
CHAR, then replaces the month number with an abbreviation for the month (e.g. it replaces
"-01-" with "-Jan-"). By using the CASE/WHEN/END syntax, we can specify exactly what
output we want for each possible input. Note that because this expression is so complicated,
it is almost mandatory to use an AS clause to specify the column header in the output.
In Solid, a number of features are available that make it possible to move parts of the appli-
cation logic into the database. These features include:
■ stored procedures
■ deferred procedure calls ("Start After Commit")
■ event alerts
■ triggers
■ sequences
Stored Procedures
Stored procedures are simple programs, or procedures, that are executed in Solid databases.
The user can create procedures that contain several SQL statements or whole transactions,
and execute them with a single call statement. In addition to SQL statements, 3GL-type
control structures can be used, enabling procedural control. In this way complex, data-bound
transactions may be run on the server itself, thus reducing network traffic.
Granting execute rights on a stored procedure automatically grants the necessary access
rights to all database objects used in the procedure. Therefore, administering database access
rights may be greatly simplified by allowing access to critical data through procedures.
This section explains in detail how to use stored procedures. In the beginning of this section,
the general concepts of using the procedures are explained. Later sections go more in-depth
and describe the actual syntax of different statements in the procedures. The end of this sec-
tion discusses transaction management, sequences and other advanced stored procedure fea-
tures.
Naming Procedures
Procedure names have to be unique within a database schema.
All the standard naming restrictions applicable to database objects, like using reserved
words, identifier lengths, etc., apply to stored procedure names. For an overview and com-
plete list of reserved words, see Appendix C, “Reserved Words”.
Parameter Section
A stored procedure communicates with the calling program using parameters. Stored proce-
dures accept two types of parameters:
■ Input parameters, which are used as input to the procedure.
■ Output parameters, which are returned values from the procedure. Stored procedures
may return a result set of several rows with output parameters as the columns.
The types of parameters must be declared. For supported data types, see Appendix A, “Data
Types”.
The syntax used in parameter declaration is:
parameter_name parameter_datatype
Input parameters are declared between parentheses directly after the procedure name, output
parameters are declared in a special RETURNS section of the procedure definition:
"CREATE PROCEDURE procedure_name
[ (input_param1 datatype[,
input_param2 datatype, … ]) ]
[ RETURNS
(output_param1 datatype[,
output_param2 datatype, … ]) ]
BEGIN
procedure_body
END";
There can be any number of input and output parameters. Input parameters have to be sup-
plied in the same order as they are defined when the procedure is called.
Declaring input parameters in the procedure heading makes their values accessible inside the
procedure by referring to the parameter name.
The output parameters will appear in the returned result set. The parameters will appear as
columns in the result set in the same order as they are defined. A procedure may return one
or more rows. Thus, select statements can be wrapped into database procedures.
The following statement creates a procedure that has two input parameters and two output
parameters:
"CREATE PROCEDURE PHONEBOOK_SEARCH
(FIRST_NAME VARCHAR, LAST_NAME VARCHAR)
RETURNS (PHONE_NR NUMERIC, CITY VARCHAR)
BEGIN
-- procedure_body
END";
This procedure should be called using two input parameters of data type VARCHAR. The
procedure returns an output table consisting of 2 columns named PHONE_NR of type
NUMERIC and CITY of type VARCHAR.
For example:
call phonebook_search ( 'JOHN','DOE');
The result looks like the following (once the procedure body has been programmed):
PHONE_NR CITY
3433555 NEW YORK
2345226 LOS ANGELES
Declare Section
Local variables that are used inside the procedure for temporary storage of column and con-
trol values are defined in a separate section of the stored procedure directly following the
BEGIN keyword.
The syntax of declaring a variable is:
DECLARE variable_name datatype;
Note that every declare statement should be ended with a semicolon (;).
The variable name is an alphanumeric string that identifies the variable. The data type of the
variable can be any valid SQL data type supported. For supported data types, see Appendix
A, “Data Types”.
For example:
"CREATE PROCEDURE PHONEBOOK_SEARCH
(FIRST_NAME VARCHAR, LAST_NAME VARCHAR)
RETURNS (PHONE_NR NUMERIC, CITY VARCHAR)
BEGIN
DECLARE i INTEGER;
END";
Note that input and output parameters are treated like local variables within a procedure with
the exception that input parameters have a preset value and output parameter values are
returned or can be appended to the returned result set.
Procedure Body
The procedure body contains the actual stored procedure program based on assignments,
expressions, and SQL statements.
Any type of expression, including scalar functions, can be used in a procedure body. For
valid expressions, see Appendix B, “Solid SQL Syntax”.
Assignments
To assign a value to a variable, either of the following forms is used:
SET variable_name = expression ;
or
variable_name := expression ;
Example:
SET i = i + 20 ;
i := 100;
For a list of Solid-supported scalar functions (SQL-92), see Appendix B, “Solid SQL Syntax”.
Note that the Programmer Guide contains an appendix that describes ODBC scalar
functions, which differ in some respects from SQL-92.
Expressions
Comparison Operators
Comparison operators compare one expression to another. The result is always TRUE,
FALSE, or NULL. Typically, comparisons are used in conditional control statements and
allow comparisons of arbitrarily complex expressions. The following table gives the mean-
ing of each operator:
Operator Meaning
= is equal to
<> is not equal to
< is less than
> is greater than
<= is less than or equal to
>= is greater than or equal to
Note that the != notation cannot be used inside a stored procedure; use the ANSI-SQL-compliant
<> instead.
Logical Operators
The logical operators can be used to build more complex queries. The logical operators
AND, OR, and NOT operate according to the tri-state logic illustrated by the truth tables
shown below. AND and OR are binary operators; NOT is a unary operator.
AND        TRUE    FALSE   NULL
TRUE       TRUE    FALSE   NULL
FALSE      FALSE   FALSE   FALSE
NULL       NULL    FALSE   NULL

OR         TRUE    FALSE   NULL
TRUE       TRUE    TRUE    TRUE
FALSE      TRUE    FALSE   NULL
NULL       TRUE    NULL    NULL

NOT        TRUE    FALSE   NULL
           FALSE   TRUE    NULL
As the truth tables show, AND returns the value TRUE only if both its operands are true. On
the other hand, OR returns the value TRUE if either of its operands is true. NOT returns the
opposite value (logical negation) of its operand. For example, NOT TRUE returns FALSE.
NOT NULL returns NULL because nulls are indeterminate.
When not using parentheses to specify the order of evaluation, operator precedence deter-
mines the order.
Note that ‘true’ and ‘false’ are not literals accepted by the SQL parser; they are values. The
value of a logical expression can be interpreted as a numeric value:
false = 0 or NULL
true = 1 or any other numeric value
Example:
IF expression = TRUE THEN
can be simply written
IF expression THEN
IS NULL Operator
The IS NULL operator returns the Boolean value TRUE if its operand is null, or FALSE if it
is not null. Comparisons involving nulls always yield NULL. To test whether a value is
NULL, do not use the expression,
IF variable = NULL THEN...
because it never evaluates to TRUE.
Instead, use the following statement:
IF variable IS NULL THEN...
Note that when using multiple logical operators in Solid stored procedures the individual
logical expressions should be enclosed in parentheses like:
((A >= B) AND (C = 2)) OR (A = 3)
Control Structures
The following sections describe the statements that can be used in the procedure body,
including branch and loop statements.
IF Statement
Often, it is necessary to take alternative actions depending on circumstances. The IF state-
ment executes a sequence of statements conditionally. There are three forms of IF state-
ments: IF-THEN, IF-THEN-ELSE, and IF-THEN-ELSEIF.
IF-THEN
The simplest form of IF statement associates a condition with a statement list enclosed by
the keywords THEN and END IF (not ENDIF), as follows:
IF condition THEN
statement_list;
END IF
The sequence of statements is executed only if the condition evaluates to TRUE. If the con-
dition evaluates to FALSE or NULL, the IF statement does nothing. In either case, control
passes to the next statement. An example follows:
IF sales > quota THEN
SET pay = pay + bonus;
END IF
IF-THEN-ELSE
The second form of IF statement adds the keyword ELSE followed by an alternative state-
ment list, as follows:
IF condition THEN
statement_list1;
ELSE
statement_list2;
END IF
The statement list in the ELSE clause is executed only if the condition evaluates to FALSE
or NULL. Thus, the ELSE clause ensures that a statement list is executed. In the following
example, the first or second assignment statement is executed when the condition is true or
false, respectively:
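The example might look like the following sketch, reusing the sales/bonus variables from the IF-THEN example above (the penalty variable is illustrative):

```sql
IF sales > quota THEN
SET pay = pay + bonus;
ELSE
SET pay = pay - penalty;
END IF
```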
IF-THEN-ELSEIF
Occasionally it is necessary to select an action from several mutually exclusive alternatives.
The third form of IF statement uses the keyword ELSEIF to introduce additional conditions,
as follows:
IF condition1 THEN
statement_list1;
ELSEIF condition2 THEN
statement_list2;
ELSE
statement_list3;
END IF
If the first condition evaluates to FALSE or NULL, the ELSEIF clause tests another condi-
tion. An IF statement can have any number of ELSEIF clauses; the final ELSE clause is
optional. Conditions are evaluated one by one from top to bottom. If any condition evaluates
to TRUE, its associated statement list is executed and the rest of the statements (inside the
IF-THEN-ELSEIF) are skipped. If all conditions evaluate to FALSE or NULL, the sequence
in the ELSE clause is executed. Consider the following example:
IF condition1 THEN
statement_list1;
ELSE
IF condition2 THEN
statement_list2;
ELSE
statement_list3;
END IF
END IF

IF condition1 THEN
statement_list1;
ELSEIF condition2 THEN
statement_list2;
ELSE
statement_list3;
END IF
These statements are logically equivalent, but the first statement obscures the flow of logic,
whereas the second statement reveals it.
WHILE-LOOP
The WHILE-LOOP statement associates a condition with a sequence of statements enclosed
by the keywords LOOP and END LOOP, as follows:
WHILE condition LOOP
statement_list;
END LOOP
Before each iteration of the loop, the condition is evaluated. If the condition evaluates to
TRUE, the statement list is executed, then control resumes at the top of the loop. If the con-
dition evaluates to FALSE or NULL, the loop is bypassed and control passes to the next
statement. An example follows:
WHILE total <= 25000 LOOP
...
total := total + salary;
END LOOP
The number of iterations depends on the condition and is unknown until the loop completes.
Since the condition is tested at the top of the loop, the sequence might execute zero times. In
the example above, if the initial value of "total" is greater than 25000, the condition
evaluates to FALSE and the loop is bypassed altogether.
Loops can be nested. When an inner loop finishes, control returns to the enclosing loop, and
the procedure continues from the next statement after the inner loop's END LOOP.
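A minimal sketch of a nested loop (the variable names are illustrative):

```sql
i := 0;
WHILE i < 3 LOOP
j := 0;
WHILE j < 3 LOOP
j := j + 1;
END LOOP
-- control returns here after the inner loop finishes
i := i + 1;
END LOOP
```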
Leaving Loops
It may be necessary to force the procedure to leave a loop prematurely. This can be imple-
mented using the LEAVE keyword:
WHILE total < 25000 LOOP
total := total + salary;
IF exit_condition THEN
LEAVE;
END IF
END LOOP
statement_list2
When the exit_condition evaluates to TRUE, the loop is exited, and the procedure continues
at statement_list2.
Note
Although Solid databases support the ANSI-SQL CASE syntax, the CASE construct cannot
be used inside a stored procedure as a control structure.
Handling Nulls
Nulls can cause confusing behavior. To avoid some common errors, observe the following
rules:
■ comparisons involving nulls always yield NULL
■ applying the logical operator NOT to a null yields NULL
■ in conditional control statements, if the condition evaluates to NULL, its associated
sequence of statements is not executed
In the example below, you might expect the statement list to execute because "x" and "y"
seem unequal. Remember though that nulls are indeterminate. Whether "x" is equal to "y" or
not is unknown. Therefore, the IF condition evaluates to NULL and the statement list is
bypassed.
x := 5;
y := NULL;
...
IF x <> y THEN -- evaluates to NULL, not TRUE
statement_list; -- not executed
END IF
NOT Operator
Applying the logical operator NOT to a null yields NULL. Thus, the following two state-
ments are not always equivalent:
IF x > y THEN
high := x;
ELSE
high := y;
END IF

IF NOT x > y THEN
high := y;
ELSE
high := x;
END IF
The sequence of statements in the ELSE clause is executed when the IF condition evaluates
to FALSE or NULL. If either or both "x" and "y" are NULL, the first IF statement assigns
the value of "y" to "high", but the second IF statement assigns the value of "x" to "high". If
neither "x" nor "y" is NULL, both IF statements assign the corresponding value to "high".
Zero-Length Strings
A Solid server treats a zero-length string as exactly that: a string of zero length, not a null.
NULL values should be specifically assigned as in the following:
SET a = NULL;
This also means that an IS NULL check returns FALSE when applied to a zero-length
string.
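The following sketch illustrates the point (the variables are illustrative):

```sql
a := '';
IF a IS NULL THEN
-- This branch is NOT executed: '' is a string of
-- length zero, not a null.
b := 1;
END IF
```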
Example
Following is an example of a simple procedure that determines whether a person is an adult
on the basis of a birthday as input parameter.
Note the usage of {fn ...} on scalar functions, and semicolons to end assignments.
"CREATE PROCEDURE grown_up
(birth_date DATE)
RETURNS (description VARCHAR)
BEGIN
DECLARE age INTEGER;
-- determine the number of years since the day of birth
age := {fn TIMESTAMPDIFF(SQL_TSI_YEAR, birth_date, now())};
IF age >= 18 THEN
-- If age is at least 18, then it’s an adult
description := 'ADULT';
ELSE
-- otherwise it’s still a minor
description := 'MINOR';
END IF
END";
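The procedure can then be invoked with a CALL statement; the date literal below is illustrative:

```sql
CALL grown_up('1980-05-01');
```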
Exiting a Procedure
A procedure may be exited prematurely by issuing the keyword
RETURN;
at any location. After this keyword, control is directly handed to the program calling the pro-
cedure returning the values bound to the output parameters as indicated in the returns-sec-
tion of the procedure definition.
Returning Data
By default a stored procedure returns one row of data. The row is returned when the com-
plete procedure has been run or has been forced to exit. This row conforms to the declared
output parameters in the parameter section of the procedure.
It is also possible to return result sets from a procedure using the following syntax:
RETURN ROW;
Every RETURN ROW call adds a new row into the returned result set where column values
are the current values of the output parameters.
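A minimal sketch of a procedure that builds a multi-row result set with RETURN ROW (the procedure and column names are our own):

```sql
"CREATE PROCEDURE list_squares
RETURNS (n INTEGER, squared INTEGER)
BEGIN
n := 1;
WHILE n <= 3 LOOP
squared := n * n;
RETURN ROW; -- appends (n, squared) to the result set
n := n + 1;
END LOOP
END";
```

Calling this procedure would return three rows: (1, 1), (2, 4), and (3, 9).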
Important
Transaction handling for remote stored procedures is different from transaction handling for
local stored procedures. When a stored procedure is called remotely, the execution of the
stored procedure is NOT a part of the transaction that contained the call. Therefore, you
cannot roll back a stored procedure call by rolling back the transaction that called it.
The full syntax of the command to call a remote stored procedure is:
CALL <proc-name>[(param [, param...])] AT node-def;
node-def ::= DEFAULT | ‘replica name’ | ‘master name’
For example:
CALL MyProc('Smith', 750) AT replica1;
CALL MyProcWithoutParameters AT replica2;
See Appendix B, “Solid SQL Syntax”, for more details about the CALL statement.
The node definition "DEFAULT" is used only with the START AFTER COMMIT state-
ment. See the section on START AFTER COMMIT for more details.
Note that you may only list one node definition per CALL. If you wish to notify multiple
replicas, for example, then you need to call each of them separately. You may, however, cre-
ate a stored procedure that contains multiple CALL statements, and then simply make a sin-
gle call to that procedure.
The remote stored procedure is always created on the server that executes the procedure, not
on the server that calls the procedure. For example, if the master is going to call procedure
foo() to execute on replica1, then procedure foo() must have been created on replica1. The
master does not know the "content" of the stored procedure that it calls remotely. In fact, the
master does not know anything at all about the stored procedure other than the information
specified in the CALL statement itself, for example:
CALL foo(param1, param2) AT replica1
which of course includes the procedure's name, some parameter values, and the name of the
replica on which the procedure is to be executed. The stored procedure is not registered with
the caller. This means that the caller in some sense calls the procedure "blindly", without
even knowing if it's there. Of course, if the caller tries to call a procedure that doesn't exist,
then the caller will get an error message that says that the procedure doesn't exist.
Dynamic parameter binding is supported. For example, the following is legal:
CALL MYPROC(?, ?) AT MYREPLICA1;
Calls to the stored procedure are not buffered or queued. If you call the stored procedure
and the procedure does not exist, the call does not "persist", waiting until the stored proce-
dure appears. Similarly, if the procedure does exist but the server holding that procedure is
shut down, disconnected from the network, or inaccessible for any other reason, then the
call is not held "open" and retried when the server becomes accessible again. This is
important to know when using the "Sync Pull Notify" (push synchronization) feature.
ACCESS RIGHTS
To call a stored procedure, the caller must have EXECUTE privilege on that procedure.
(This is true for any stored procedure, whether it is called locally or remotely.)
When a procedure is called locally, it is executed with the privileges of the caller. When a
procedure is called remotely, it may be executed either with the privileges of a specified user
on the remote server, or with the privileges of the remote user who corresponds to the local
caller. (The replica and master users must already be mapped to each other before the stored
procedure is called. For more information about mapping replica users to master users, see
the SmartFlow Guide.)
If a remote stored procedure was called from the replica (and is to be executed on the mas-
ter), then you have the option of specifying which master user’s privileges you would like
the procedure to be executed with.
If the remote stored procedure was called from the master (and is to be executed on the rep-
lica), or if you do not specify which user’s privileges to use, then the calling server will fig-
ure out which user’s privileges should be used, based on which user called the stored
procedure and the mapping between replica and master users.
These possibilities are explained in more detail below.
1. If the procedure was called from a replica (and will be executed on the master), then
you may execute the SET SYNC USER statement to specify which master user’s privi-
leges to use. You must execute SET SYNC USER on the local server before calling the
remote stored procedure. Once the sync user has been specified on the calling server, the
calling server will send the user name and password to the remote server (the master
server) each time a remote stored procedure is called. The remote server will try to exe-
cute the procedure using the user id and password that were sent with the procedure
call. The user id and password must exist in the remote server, and the specified user
must have appropriate access rights to the database and EXECUTE privilege on the
called procedure.
The SET SYNC USER statement is valid only on a replica, so you can only specify the
sync user when a replica calls a stored procedure on a master.
2. If the caller is a master, or if the call was made from a replica and you did not specify a
sync user before the call, then the servers will attempt to determine which user on the
remote server corresponds to the user on the local server.
If the calling server is a replica (R -> M)
The calling server sends the following information to the remote server when calling a
remote procedure:
■ Name of the master (SYS_SYNC_MASTERS.NAME).
■ Replica id (SYS_SYNC_MASTERS.REPLICA_ID).
■ Master user id. (This is the master user id that corresponds to the user id of the
local user who called the procedure. This local user must already be mapped to
the corresponding master user.)
Note that this method of selecting the master user id is the same as the method used
when a replica refreshes data -- the replica looks in the SYS_SYNC_USERS table to
find the master user who is mapped to the current local replica user.
If the calling server is a master (M -> R)
The calling server sends the following information to the remote server when calling a
remote procedure:
■ Name of the master (SYS_SYNC_REPLICAS.MASTER_NAME).
■ Replica id (SYS_SYNC_REPLICAS.ID).
■ User name of the caller.
■ User id of the caller.
When the replica receives the master user id, the replica looks up the local user who
is mapped to that master id. Since more than one replica user may be mapped to a
single master user, the server will use the first local user it finds who is mapped to
the specified master user and who has the privileges required to execute this stored
procedure.
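The first case above, specifying a sync user before a replica calls a procedure on the master, can be sketched as follows. The user name, password, procedure name, and node name are illustrative:
-- On the replica, before making the remote call:
SET SYNC USER smith IDENTIFIED BY 'smithpwd';
-- Each subsequent remote procedure call now carries this user id
-- and password to the master. User smith must exist in the master
-- and have EXECUTE privilege on the called procedure.
CALL refresh_prices AT masternode;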
Before a master server can call a stored procedure on a replica server, the master must of
course know the connect string of the replica. If a replica allows calls from a master, then the
replica should define its own connect string information in the solid.ini file. This informa-
tion is provided to the master (the replica includes a copy whenever it forwards a message to
the master). When the master receives the connect string from the replica, the master
replaces the previously stored value if the new value differs.
Example:
[Synchronizer]
ConnectStrForMaster=tcp replicahost 1316
It is also possible to inform the master of the replica’s connect string by using the statement:
SET SYNC CONNECT <connect-info> TO REPLICA <replica-name>
This is useful if the master needs to call the replica but the replica has not yet provided its
connect string to the master (i.e. has not yet forwarded any message to the master).
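For example, if the replica was registered under the name replica1 and listens at the address shown in the solid.ini example above, the master can be informed with (the replica name is illustrative):
SET SYNC CONNECT 'tcp replicahost 1316' TO REPLICA replica1;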
EXECDIRECT
The EXECDIRECT syntax is particularly appropriate for statements where there is no result
set, and where you do not have to use any variable to specify a parameter value. For exam-
ple, the following statement inserts a single row of data:
EXEC SQL EXECDIRECT insert into table1 (id, name) values (1, 'Smith');
For more information about EXECDIRECT, see “EXECDIRECT” on page B-40.
Using A Cursor
Cursors are appropriate for statements where there is a result set, or where you want to
repeat a single basic statement but use different values from a local variable as a parameter
(e.g. in a loop).
A cursor is a specific allocated part of the server process memory that keeps track of the
statement being processed. Memory space is allocated for holding one row of the underly-
ing statement, together with some status information on the current row (in SELECTS) or
the number of rows affected by the statement (in UPDATES, INSERTS and DELETES).
In this way query results are processed one row at a time. The stored procedure logic should
take care of the actual handling of the rows, and the positioning of the cursor on the required
row(s).
There are five basic steps in handling a cursor:
1. Preparing the cursor - the definition
2. Executing the cursor - executing the statement
3. Fetching on the cursor (for select procedure calls) - getting the results row by row
4. Closing the cursor after use - still enabling it to re-execute
5. Dropping the cursor from memory - removing it
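Using illustrative names, the five steps look like this for a cursor that reads table names from sys_tables into a variable tab:
EXEC SQL PREPARE sel_tab SELECT table_name FROM sys_tables; -- 1. prepare
EXEC SQL EXECUTE sel_tab INTO (tab);                        -- 2. execute
EXEC SQL FETCH sel_tab;                                     -- 3. fetch a row
EXEC SQL CLOSE sel_tab;                                     -- 4. close
EXEC SQL DROP sel_tab;                                      -- 5. drop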
The statement is now executed and the resulting table names will be returned into variable
tab in the subsequent Fetch statements.
Note that after the completion of the loop, the variable tab will contain the last fetched table
name.
Example
Here is an example of a stored procedure that uses EXECDIRECT in one place and uses a
cursor in another place.
"CREATE PROCEDURE p2
BEGIN
DECLARE tab_nm VARCHAR;
-- EXECDIRECT: no result set and no parameter variables
-- (table and column names are illustrative).
EXEC SQL EXECDIRECT insert into table1 (id, name) values (2, 'Jones');
-- Cursor: a result set is fetched row by row.
EXEC SQL PREPARE sel_tab SELECT table_name FROM sys_tables;
EXEC SQL EXECUTE sel_tab INTO (tab_nm);
EXEC SQL FETCH sel_tab;
EXEC SQL CLOSE sel_tab;
EXEC SQL DROP sel_tab;
END";
Error Handling
SQLSUCCESS
The return value of the latest EXEC SQL statement executed inside a procedure body is
stored into variable SQLSUCCESS. This variable is automatically generated for every pro-
cedure. If the previous SQL statement was successful, the value 1 is stored into SQLSUC-
CESS. After a failed SQL statement, a value 0 is stored into SQLSUCCESS.
The value of SQLSUCCESS may be used, for instance, to determine when the cursor has
reached the end of the result set as in the following example:
EXEC SQL FETCH sel_tab;
-- loop as long as the last statement in the loop is successful
WHILE SQLSUCCESS LOOP
EXEC SQL FETCH sel_tab;
END LOOP
SQLERRNUM
This variable contains the error code of the latest SQL statement executed. It is automati-
cally generated for every procedure. After successful execution, SQLERRNUM contains
zero (0).
SQLERRSTR
This variable contains the error string from the last failed SQL statement.
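For example, a procedure can return the error string of a failed statement to its caller; the table and column names are illustrative:
EXEC SQL EXECDIRECT update table1 set name = 'Jones' where id = 1;
IF NOT SQLSUCCESS THEN
-- pass the actual error text back to the calling application
RETURN SQLERROR SQLERRSTR;
END IF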
SQLROWCOUNT
After the execution of UPDATE, INSERT and DELETE statements, an additional variable is
available to check the result of the statement. Variable SQLROWCOUNT contains the num-
ber of rows affected by the last statement.
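For example, a procedure can verify that a statement actually affected a row; the table name is illustrative:
EXEC SQL EXECDIRECT delete from table1 where id = 5;
IF SQLROWCOUNT = 0 THEN
-- no row matched the WHERE clause
RETURN SQLERROR 'no row with id 5';
END IF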
SQLERROR
To generate user errors from procedures, the SQLERROR variable may be used to return, to
the calling application, the actual error string that caused the statement to fail. The syntax is:
RETURN SQLERROR 'error string'
RETURN SQLERROR char_variable
The error is returned in the following format:
User error: error_string
SQLERROR OF cursorname
For error checking of EXEC SQL statements, the SQLSUCCESS variable may be used as
described under SQLSUCCESS in the beginning of this section. To return, to the calling
application, the actual error that caused the statement to fail, the following syntax may be
used:
EXEC SQL PREPARE cursorname sql_statement;
EXEC SQL EXECUTE cursorname;
IF NOT SQLSUCCESS THEN
EXEC SQL CLOSE cursorname;
EXEC SQL DROP cursorname;
RETURN SQLERROR OF cursorname;
END IF
The parameters in a SQL statement have no intrinsic data type or explicit declaration. There-
fore, parameter markers can be included in a SQL statement only if their data types can be
inferred from another operand in the statement.
For example, in an arithmetic expression such as ? + COLUMN1, the data type of the
parameter can be inferred from the data type of the named column represented by
COLUMN1. A procedure cannot use a parameter marker if the data type cannot be deter-
mined.
The following table describes how a data type is determined for several types of parameters.
In the following example, a stored procedure will read rows from one table and insert parts
of them in another, using multiple cursors:
"CREATE PROCEDURE tabs_in_schema (schema_nm VARCHAR)
RETURNS (nr_of_rows INTEGER)
BEGIN
DECLARE tab_nm VARCHAR;
EXEC SQL PREPARE sel_tab
SELECT table_name
FROM sys_tables
WHERE table_schema = ?;
EXEC SQL PREPARE ins_tab
INSERT INTO my_table (table_name, schema) VALUES (?,?);
nr_of_rows := 0;
EXEC SQL EXECUTE sel_tab USING (schema_nm) INTO (tab_nm);
EXEC SQL FETCH sel_tab;
-- loop as long as rows are found
WHILE SQLSUCCESS LOOP
EXEC SQL EXECUTE ins_tab USING (tab_nm, schema_nm);
nr_of_rows := nr_of_rows + 1;
EXEC SQL FETCH sel_tab;
END LOOP
-- close and drop the used cursors
EXEC SQL CLOSE sel_tab;
EXEC SQL DROP sel_tab;
EXEC SQL CLOSE ins_tab;
EXEC SQL DROP ins_tab;
END";
Like any other SQL statement, a procedure call is executed through a cursor, which is first prepared and then executed:
EXEC SQL PREPARE cp CALL myproc( ?,?);
EXEC SQL EXECUTE cp USING (var1, var2);
If procedure myproc returns one or more values, then subsequently a fetch should be done on
the cursor cp to retrieve those values:
EXEC SQL PREPARE cp call myproc(?,?);
EXEC SQL EXECUTE cp USING (var1, var2) INTO (ret_var1,
ret_var2);
EXEC SQL FETCH cp;
Note that if the called procedure uses a return row statement, the calling procedure should
utilize a WHILE LOOP construct to fetch all results.
Recursive calls are possible, but discouraged because cursor names are unique at connection
level.
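For example, if myproc uses a return row statement to return several rows, the caller fetches until the fetch fails; the variable names are illustrative:
EXEC SQL PREPARE cp call myproc(?,?);
EXEC SQL EXECUTE cp USING (var1, var2) INTO (ret_var1, ret_var2);
EXEC SQL FETCH cp;
WHILE SQLSUCCESS LOOP
-- process ret_var1 and ret_var2 here
EXEC SQL FETCH cp;
END LOOP
EXEC SQL CLOSE cp;
EXEC SQL DROP cp;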
Below is an example written with pseudo code that will cause an endless loop with a Solid
server (error handling, binding variables and other important tasks omitted for brevity and
clarity):
"CREATE PROCEDURE ENDLESS_LOOP
BEGIN
EXEC SQL PREPARE MYCURSOR SELECT * FROM TABLE1;
EXEC SQL PREPARE MYCURSOR_UPDATE
UPDATE TABLE1 SET COLUMN2 = 'new data'
WHERE CURRENT OF MYCURSOR;
EXEC SQL EXECUTE MYCURSOR;
EXEC SQL FETCH MYCURSOR;
WHILE SQLSUCCESS LOOP
EXEC SQL EXECUTE MYCURSOR_UPDATE;
EXEC SQL COMMIT WORK;
EXEC SQL FETCH MYCURSOR;
END LOOP
END";
The endless loop is caused by the fact that when the update is committed, a new version of
the row becomes visible in the cursor and it is accessed in the next FETCH statement. This
happens because the incremented row version number is included in the key value and the
cursor finds the changed row as the next greater key value after the current position. The row
gets updated again, the key value is changed and again it will be the next row found.
In the above example, the updated column2 is not assumed to be part of the primary key for
the table, and the row version number was the only part of the index entry that changed.
However, if a column value is changed that is part of the index through which the cursor has
searched the data, the changed row may jump further forward or backward in the search set.
For these reasons, positioned update is not recommended in general; searched update should
be used instead whenever possible. However, sometimes the update logic may be too
complex to express in a SQL WHERE clause, and in such cases positioned update can be
used.
Positioned cursor update works deterministically in Solid when the WHERE clause is such
that the updated row no longer matches the search criteria and therefore does not reappear
in the fetch loop. Constructing such search criteria may require adding a column used only
for this purpose.
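For example, a separate status column can make positioned update deterministic; the table and column names are illustrative:
EXEC SQL PREPARE cur
SELECT id FROM orders WHERE processed = 0;
EXEC SQL PREPARE upd
UPDATE orders SET processed = 1 WHERE CURRENT OF cur;
Because the positioned update sets processed to 1, the updated row no longer matches the search criteria (processed = 0) and cannot reappear in the fetch loop.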
Note that in an open cursor user changes do not become visible unless they are committed
within the same database session.
Transactions
Stored procedures use transactions like any other interface to the database uses transactions.
A transaction may be committed or rolled back either inside the procedure or outside the
procedure. Inside the procedure a commit or roll back is done using the following syntax:
EXEC SQL COMMIT WORK;
EXEC SQL ROLLBACK WORK;
These statements end the previous transaction and start a new one.
If a transaction is not committed inside the procedure, it may be ended externally using:
■ A Solid API
■ Another stored procedure
■ Autocommit, if the connection has the AUTOCOMMIT switch set to ON
Note that when a connection has autocommit activated, autocommit is not forced inside a
procedure; the commit is done when the procedure exits.
Note that transactions are not tied to procedures or other statements. A commit or rollback
therefore does NOT release any resources allocated in a procedure.
Notes on SQL
■ There is no restriction on the SQL statements used. Any valid SQL statement can be
used inside a stored procedure, including DDL and DML statements.
■ Cursors may be declared anywhere in a stored procedure. Cursors that are certainly
going to be used are best prepared directly following the declare section.
■ Cursors that are used inside control structures, and are therefore not always needed, are
best declared at the point where they are activated, to limit the number of open cursors
and hence the memory usage.
■ The cursor name is an undeclared identifier, not a variable; it is used only to reference
the query. You cannot assign values to a cursor name or use it in an expression.
■ Cursors may be re-executed repeatedly without having to re-prepare them. Note that this
can have a serious influence on performance; repetitively preparing cursors on similar
statements may decrease the performance by around 40% in comparison to re-execut-
ing already prepared cursors!
■ Any SQL statement will have to be preceded by the keywords EXEC SQL.
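The point above about re-execution can be sketched as follows: an insert cursor is prepared once and then executed repeatedly with different parameter values (the table name is illustrative):
DECLARE i INTEGER;
i := 0;
EXEC SQL PREPARE ins INSERT INTO table1 (id) VALUES (?);
WHILE i < 100 LOOP
EXEC SQL EXECUTE ins USING (i);
i := i + 1;
END LOOP
EXEC SQL CLOSE ins;
EXEC SQL DROP ins;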
Procedure Privileges
Stored procedures are owned by the creator, and are part of the creator’s schema. Users who
need to run stored procedures in other schemas need to be granted EXECUTE privilege on
the procedure:
GRANT EXECUTE ON Proc_name TO { USER | ROLE };
All database objects accessed within the granted procedure, even subsequently called proce-
dures, are accessed according to the rights of the owner of the procedure. No special grants
are necessary.
Since the procedure is run with the privileges of the creator, the procedure not only has the
creator’s rights to access objects such as tables, but also uses the creator’s schema and cata-
log. For example, suppose that user ’Sally’ runs a procedure named ’Proc1’ created by user
’Jasmine’. Suppose also that both Sally and Jasmine have a table named ’table1’. By default,
the stored procedure Proc1 will use the table1 that is in Jasmine’s schema, even if Proc1 was
called by user Sally.
See also “ACCESS RIGHTS” on page 3-21 for more information about privileges and
remote stored procedure calls.
Using Triggers
A trigger activates stored procedure code, which a Solid server automatically executes when
a user attempts to change the data in a table. You may create one or more triggers on a table,
with each trigger defined to activate on a specific INSERT, UPDATE, or DELETE com-
mand. When a user modifies data within the table, the trigger that corresponds to the com-
mand is activated.
Triggers enable you to:
■ Implement referential integrity constraints, such as ensuring that a foreign key value
matches an existing primary key value.
■ Prevent users from making incorrect or inconsistent data changes by ensuring that
intended modifications do not compromise a database's integrity.
■ Take action based on the value of a row before or after modification.
■ Transfer much of the logic processing to the backend, reducing the amount of work that
your application needs to do as well as reducing network traffic.
Note
A trigger itself can execute DML statements; the behavior described above applies to those
statements as well.
Creating Triggers
Use the CREATE TRIGGER statement (described below) to create a trigger. You can dis-
able an existing trigger or all triggers defined on a table by using the ALTER TRIGGER
statement. For details, read “Altering Trigger Attributes” on page 3-70. The ALTER TRIG-
GER statement causes a Solid server to ignore the trigger when an activating DML state-
ment is issued. With this statement, you can also enable a trigger that is currently inactive.
To drop a trigger from the system catalog, use DROP TRIGGER. For details, read “Drop-
ping Triggers” on page 3-69.
The general form of the CREATE TRIGGER statement is:
CREATE TRIGGER trigger_name ON table_name
{ BEFORE | AFTER } { INSERT | UPDATE | DELETE }
[ REFERENCING { OLD | NEW } column_name AS col_identifier ]
[, REFERENCING column_reference ]
BEGIN trigger_body END
Trigger_name
The trigger_name can contain up to 254 characters.
■ UPDATE Operation
The BEFORE clause can verify that modified data follows integrity constraint rules
before processing the UPDATE. If the REFERENCING NEW AS new_col_identifier
clause is used with the BEFORE UPDATE clause, then the updated values are available
to the triggered SQL statements. In the trigger, you can set the default column values or
derived column values before performing an UPDATE.
The AFTER clause can perform operations on newly modified data. For example, after
a branch address update, the sales for the branch can be computed.
If the REFERENCING OLD AS old_col_identifier clause is used with the AFTER
UPDATE clause, then the values that existed prior to the invoking update are accessible
to the triggered SQL statements.
■ INSERT Operation
The BEFORE clause can verify that new data follows integrity constraint rules before
performing an INSERT. Column values passed as parameters are visible to the trig-
gered SQL statements but the inserted rows are not. In the trigger, you can set default
column values or derived column values before performing an INSERT.
The AFTER clause can perform operations on newly inserted data. For example, after
insertion of a sales order, the total order can be computed to see if a customer is eligible
for a discount.
Column values are passed as parameters and inserted rows are visible to the triggered
SQL statements.
■ DELETE Operation
The BEFORE clause can perform operations on data about to be deleted. Column
values passed as parameters and the rows that are about to be deleted are visible to the
triggered SQL statements.
The AFTER clause can be used to confirm the deletion of data. Column values passed
as parameters are visible to the triggered SQL statements. Please note that the deleted
rows are no longer visible to the triggered SQL statements.
INSERT specifies that the trigger is activated by an INSERT on the table. Loading n rows of
data is considered as n inserts.
Note
There may be some performance impact if you try to load the data with triggers enabled.
Depending on your business need, you may want to disable the triggers before loading and
enable them after loading. For details, see the section “Altering Trigger Attributes” on page
3-70.
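For example, a bulk load might be wrapped as follows; the trigger name is illustrative, and the exact syntax of the ALTER TRIGGER statement is described in "Altering Trigger Attributes":
ALTER TRIGGER update_total_bought SET DISABLED;
-- ... load the data ...
ALTER TRIGGER update_total_bought SET ENABLED;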
Note
The above example can lead to recursive trigger execution, which you should try to avoid.
Table_name
The table_name is the name of the table on which the trigger is created. Solid server allows
you to drop a table that has dependent triggers defined on it. When you drop a table, all
dependent objects, including triggers, are dropped. Be aware that you may still get run-time
errors. For example, assume you create two tables A and B. If a procedure SP-B inserts data
into table A, and table A is then dropped, a user will receive a run-time error if table B has a
trigger which invokes SP-B.
Trigger_body
The trigger_body contains the statement(s) to be executed when a trigger fires. The rules for
defining the body of a trigger are the same as the rules for defining the body of a stored pro-
cedure. Read “Stored Procedures” on page 3-1 for details on creating a stored procedure
body.
A trigger body may also invoke any procedure registered with a Solid server. Solid proce-
dure invocation rules follow standard procedure invocation practices.
You must explicitly check for business logic errors and raise an error.
REFERENCING Clause
This clause is optional when creating a trigger on an INSERT/UPDATE/DELETE opera-
tion. It provides a way to reference the current column identifiers in the case of INSERT and
DELETE operations, and both the old column identifier and the new updated column identi-
fier by aliasing the column(s) on which an UPDATE operation occurs.
You must specify the OLD or NEW col_identifier to access it. A Solid server does not pro-
vide access to the col_identifier unless you define it using the REFERENCING subclause.
Use the OLD AS clause to alias the table's old identifier as it exists before the UPDATE. Use
the NEW AS clause to alias the table's new identifier as it exists after the UPDATE.
If you reference both the old and new values of the same column, you must use a different
col_identifier.
Each column that is referenced as NEW or OLD should have a separate REFERENCING
subclause.
The statement atomicity in a trigger is such that operations made in a trigger are visible to
the subsequent SQL statements inside the trigger. For example, if you execute an INSERT
statement in a trigger and then also perform a select in the same trigger, then the inserted
row is visible.
In the case of AFTER trigger, an inserted row or an updated row is visible in the AFTER
insert trigger, but a deleted row cannot be seen for a select performed within the trigger. In
the case of a BEFORE trigger, an inserted or updated row is invisible within the trigger and
a deleted row is visible. In the case of an UPDATE, the pre-update values are available in a
BEFORE trigger.
The table below summarizes the statement atomicity in a trigger, indicating whether the row
is visible to the SELECT statement in the trigger body.
Note
The triggers are applied to each row. This means that if there are ten inserts, a trigger is exe-
cuted ten times.
■ You cannot define triggers on a view (even if the view is based on a single table).
■ You cannot alter a table that has a trigger defined on it when the dependent columns are
affected.
■ You cannot create a trigger on a system table.
■ You cannot execute triggers that reference dropped or altered objects. To prevent this
error:
■ Recreate any referenced object that you drop.
■ Restore any referenced object you changed back to its original state (known by the
trigger).
■ You can use reserved words in trigger statements if they are enclosed in double quotes.
For example, the following CREATE TRIGGER statement references a column named
"data", which is a reserved word.
"CREATE TRIGGER TRIG1 ON TMPT BEFORE INSERT
REFERENCING NEW "DATA" AS NEW_DATA
BEGIN
END"
You can nest triggers up to 16 levels deep (the limit can be changed using a configuration
parameter). If a trigger gets into an infinite loop, a Solid server detects this recursive action
when the 16-level nesting (or system parameter) maximum is reached and returns an error to
the user. For example, you could activate a trigger by attempting to insert into the table T1
and the trigger could call a stored procedure which also attempts to insert into T1, recur-
sively activating the trigger.
If a set of nested triggers fails at any time, a Solid server rolls back the statement which orig-
inally activated the triggers.
For example, if a customer's invoice used to be for $100 and it is changed to $150, then $100
is subtracted from and $150 is added to the "total_bought" field. By properly using the
REFERENCING clause, the trigger can "see" both the old and new values of the price
column, thereby allowing the update of the total_bought column.
Note that the column aliases created by the REFERENCING clause are valid only within the
trigger. Let's look at a pseudo-code example below:
CREATE TRIGGER pseudo_code_to_add_tax ON invoices
AFTER UPDATE
REFERENCING OLD total_price AS old_total_price,
REFERENCING NEW total_price AS new_total_price
BEGIN
EXEC SQL PREPARE update_cursor
UPDATE customers
SET total_bought = total_bought - old_total_price
+ new_total_price;
END
This example is "pseudo-code"; a real trigger would require some changes and additions
(such as code to execute, close, and drop the cursor). A complete, valid SQL script for this
example is provided below.
-- When an invoice is updated, adjust the total_bought
-- in the customers table.
"CREATE TRIGGER update_total_on_update ON invoices
AFTER UPDATE
REFERENCING OLD invoice_total AS old_invoice_total,
REFERENCING NEW invoice_total AS new_invoice_total,
REFERENCING NEW customer_id AS new_customer_id
BEGIN
EXEC SQL PREPARE upd_curs
UPDATE customers
SET total_bought = total_bought - ? + ?
WHERE customers.customer_id = ?;
EXEC SQL EXECUTE upd_curs
USING (old_invoice_total, new_invoice_total,
new_customer_id);
EXEC SQL CLOSE upd_curs;
EXEC SQL DROP upd_curs;
END";
-- When a new invoice is created, we update the total_bought
-- in the customers table.
"CREATE TRIGGER update_total_bought ON invoices
AFTER INSERT
REFERENCING NEW invoice_total AS new_invoice_total,
REFERENCING NEW customer_id AS new_customer_id
BEGIN
EXEC SQL PREPARE ins_curs
UPDATE customers
SET total_bought = total_bought + ?
WHERE customers.customer_id = ?;
EXEC SQL EXECUTE ins_curs
USING (new_invoice_total, new_customer_id);
EXEC SQL CLOSE ins_curs;
EXEC SQL DROP ins_curs;
END";
-- Insert a sample customer.
INSERT INTO customers (customer_id, total_bought)
VALUES (1000, 0.0);
-- Insert invoices for a customer; the INSERT trigger will
-- update the total_bought in the customers table.
INSERT INTO invoices (customer_id, invoice_id, invoice_total)
VALUES (1000, 5555, 234.00);
INSERT INTO invoices (customer_id, invoice_id, invoice_total)
VALUES (1000, 5789, 199.0);
-- Make sure that the INSERT trigger worked.
SELECT * FROM customers;
-- Now update an invoice; the total_bought in the customers
-- table will also be updated by the UPDATE trigger.
UPDATE invoices SET invoice_total = 250.00
WHERE customer_id = 1000 AND invoice_id = 5555;
SELECT * FROM customers;
A trigger that causes the same trigger to execute again on the same record is recursive. For
example, a delete trigger would be recursive if it tries to delete the same record whose
deletion fired the trigger.
If the database server were to allow recursion in triggers, then the server might go into an
"infinite loop" and never finish executing the statement that fired the trigger. A concurrency
conflict error occurs when a trigger executes an operation that "competes with" the state-
ment that fired the trigger by trying to do the same type of action (for example, delete)
within the same SQL statement. For example, if you create a trigger that is supposed to be
fired when a record is deleted, and if that trigger tries to delete the same record whose dele-
tion fired the trigger, then there are in essence two different "simultaneous" delete state-
ments "competing" to delete the record; this results in a concurrency conflict. The following
section provides an example of a defective delete trigger.
CREATE TRIGGER do_not_do_this ON employees
AFTER DELETE
REFERENCING OLD name AS old_name
BEGIN
-- Defective: delete the records of the employee's dependents.
EXEC SQL PREPARE del_curs
DELETE FROM employees WHERE name = ?;
-- ... execute, close, and drop the cursor ...
END;
The DELETE statement in this trigger deletes not only the
employee's dependents, but also the employee himself, since his name meets the criteria in
the WHERE clause.
Every time an attempt is made to delete the employee's record, this action fires the trigger
again. The code then recursively keeps trying to delete the employee by again firing the trig-
ger, and again trying to delete. If the database server did not prohibit this or detect the situa-
tion, the server could go into an infinite loop. If the server detects this situation, it will give
you an appropriate error, such as "Too many nested triggers."
A similar situation can happen with UPDATE. Assume that a trigger adds sales tax every
time that a record is updated. Here's an example that causes a recursion error:
CREATE TRIGGER do_not_do_this_either ON invoice
AFTER UPDATE
REFERENCING NEW total_price AS new_total_price
BEGIN
-- Add 8% sales tax.
EXEC SQL PREPARE upd_curs1
UPDATE invoice SET total_price = 1.08 * total_price
WHERE ...;
-- ... execute, close, and drop the cursor...
END;
In this scenario, customer Ann Jones calls up to change her order; the new price (with sales
tax) is calculated by multiplying the new subtotal by 1.08. The record is updated with the
new total price; each time the record is updated, the trigger is fired, so updating the record
once causes the trigger to update it again, and the updates repeat in an infinite loop.
If AFTER triggers can cause recursion or looping, what happens with BEFORE triggers?
The answer is that, in some cases, BEFORE triggers can cause concurrency problems. Let's
return to the first example of the trigger that deleted medical coverage for employees and
their dependents. If the trigger were a BEFORE trigger (rather than an AFTER trigger), then
just before the employee is deleted, we would execute the trigger, which in this case deletes
everyone named John Smith. After the trigger is executed, the engine resumes its original
task of dropping employee John Smith himself, but the server finds either he isn't there or
that his record cannot be deleted because it has already been marked for deletion -- in other
words, there is a concurrency conflict because there are two separate efforts to delete the
same record.
The list below gives, for each combination of trigger mode, triggering operation, and
trigger action, the result with optimistic and with pessimistic locking.
AFTER INSERT; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Record is updated.
Pessimistic: Record is updated.
BEFORE INSERT; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Record is not updated, since the WHERE condition of the UPDATE within
the trigger body returns a NULL resultset (the desired row is not yet inserted in the table).
Pessimistic: Record is not updated, for the same reason.
AFTER INSERT; trigger action: DELETE the same row that is being inserted
Optimistic: Record is deleted.
Pessimistic: Record is deleted.
BEFORE INSERT; trigger action: DELETE the same row that is being inserted
Optimistic: Record is not deleted, since the WHERE condition of the DELETE within
the trigger body returns a NULL resultset (the desired row is not yet inserted in the table).
Pessimistic: Record is not deleted, for the same reason.
AFTER INSERT; trigger action: INSERT a row
Optimistic: Too many nested triggers.
Pessimistic: Too many nested triggers.
BEFORE INSERT; trigger action: INSERT a row
Optimistic: Too many nested triggers.
Pessimistic: Too many nested triggers.
AFTER UPDATE; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Generates Solid Table Error: Too many nested triggers.
Pessimistic: Generates Solid Table Error: Too many nested triggers.
BEFORE UPDATE; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Record is updated, but does not get into a nested loop because the WHERE
condition in the trigger body returns a NULL resultset and no rows are updated to fire
the trigger recursively.
Pessimistic: Same behavior as with optimistic locking.
AFTER UPDATE; trigger action: DELETE the same row that is being updated
Optimistic: Record is deleted.
Pessimistic: Record is deleted.
BEFORE UPDATE; trigger action: DELETE the same row that is being updated
Optimistic: Concurrency conflict error.
Pessimistic: Concurrency conflict error.
AFTER DELETE; trigger action: INSERT a row with the same value
Optimistic: Same record is inserted after deleting.
Pessimistic: Hangs at the time of firing the trigger.
BEFORE DELETE; trigger action: INSERT a row with the same value
Optimistic: Same record is inserted after deleting.
Pessimistic: Hangs at the time of firing the trigger.
AFTER DELETE; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Record is deleted.
Pessimistic: Record is deleted.
BEFORE DELETE; trigger action: UPDATE the same row by adding a number to the value
Optimistic: Record is deleted.
Pessimistic: Record is deleted.
AFTER DELETE; trigger action: DELETE the same row
Optimistic: Too many nested triggers.
Pessimistic: Too many nested triggers.
BEFORE DELETE; trigger action: DELETE the same row
Optimistic: Concurrency conflict error.
Pessimistic: Concurrency conflict error.
Consider the first case in the list above: a trigger fires AFTER an INSERT operation is done. The body
of the trigger contains statements that update the same row as was inserted (that is, the same
row as the one that fired the trigger). If the lock type is "optimistic", then the result will be
that the record gets updated. (Because there is no conflict, the locking [optimistic versus pes-
simistic] does not make a difference).
Note that in this case there is no recursion issue, even though we update the same row that
we just inserted. The action that "fires" the trigger is not the same as the action taken inside
the trigger, and so we do not create a recursive/looping situation.
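A sketch of this first case follows; the trigger updates the row that was just inserted, which does not re-fire the INSERT trigger (table and column names are illustrative):
"CREATE TRIGGER upd_after_ins ON table1
AFTER INSERT
REFERENCING NEW id AS new_id
BEGIN
-- Add a number to the value of the row that was just inserted.
EXEC SQL PREPARE u UPDATE table1 SET val = val + 1 WHERE id = ?;
EXEC SQL EXECUTE u USING (new_id);
EXEC SQL CLOSE u;
EXEC SQL DROP u;
END";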
Here's another example from the list: a BEFORE INSERT trigger whose action is an
UPDATE. We try to insert a record, but before the insertion takes place the trigger is run. In
this case, the trigger tries to update the record (for example, to add sales tax to it). Since the
record is not yet inserted, however, the UPDATE command inside the trigger does not find
the record, and never adds the sales tax. Thus the result is the same as if the trigger had never
fired. There is no error message, so you may not realize immediately that your trigger does
not do what you intended.
ROLLBACK WORK;
Note
If the row that is updated or deleted is identified by a unique key, instead of an ordinary
column (as in the example above), Solid generates the following error message: 1001: key
value not found.
To avoid recursion and concurrency conflict errors, be sure to check the application logic
and take precautions to ensure the application does not cause two transactions to update or
delete the same row.
Error Handling
If a procedure returns an error to a trigger, the trigger causes its invoking DML command to
fail with an error. To automatically return errors during the execution of a DML statement,
you must use the WHENEVER SQLERROR ABORT statement in the trigger body. Otherwise,
errors must be checked explicitly within the trigger body after each procedure call or SQL
statement.
To report errors from user-written business logic in the trigger body, use the RETURN
SQLERROR statement. For details, see “Raising Errors From Inside Triggers” on page
3-62.
If RETURN SQLERROR is not specified, then the system returns a default error message
when the SQL statement execution fails. Any changes to the database due to the current
DML statement are undone, and the transaction is still active. In other words, a failed
trigger execution rolls back the currently executing statement, not the transaction.
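As a minimal sketch (hypothetical table names; the form of the trigger body follows the trigger examples later in this chapter), a trigger that automatically aborts its invoking DML statement on any SQL error might look like this:

```sql
"CREATE TRIGGER ORDERS_BI ON ORDERS
BEFORE INSERT
BEGIN
    -- Automatically abort the invoking DML statement if any SQL
    -- statement below raises an error.
    EXEC SQL WHENEVER SQLERROR ABORT;
    EXEC SQL PREPARE C_LOG INSERT INTO ORDER_LOG (NOTE) VALUES ('inserting');
    EXEC SQL EXECUTE C_LOG;
    EXEC SQL CLOSE C_LOG;
    EXEC SQL DROP C_LOG;
END";
```

Without the WHENEVER SQLERROR ABORT line, the error from the logging statement would have to be checked explicitly after each call.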
Note
Triggered SQL statements are a part of the invoking transaction. If the invoking DML state-
ment fails due to either the trigger or another error that is generated outside the trigger, all
SQL statements within the trigger are rolled back along with the failed invoking DML com-
mand.
It is the responsibility of the invoking transaction to commit or rollback any DML state-
ments executed within the trigger's procedure. However, this rule does not apply if the DML
command invoking the trigger fails as a result of the associated trigger. In this case, any
DML statements executed within that trigger's procedure are automatically rolled back.
The COMMIT and ROLLBACK statements cannot be executed within the trigger body;
they must be executed outside it. Executing COMMIT or ROLLBACK within the trigger
body, or within a procedure called from the trigger body or from another trigger, causes a
run-time error.
-- PURPOSE:
-- This sample shows how to use triggers to enforce referential
-- integrity during DELETE.
-- BACKGROUND:
-- If a user attempts to delete a "parent" record and that parent has
-- "child" records that refer to it, then either the deletion should
-- be prevented, or else the children should also be deleted when the
-- parent is deleted. (This is sometimes called "cascading delete"
-- because deletion of a parent may set off a "cascade" of deletes
-- of children, grandchildren, etc.)
-- This script demonstrates the use of triggers to implement
-- cascading delete to ensure referential integrity.
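The cascading delete itself might be sketched as follows (hypothetical PARENT/CHILD tables and column names; the cursor-based body follows the style of the procedure examples later in this chapter):

```sql
"CREATE TRIGGER PARENT_BD ON PARENT
BEFORE DELETE
REFERENCING OLD ID AS OLD_ID
BEGIN
    -- Delete all child rows that refer to the parent being deleted.
    EXEC SQL PREPARE C_DEL DELETE FROM CHILD WHERE PARENT_ID = ?;
    EXEC SQL EXECUTE C_DEL USING (OLD_ID);
    EXEC SQL CLOSE C_DEL;
    EXEC SQL DROP C_DEL;
END";
```

A similar trigger on CHILD would cascade to grandchildren, and so on.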
Here is an example of using triggers to check referential integrity during an INSERT state-
ment.
-- PURPOSE:
-- This sample shows how to use triggers to enforce referential
-- integrity during INSERT.
-- BACKGROUND:
-- If a user attempts to insert a "child" record and that child has
-- no "parent", then the insertion should be prevented.
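A minimal sketch of such a trigger (hypothetical PARENT/CHILD tables; the exact cursor and FETCH usage should be checked against the procedure examples elsewhere in this guide):

```sql
"CREATE TRIGGER CHILD_BI ON CHILD
BEFORE INSERT
REFERENCING NEW PARENT_ID AS NEW_PID
BEGIN
    DECLARE CNT INTEGER;
    -- Count the parent rows that the new child would refer to.
    EXEC SQL PREPARE C_CNT SELECT COUNT(*) FROM PARENT WHERE ID = ?;
    EXEC SQL EXECUTE C_CNT USING (NEW_PID) INTO (CNT);
    EXEC SQL FETCH C_CNT;
    EXEC SQL CLOSE C_CNT;
    EXEC SQL DROP C_CNT;
    IF CNT = 0 THEN
        -- No parent: reject the insertion.
        RETURN SQLERROR 'Child row has no parent';
    END IF
END";
```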
Note that when you use triggers to enforce referential integrity rules (instead of Solid
server's declarative referential integrity), no cycle or conflict checks are performed.
Referential integrity checks on the invoking DML statement are always made after a
BEFORE trigger is fired but before an AFTER trigger is fired.
If a user does not specify the RETURN SQLERROR statement in the trigger body, then all
trapped SQL errors are raised with a default error_string determined by the system. For
details, see the appendix, "Error Codes" in the documentation for your Solid product.
Trigger Example
This example shows how simple triggers work. It contains some triggers that work correctly
and some triggers that contain errors. For the successful triggers in the example, a table
(named trigger_test) is created and six triggers are created on that table. Each trigger, when
fired, inserts a record into another table (named trigger_output). After performing the DML
statements (INSERT, UPDATE, and DELETE) that fire the triggers, the results of the trig-
gers are displayed by selecting all records from the trigger_output table.
DROP TABLE TRIGGER_TEST;
DROP TABLE TRIGGER_ERR_TEST;
DROP TABLE TRIGGER_ERR_B_TEST;
DROP TABLE TRIGGER_ERR_A_TEST;
DROP TABLE TRIGGER_OUTPUT;
COMMIT WORK;
-- Create a table that has a column for each of the possible trigger
-- types (for example, BI = a "Before" trigger on INSERT
-- operations).
CREATE TABLE TRIGGER_TEST(
XX VARCHAR,
BI VARCHAR, -- BI = Before Insert
AI VARCHAR, -- AI = After Insert
BU VARCHAR, -- BU = Before Update
AU VARCHAR, -- AU = After Update
BD VARCHAR, -- BD = Before Delete
AD VARCHAR -- AD = After Delete
);
COMMIT WORK;
BI VARCHAR,
AI VARCHAR,
BU VARCHAR,
AU VARCHAR,
BD VARCHAR,
AD VARCHAR
);
------------------------------------------------------------------
-- Successful triggers
------------------------------------------------------------------
-- Create a "Before" trigger on insert operations. When a record is
-- inserted into the table named trigger_test, then this trigger is
-- fired. When this trigger is fired, it inserts a record into the
-- "trigger_output" table to show that the trigger actually executed.
-----------------------------------------------------------------
-- This attempt to create a trigger will fail. The statement
-- specifies the wrong data type for the error variable named
-- ERRSTR.
-----------------------------------------------------------------
-----------------------------------------------------------------
-- Trigger that returns an error message.
-----------------------------------------------------------------
"CREATE TRIGGER TRIGGER_ERR_BI ON TRIGGER_ERR_B_TEST
BEFORE INSERT
REFERENCING NEW BI AS NEW_BI
BEGIN
-- ...
RETURN SQLERROR 'Error in TRIGGER_ERR_BI';
END";
COMMIT WORK;
-----------------------------------------------------------------
-- Success trigger tests. These Insert, Update, and Delete
-- statements will force the triggers to fire. The SELECT
-- statements will show you the records in the trigger_test and
-- trigger_output tables.
-----------------------------------------------------------------
COMMIT WORK;
-- Show that the triggers did run and did add values to the
-- trigger_output table. You should see 6 records, one for
-- each of the triggers that executed. The 6 triggers are:
-- BI, AI, BU, AU, BD, AD.
-----------------------------------------------------------------
-- Error trigger test
-----------------------------------------------------------------
Dropping Triggers
To drop a trigger defined on a table, use the DROP TRIGGER command. This command
drops the trigger from the system catalog.
You must be the owner of a table, or a user with DBA authority, to drop a trigger from the
table.
The syntax is:
DROP TRIGGER [[catalog_name.]schema_name.]trigger_name
DROP TRIGGER trigger_name
DROP TRIGGER schema_name.trigger_name
DROP TRIGGER catalog_name.schema_name.trigger_name
The trigger_name is the name of the trigger that is defined on the table.
If the trigger is part of a schema, indicate the schema name as in:
schema_name.trigger_name
alter_trigger :=
ALTER TRIGGER trigger_name_attr SET ENABLED | DISABLED
trigger_name_attr := [[catalog_name.]schema_name.]trigger_name
Example
ALTER TRIGGER trig_on_employee SET ENABLED;
Trigger Functions
The following system-supported trigger stack functions are useful for analysis and debug-
ging purposes.
Note
The trigger stack refers to those triggers that are cached, regardless of whether they are exe-
cuted or detected for execution. Trigger stack functions can be used in an application pro-
gram like any other function.
is executed. (It is also possible to execute a START AFTER COMMIT in autocommit mode,
but there is rarely a reason to do this.)
Below is an example that shows the use of a START AFTER COMMIT statement inside a
transaction.
-- Any valid SQL statement(s)...
...
-- Creation phase. The function my_proc() is not actually called here.
START AFTER COMMIT NONUNIQUE CALL my_proc(x, y);
...
-- Any valid SQL statement(s)...
A START AFTER COMMIT does not execute unless and until the transaction is success-
fully committed. If the transaction containing the START AFTER COMMIT is rolled back,
then the body of the START AFTER COMMIT is not executed. If you want to propagate the
updated data from a replica to a master, then this is an advantage because you only want the
data propagated if it is committed. If you were to use triggers to start the propagation, the
data would be propagated before it was committed.
The START AFTER COMMIT command applies only to the current transaction, i.e. the one
that the START AFTER COMMIT command was issued inside. It does not apply to subse-
quent transactions, or to any other transactions that are currently open in other connections.
The START AFTER COMMIT command allows you to specify only one SQL statement to
be executed when the COMMIT occurs. However, that one SQL statement may be a call to a
stored procedure, and that stored procedure may have many statements, including calls to
other stored procedures. Furthermore, you may have more than one START AFTER COM-
MIT command per transaction. The body of each of these START AFTER COMMIT state-
ments will be executed when the transaction is committed. However, these bodies will run
independently and asynchronously; they will not necessarily execute in the same order as
their corresponding START AFTER COMMIT statements, and they are likely to have over-
lapping execution (there is no guarantee that one will finish before the next one starts).
A common use of START AFTER COMMIT is to help implement "Sync Pull Notify"
("Push Synchronization"), which is discussed in the Solid SmartFlow Guide.
If the body of your START AFTER COMMIT is a call to a stored procedure, that procedure
may be local or it may be remote on one remote replica (or master).
If you are using Sync Pull Notify, then you may wish to call the same procedure on many
replicas. To do this, you must use a slightly indirect method. The simplest method is to write
one local procedure that calls many procedures on replicas. For example, if the body of the
START AFTER COMMIT statement is "CALL my_proc", then you could write my_proc to
be similar to the following:
CREATE PROCEDURE my_proc
BEGIN
CALL update_inventory(x) AT replica1;
CALL update_inventory(x) AT replica2;
CALL update_inventory(x) AT replica3;
END;
This approach works fine if your list of replicas is static. However, if you expect to add new
replicas in the future, you may find it more convenient to update "groups" of replicas based
on their properties. This allows you to add new replicas with specific properties and then
have existing stored procedures operate on those new replicas. This is done by making use of
two features: the FOR EACH REPLICA clause in START AFTER COMMIT, and the
DEFAULT clause in remote stored procedure calls.
If the FOR EACH REPLICA clause is used in START AFTER COMMIT, then the stmt will
be executed once for each replica that meets the conditions in the WHERE clause. Note that
the stmt is executed once FOR each replica, not once ON each replica. If there is no "AT
node-ref" clause in the CALL statement, then the stored procedure is called locally, i.e. on
the same server as the START AFTER COMMIT was executed on. To make sure that a
stored procedure is called once ON each replica, you must use the DEFAULT clause. The
typical way to do this is to create a local stored procedure that contains a remote procedure
call that uses the DEFAULT clause. For example, suppose that my_local_proc contains
the following:
CALL update_sales_statistics AT DEFAULT;
CALL my_local_proc;
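The paragraph below assumes a START AFTER COMMIT statement resembling the following sketch (reconstructed from the surrounding text; 'push' is the procedure name used below):

```sql
START AFTER COMMIT FOR EACH REPLICA WHERE location = 'India'
    UNIQUE CALL push;
```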
Procedure 'push' will be called once for each replica that has a property named 'location'
with value 'India'. Each time the procedure is called, "DEFAULT" will be set to the name of
that replica. Thus
CALL remoteproc AT DEFAULT;
will call the procedure on that particular replica.
You can set the replica properties in the master with the statement:
SET SYNC PROPERTY propname = 'value' FOR REPLICA replica_name;
for example
SET SYNC PROPERTY location = 'India' FOR REPLICA asia_hq;
For more details about the DEFAULT keyword, see the section titled "More on the
DEFAULT keyword..." below.
retry the failed statement. You must specify the number of seconds to wait between each
retry.
If you do not use the RETRY clause, the server attempts to execute the statement only once;
if the attempt fails, the statement is discarded. If, for example, the statement tries to call a
remote procedure, and the remote server is down (or cannot be contacted due to a network
problem), then the statement will not be executed and you will not get any error message.
Any statement, including the statement specified in a START AFTER COMMIT, executes in
a certain "context". The context includes such factors as the default catalog, the default
schema, etc. For a statement executed from within a START AFTER COMMIT, the state-
ment's context is based on the context at the time that the START AFTER COMMIT is exe-
cuted, not on the context at the time of the COMMIT WORK that actually causes the
statement inside START AFTER COMMIT to run. For example, in the statements below, 'CALL
FOO_PROC' is executed in the catalog foo_cat and schema foo_schema, not bar_cat and
bar_schema.
SET CATALOG FOO_CAT;
SET SCHEMA FOO_SCHEMA;
START AFTER COMMIT UNIQUE CALL FOO_PROC;
...
SET CATALOG BAR_CAT;
SET SCHEMA BAR_SCHEMA;
COMMIT WORK;
The UNIQUE/NONUNIQUE keywords determine whether the server tries to avoid issuing
the same command twice.
The UNIQUE keyword before <stmt> specifies that the statement is executed only if no
identical statement is already under execution or "pending" for execution. Statements are
compared with a simple string comparison, so for example 'call foo(1)' is different from
'call foo(2)'. Replicas are also taken into account in the comparison; in other words, UNIQUE
does not prevent the server from executing the same trigger call on different replicas. Note
that "unique" only blocks overlapping execution of statements; it does not prevent the same
statement from being executed again later if it is called again after the current invocation has
finished running.
NONUNIQUE means that duplicate statements can be executed simultaneously in the back-
ground.
Examples: The following statements are all considered different and are thus executed even
though each contains the UNIQUE keyword. (Name is a unique property of replica.)
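For instance, statements that differ only in the replica name in the WHERE clause are all executed (a sketch based on the surrounding text; replica names R1, R2, and R3 as in the paragraph below):

```sql
START AFTER COMMIT FOR EACH REPLICA WHERE name='R1' UNIQUE CALL myproc;
START AFTER COMMIT FOR EACH REPLICA WHERE name='R2' UNIQUE CALL myproc;
START AFTER COMMIT FOR EACH REPLICA WHERE name='R3' UNIQUE CALL myproc;
```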
But if the following statement is executed in the same transaction as the previous ones, and
if some of the replicas R1, R2, and R3 have the property color='blue', then the call is not
executed for those replicas again.
START AFTER COMMIT FOR EACH REPLICA WHERE color='blue' UNIQUE CALL myproc;
Note that uniqueness also does not prevent "automatic" execution from overlapping "man-
ual" execution. For example, if you manually execute a command to refresh from a particu-
lar publication, and if the master also calls a remote stored procedure to refresh from that
publication, the master won’t "skip" the call because a manual refresh is already running.
Uniqueness applies only to statements started by START AFTER COMMIT.
The START AFTER COMMIT statement can be used inside a stored procedure. For exam-
ple, suppose that you want to post an event if and only if a transaction completed success-
fully. You could write a stored procedure that would execute a START AFTER COMMIT
statement that would post the event if the transaction was committed (but not if it was rolled
back). Your code might look similar to the following:
This sample also contains an example of "receiving" and then using an event parameter. See
the stored procedure named "wait_on_event_e" in script #1.
-- This inserts a row into table1. The value inserted into the table is copied
-- from the parameter to the procedure.
"CREATE PROCEDURE inserter(i integer)
BEGIN
EXEC SQL PREPARE c_inserter INSERT INTO table1 (a) VALUES (?);
EXEC SQL EXECUTE c_inserter USING (i);
EXEC SQL CLOSE c_inserter;
EXEC SQL DROP c_inserter;
END";
-- When user2 calls this procedure, the procedure will wait until
-- the event named "e" is posted, and then it will call the
-- stored procedure that inserts a record into table1.
"CREATE PROCEDURE wait_on_event_e
BEGIN
-- Declare the variable that will be used to hold the event parameter.
-- Although the parameter was declared when the event was created, you
-- still need to declare it as a variable in the procedure that receives
-- that event.
DECLARE i INT;
WAIT EVENT
WHEN e (i) BEGIN
-- After we receive the event, insert a row into the table.
EXEC SQL PREPARE c_call_inserter CALL inserter(?);
EXEC SQL EXECUTE c_call_inserter USING (i);
EXEC SQL CLOSE c_call_inserter;
EXEC SQL DROP c_call_inserter;
END EVENT
END WAIT
END";
COMMIT WORK;
-- User2 should be waiting on event e, and should see the event after
-- we execute the stored procedure named sac_demo and then commit work.
-- Note that since START AFTER COMMIT statements are executed
-- asynchronously, there may be a slight delay between the COMMIT WORK
-- and the associated POST EVENT.
CALL sac_demo;
COMMIT WORK;
There are several important things that you should know about START AFTER COMMIT.
■ When the body of the deferred procedure call (START AFTER COMMIT) is executed,
it runs asynchronously in the background. This allows the server to immediately start
executing the next SQL command in your program without waiting for the deferred pro-
cedure call statement to finish. It also means that you do not have to wait for comple-
tion before disconnecting from the server. In most situations, this is an advantage.
However, in a few situations this may be a disadvantage. For example, if the body of
the deferred procedure call locks records that are needed by subsequent SQL com-
mands in your program, you may not appreciate having the body of the deferred proce-
dure call run in the background while your next SQL command runs in the foreground
and has to wait to access those same records. (For a way around this, see below...)
■ The statement to be executed will only be executed if the transaction is completed with
a COMMIT, not a ROLLBACK. If the entire transaction is explicitly rolled back, or if
the transaction is aborted and thus implicitly rolled back (due to a failed connection, for
example), then the body of the START AFTER COMMIT will not be executed.
■ Although the transaction in which the deferred procedure call occurs can be rolled back
(thus preventing the body of the deferred procedure call from running), the body of the
deferred procedure call cannot itself be rolled back if it has executed. Because it runs
asynchronously in the background, there is no mechanism for cancelling or rolling back
the body once it starts executing.
■ The statement in the deferred procedure call is not guaranteed to run until completion or
to be run as an "atomic" transaction. For example, if your server crashes, then the state-
ment will not resume executing the next time that the server starts, and any actions that
were completed before the server crashed may be kept. To prevent inconsistent data in
this type of situation, you must program carefully and make proper use of features like
referential constraints to ensure data integrity.
■ If you execute a START AFTER COMMIT statement in autocommit mode, then the
body of the START AFTER COMMIT will be executed "immediately" (i.e. as soon as
the START AFTER COMMIT is executed and automatically committed). At first, this
might seem useless -- why not just execute the body of the START AFTER COMMIT
directly? There are a few subtle differences, however. First, a direct call to my_proc is
synchronous; the server will not return control to you until the stored procedure has fin-
ished executing. If you call my_proc as the body of a START AFTER COMMIT, how-
ever, then the call is asynchronous; the server does not wait for the end of my_proc
before allowing you to execute the next SQL statement. In addition, because START
AFTER COMMIT statements are not truly executed "immediately" (i.e. at the time that
the transaction is committed) but may instead be delayed briefly if the server is busy,
you might or might not actually start running your next SQL statement before my_proc
even starts executing. It is rare for this to be desirable behavior. However, if you truly
want to launch an asynchronous stored procedure that will run in the background while
you continue onward with your program, it is valid to do START AFTER COMMIT in
autocommit mode.
■ If more than one deferred procedure call was executed in the same transaction, then the
bodies of all the START AFTER COMMIT statements all will run asynchronously.
This means that they will not necessarily run in the same order as you executed the
START AFTER COMMIT statements within the transaction.
■ The body of a START AFTER COMMIT must contain only one SQL statement. That
one statement may be a procedure call, however, and the procedure may contain multi-
ple SQL statements, including other procedure calls.
■ The START AFTER COMMIT statement applies only to the transaction in which it is
defined. If you execute START AFTER COMMIT in the current transaction, the body
of the deferred procedure call will be executed only when the current transaction is
committed; it will not be executed in subsequent transactions, nor will it be executed for
transactions done by any other connections. START AFTER COMMIT statements do
not create "persistent" behavior. If you would like the same body to be called at the end
of multiple transactions, then you will have to execute a "START AFTER COMMIT ...
CALL my_proc" statement in each of those transactions.
■ The "result" of the execution of the body of the deferred procedure call (START AFTER
COMMIT) statement is not returned in any way to the connection that ran the deferred
procedure call. For example, if the body of the deferred procedure call returns a value
that indicates whether an error occurred, that value will be discarded.
■ Almost any SQL statement may be used as the body of a START AFTER COMMIT
statement. Although calls to stored procedures are typical, you may also use UPDATE,
CREATE TABLE, or almost anything else. (We don’t advise putting another START
AFTER COMMIT statement inside a START AFTER COMMIT, however.) Note that a
statement like SELECT is generally useless inside a deferred procedure call because
the result is not returned.
■ Because the body is not executed at the time that the START AFTER COMMIT state-
ment is executed inside the transaction, START AFTER COMMIT statements rarely fail
unless the deferred procedure call itself or the body contains a syntax error or some
other error that can be detected without actually executing the body.
What if you don't want the next SQL statement in your program to run until deferred proce-
dure call statement has finished running? Here's a workaround:
1. At the end of the deferred procedure call statement (e.g. at the end of the stored proce-
dure called by the deferred procedure call statement), post an Event. (See the Program-
mer's Guide for a description of events.)
2. Immediately after you commit the transaction that specified the deferred procedure call,
call a stored procedure that waits on the event.
3. After the stored procedure call (to wait on the event), put the next SQL statement that
your program wants to execute.
For example, your program might look like the following:
...
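A sketch of such a program (hypothetical table and column names; myproc is assumed to post the event when it finishes):

```sql
-- Transaction containing the deferred procedure call.
START AFTER COMMIT UNIQUE CALL myproc;
COMMIT WORK;
-- Block until myproc posts its completion event.
CALL wait_for_sac_completion;
-- Now it is safe to touch the rows that myproc used.
UPDATE table1 SET a = a + 1;
COMMIT WORK;
```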
The stored procedure wait_for_sac_completion would wait for the event that myproc will
post. Therefore, the UPDATE statement won't run until after the deferred procedure call
statement finishes.
Note that this workaround is slightly risky. Since deferred procedure call statements are not
guaranteed to execute until completion, there is a chance that the stored procedure
wait_for_sac_completion will never get the event that it is waiting for.
Why would anyone design a command that may or may not run to completion? The answer
is that the primary purpose of the START AFTER COMMIT feature is to support "Sync Pull
Notify". The Sync Pull Notify feature allows a master server to notify its replica(s) that data
has been updated and that the replicas may request refreshes to get the new data. If this noti-
fication process fails for some reason, it would not result in data corruption; it would simply
mean that there would be a longer delay before the replica refreshes the data. Since a rep-
lica is always given all the data since its last successful refresh operation, a delay in receipt
of data does not cause the replica to permanently miss any data. For more details, see the
section of the SmartFlow Guide that documents the Sync Pull Notify feature.
Note: The statement inside the body of the START AFTER COMMIT may be any state-
ment, including SELECT. Remember, however, that the body of the START AFTER
COMMIT does not return its results anywhere, so a SELECT statement is generally not use-
ful inside a START AFTER COMMIT.
Note: If you are in auto-commit mode and execute START AFTER COMMIT..., then the
given statement is started immediately in the background. "Immediately" here actually
means "as soon as possible", because the statement is still executed asynchronously, when
the server has time to do it.
[Figure: a master database M1 with two replicas, R1 and R2]
To carry out Sync Pull Notify, follow the steps listed below:
1. Define a Procedure Pm1 in Master M1. In Procedure Pm1, include the statements:
EXECDIRECT CALL Pr1 AT R1;
EXECDIRECT CALL Pr1 AT R2;
(You will have one call for each interested Replica. Note that the replica name changes,
but typically the procedure name is the same on each replica.)
2. Define a Procedure Pr1 in Replica R1. If a master is to invoke the Pr1 in more than one
replica, then Pr1 should be defined for every replica that is of interest. See the replica
procedure example in the example section below.
3. Define a Trigger for all relevant DML operations, such as
a. INSERT
b. UPDATE and
c. DELETE
4. In each trigger body, embed the statement
EXECDIRECT START [UNIQUE] CALL Pm1;
5. Grant EXECUTE authority to the appropriate user on each replica. (A user Ur1 on the
replica should already be mapped to a corresponding user Um1 on the master. The user
Um1 must execute the statement
EXECDIRECT START [UNIQUE] CALL Pm1;
When Um1 calls the procedure remotely, the call actually executes with the privileges
of Ur1 on the replica.)
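Steps 3 and 4 might be sketched as follows on the master (hypothetical table name; a similar trigger would be created for each relevant DML operation):

```sql
"CREATE TRIGGER CUSTOMER_AU ON CUSTOMER
AFTER UPDATE
BEGIN
    -- Defer the notification procedure until this transaction commits.
    EXECDIRECT START UNIQUE CALL Pm1;
END";
```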
Example 1: A sales application has a table named CUSTOMER, which has a column named
SALESMAN. The master database contains information for all salespersons. Each salesper-
son has her own replica database, and that replica has only a "slice" of the master’s data;
specifically, each salesperson’s replica has the slice of data for that salesperson. For exam-
ple, salesperson Smith’s replica has only the data for salesperson Smith. If the salesperson
assigned to a particular customer changes, then the correct replicas should be notified. For
example, if XYZ corporation is reassigned from salesperson Smith to salesperson Jones,
then salesperson Jones’s replica database should add the data related to XYZ corporation,
and salesperson Smith’s replica should delete the data related to XYZ corporation. Here is
the code to update both replica databases:
Suppose that in the application, the user assigns all customers in sales area ‘CA’ to
salesperson Mike.
In the procedure CUST(), we force the salesperson’s replica to refresh from the data in the
master. This procedure CUST() is defined on all the replicas. If we call the procedure on
both the replica that the customer was reassigned to, and the replica that the customer was
reassigned from, then the procedure updates both of those replicas. Effectively, this will delete
the out-of-date data from the replica that no longer has this customer, and will insert the data
to the replica that is now responsible for this customer. If the publication and its parameters
are properly defined, we don’t need to write additional detailed logic to handle each possi-
ble operation, such as reassigning a customer from one salesperson to another; instead, we
simply tell each replica to refresh from the most current data.
NOTES:
It is possible to implement Sync Pull Notify without triggers; the application may call the
appropriate procedures directly. Triggers are a way to achieve Sync Pull Notify in
conjunction with the START AFTER COMMIT statement and remote procedure calls.
Sometimes, in the Sync Pull Notify process, a replica may have to exchange one extra
round trip of messages unnecessarily. This can happen if the procedure invoked on the
master tries to send a message to the replica that just sent the changes to the master, and
that causes a change in the "hot data" in the master. This can be avoided
with careful usage of the START AFTER COMMIT statement. Be careful not to create an
"infinite loop", where each update on the master leads to an immediate update on the rep-
lica, which leads to an immediate update on the master... The best way to avoid this is to be
careful when creating triggers on the replica that might "immediately" send updated data to
the master, which in turn "immediately" notifies the replica to refresh again.
The system table SYS_BACKGROUNDJOB_INFO has the following definition:
SYS_BACKGROUNDJOB_INFO
(
ID INTEGER NOT NULL,
STMT WVARCHAR NOT NULL,
USER_ID INTEGER NOT NULL,
ERROR_CODE INTEGER NOT NULL,
ERROR_TEXT WVARCHAR NOT NULL,
PRIMARY KEY(ID)
);
Only failed START AFTER COMMIT statements are logged into this table. If the statement
(e.g. a procedure call) starts successfully, no information is stored into the system tables.
Users can retrieve the information from the table SYS_BACKGROUNDJOB_INFO using
either an SQL SELECT query or the system procedure
SYS_GETBACKGROUNDJOB_INFO. The input parameter is the job ID. The returned val-
ues are: ID INTEGER, STMT WVARCHAR, USER_ID INTEGER, ERROR_CODE INTE-
GER, ERROR_TEXT WVARCHAR.
In addition, an event, SYS_EVENT_SACFAILED, is posted when a statement fails to start:
CREATE EVENT SYS_EVENT_SACFAILED (ENAME WVARCHAR,
POSTSRVTIME TIMESTAMP,
UID INTEGER,
NUMDATAINFO INTEGER,
TEXTDATA WVARCHAR);
The NUMDATAINFO field contains the job ID. The application can wait for this event and
use the job ID to retrieve the reason for the failure from the system table
SYS_BACKGROUNDJOB_INFO.
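For example, to inspect a failed statement by its job ID (the literal 123 stands in for the job ID delivered in NUMDATAINFO):

```sql
SELECT STMT, ERROR_CODE, ERROR_TEXT
FROM SYS_BACKGROUNDJOB_INFO
WHERE ID = 123;
```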
The system table SYS_BACKGROUNDJOB_INFO can be emptied with the admin com-
mand ‘cleanbgjobinfo’; only a DBA can delete the rows from the table.
Using Sequences
A sequence object is used to get sequence numbers in an efficient manner. The syntax is:
CREATE [DENSE] SEQUENCE sequence_name
Depending on how the sequence is created, there may or may not be holes in the sequence
(the sequence can be sparse or dense). Dense sequences guarantee that there are no holes in
the sequence numbers. The sequence number allocation is bound to the current transaction.
If the transaction rolls back, the sequence number allocations are also rolled back. The draw-
back of dense sequences is that the sequence is locked out from other transactions until the
current transaction ends.
If there is no need for dense sequences, a sparse sequence can be used. A sparse sequence
guarantees uniqueness of the returned values, but it is not bound to the current transaction. If
a transaction allocates a sparse sequence number and later rolls back, the sequence number
is simply lost.
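For example (hypothetical sequence names):

```sql
-- Dense: guarantees no holes, but the sequence is locked out from
-- other transactions until the current transaction ends.
CREATE DENSE SEQUENCE invoice_seq;

-- Sparse: guarantees only uniqueness; numbers allocated by a
-- transaction that rolls back are simply lost.
CREATE SEQUENCE session_seq;
```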
A sequence object can be used, for example, to generate primary key numbers. The advan-
tage of using a sequence object instead of a separate table is that the sequence object is spe-
cifically fine-tuned for fast execution and requires less overhead than normal update
statements.
Both dense and sparse sequence numbers start from 1.
After creating the sequence with the CREATE SEQUENCE statement, you can access the
Sequence object values by using the following constructs in SQL statements:
■ sequencename.CURRVAL which returns the current value of the sequence
■ sequencename.NEXTVAL which increments the sequence by one and returns the next
value.
An example of creating unique identifiers automatically for a table is given below:
INSERT INTO ORDERS (id, ...)
VALUES (order_seq.NEXTVAL, ...);
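A fuller sketch of this pattern is shown below. The ORDER_LINES table, the column names, and the literal values are illustrative, not part of the original example:

```sql
-- Create the (sparse) sequence used above.
CREATE SEQUENCE order_seq;

-- NEXTVAL allocates a new unique identifier.
INSERT INTO ORDERS (id, customer)
VALUES (order_seq.NEXTVAL, 'SMITH');

-- CURRVAL returns the value just allocated, which is
-- convenient for inserting related child rows.
INSERT INTO ORDER_LINES (order_id, item)
VALUES (order_seq.CURRVAL, 'WIDGET');
```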
Sequences can also be used inside stored procedures. The current sequence value can be
retrieved using the following statement:
EXEC SEQUENCE sequence_name.CURRENT INTO variable;
New sequence values can be retrieved using the following syntax:
EXEC SEQUENCE sequence_name.NEXT INTO variable;
It is also possible to set the current value of a sequence to a predefined value by using the
following syntax:
EXEC SEQUENCE sequence_name SET VALUE USING variable;
An example of using a stored procedure to retrieve a new sequence number is given below:
"CREATE PROCEDURE get_my_seq
RETURNS (val INTEGER)
BEGIN
EXEC SEQUENCE my_sequence.NEXT INTO val;
END";
Using Events
Event alerts are special objects in a Solid database. Events are used primarily to coordinate
timing, but may also be used to send a small amount of information. One connection "waits"
on an event until another connection "posts" that event.
More than one connection may wait on the same event. If multiple connections wait on the
same event, then all waiting connections are notified when the event is posted. A connection
may also wait on multiple events, in which case it will be notified when any of those events
are posted.
Events generally consume far fewer resources than polling does.
Users may create their own events. The server also has some built-in system events.
The server does not automatically post user-defined events; they must be posted by a stored
procedure. Similarly, events are received (waited on) in stored procedures. (You may also
wait on an event outside a stored procedure by using the ADMIN EVENT command.) When
an application calls a stored procedure that waits for a specific event to happen, the applica-
tion is blocked until the event is posted and received. In multi-threaded environments, sepa-
rate threads and connections can be used to access the database during the event wait.
An event has a name that identifies it and a set of parameters. The name can be any user-
specified alphanumeric string. An event object is created with the SQL statement:
CREATE EVENT event_name
[(parameter_name datatype
[, parameter_name datatype ...])]
The parameter list specifies parameter names and parameter types. The parameter types are
normal SQL types. Events are dropped with the SQL statement:
DROP EVENT event_name
Events are always posted inside stored procedures. Events are usually received inside stored
procedures. Special stored procedure statements are used to post and receive events.
The event is posted with the stored procedure statement
post_statement ::= POST EVENT event_name [(parameters)]
Event parameters must be local variables or parameters in the stored procedure where the
event is triggered. All clients that are waiting for the posted event will receive the event.
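As a sketch, a posting procedure might look like the following; the event name, parameter, and procedure name are illustrative:

```sql
CREATE EVENT new_order (order_id INTEGER);

"CREATE PROCEDURE announce_order (order_id INTEGER)
BEGIN
-- order_id is a parameter of this procedure, as
-- required for event parameters.
POST EVENT new_order (order_id);
END";
```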
Each connection has its own event queue. The events to be collected in the event queue are
specified with the stored procedure statement
wait_register_statement ::=
REGISTER EVENT event_name
Events are removed from the event queue with the stored procedure statement
UNREGISTER EVENT event_name
To make a procedure wait for an event to happen, the WAIT EVENT construct is used in the
stored procedure:
wait_event_statement::=
WAIT EVENT
[event_specification...]
END WAIT
event_specification::=
WHEN event_name [(parameters)] BEGIN
statements
END EVENT
You may also wait on an event by using the ADMIN EVENT command. You may use this at
the solsql command line, for example. Below is an example of the code required to register
for and wait on an event using ADMIN EVENT commands:
ADMIN EVENT 'register sys_event_hsbstateswitch';
ADMIN EVENT 'wait';
You may wait on either system-defined events or user-defined events. Note that you cannot
post events using ADMIN EVENT. For more details about ADMIN EVENT, see “ADMIN
EVENT” on page B-11.
Event Examples
This section includes two examples of using events. Example 1 is a pair of SQL scripts
that, when used together, show how to use events. Example 2 is a pair of SQL scripts,
including a stored procedure, that when used together wait for multiple events.
Example 1
In this first example of using events, we have two scripts. One script waits on an event and
the other script posts the event. Once the event has been posted, the event that is waiting will
finish waiting and move on to the next command.
To execute this example code, you will need two windows (for example, two FlowControl
windows) so that you can start the WaitOnEvent.sql script and then run the PostEvent.sql
script while WaitOnEvent.sql is waiting.
In this particular example, the stored procedure that waits does not actually do anything after
the event has posted; the script merely finishes the wait and returns to the caller. The caller
can then proceed to do whatever it wants, which in this case is to SELECT the record that
was inserted while we were waiting.
This example waits for only a single event, which is called "record_was_inserted". Later in
this chapter we will have another script that waits for multiple events using a single "WAIT".
============================= SCRIPT 1=============================
-- SCRIPT NAME: WaitOnEvent.sql
-- PURPOSE:
-- This is one of a set of scripts that demonstrates posting events
-- and waiting on events. The sequence of steps is shown below:
--
-- THIS SCRIPT (WaitOnEvent.sql) PostEvent.sql script
-- --------------------------------------------------
-- CREATE EVENT.
-- CREATE TABLE.
-- WAIT ON EVENT.
-- Insert a record into table.
-- Post event.
-- SELECT * FROM TABLE.
--
-- To perform these steps in the proper order, start running this
-- script FIRST, but remember that this script does not finish running
-- until after the PostEvent.sql script runs and posts the event.
-- Therefore, you will need two open windows (for example, two
-- FlowControl windows, or two terminal windows) so that you can leave
-- this running/waiting in one window while you run the other script
-- (PostEvent.sql) in the other window.
-- Create a simple event that has no parameters.
-- Note that this event (like any event) does not have any
-- commands or data; the event is just a label that allows both the
-- posting process and the waiting process to identify which event has
-- been posted (more than one event may be registered at a time).
-- As part of our demonstration of events, this particular event
-- will be posted by the other user after he or she has inserted a record.
CREATE EVENT record_was_inserted;
-- Create a table that the other script will insert into.
CREATE TABLE table1 (int_col INTEGER);
-- Create a procedure that will wait on an event
-- named "record_was_inserted".
-- The other script (PostEvent.sql) will post this event.
"CREATE PROCEDURE wait_for_event
BEGIN
-- If possible, avoid holding open a transaction. Note that in most
-- cases it's better to do the COMMIT WORK before the procedure,
-- not inside it. See "Waiting on Events" at the end of this example.
EXEC SQL COMMIT WORK;
-- Now wait for the event to be posted.
WAIT EVENT
WHEN record_was_inserted BEGIN
-- In this demo, we simply fall through and return from the
-- procedure call, and then we continue on to the next
-- statement after the procedure call.
END EVENT
END WAIT;
END";
-- Call the procedure to wait. Note that this script will not
-- continue on to the next step (the SELECT) until after the
-- event is posted.
CALL wait_for_event();
COMMIT WORK;
-- Display the record inserted by the other script.
SELECT * FROM table1;
For more information about the Bonsai Tree and preventing its growth, read the section "Reducing Bonsai Tree Size by
Committing Transactions," in the Solid Database Engine Administrator Guide.
In this example, we have put COMMIT WORK inside the procedure immediately before the
WAIT. However, this is not usually a good solution; putting the COMMIT or ROLLBACK
inside the "wait" procedure means that if the procedure is called as part of another transac-
tion, then the COMMIT or ROLLBACK will terminate that enclosing transaction and start a
new transaction, which is probably not what you want. If, for example, you were entering
data into a "child" table with a referential constraint and you are waiting for the referenced
data to be entered into the "parent" table, then breaking the transaction into 2 transactions
would simply cause the insert of the "child" record to fail because the parent would not have
been inserted yet.
The best strategy is to design your program so that you do not need to WAIT inside a trans-
action; instead, your "wait" procedure should be called between transactions if that is possi-
ble. By using events/waits, you have some control over the order in which things are done
and you can use this to help ensure that dependencies are met without actually putting
everything into a single transaction. For example, in an "asynchronous" situation you might
be waiting for both a child and a parent record to be inserted, and if your database server did
not have the "events" feature, then you might require that both records be inserted in the
same transaction so that you could ensure referential integrity.
By using events/waits, you can ensure that the insertion of the parent is done first; you can
then put the insertion of the child record in a second transaction because you can guarantee
that the parent will always be present when the child is inserted. (To be more precise, you
can ALMOST guarantee that the parent will be present when the child is inserted. If you
break up the insertions into 2 different transactions, then even if you ensure that the parent is
inserted before the child, there is a slight chance that the parent would be deleted before the
program tried to insert the child record.)
Example 2
The previous example showed how to wait on a single event. The next example shows how
to write a stored procedure that will wait on multiple events and that will finish the wait
when any one of those events is posted.
============================= SCRIPT 1=============================
-- SCRIPT NAME: MultiWaitExamplePart1.sql
-- PURPOSE:
-- This code shows how to wait on more than one event.
-- If you run this demonstration, you will see that a "wait" lasts only
-- until one of the events is received. Thus a wait on multiple events
-- is like an "OR" (rather than an "AND"); you wait until event1 OR
-- event2 OR ... occurs.
--
-- This demo uses 2 scripts, one of which waits for an event(s) and one
-- of which posts an event.
-- To run this example, you will need 2 windows (for example,
-- FlowControl windows).
-- 1) Run this script (MultiWaitExamplePart1.sql) in one window. After
-- this script reaches the point where it is waiting for the event, then
-- start Step 2.
-- 2) Run the script MultiWaitExamplePart2.sql in the other window.
-- This will post one of the events.
-- After the event is posted, the first script will finish.
WAIT EVENT
-- When the event named "event1" is received...
WHEN event1 BEGIN
eventresult := 'event1';
-- Insert a record into the event_records table showing that
-- this event was posted and received.
EXEC SQL PREPARE call_cursor
CALL insert_a_record(?);
EXEC SQL EXECUTE call_cursor USING (eventresult);
EXEC SQL CLOSE call_cursor;
EXEC SQL DROP call_cursor;
RETURN;
END EVENT
WHEN event2(i) BEGIN
eventresult := 'event2';
EXEC SQL PREPARE call_cursor2
CALL insert_a_record(?);
EXEC SQL EXECUTE call_cursor2 USING (eventresult);
EXEC SQL CLOSE call_cursor2;
EXEC SQL DROP call_cursor2;
RETURN;
END EVENT
WHEN event3(i, c) BEGIN
eventresult := 'event3';
EXEC SQL PREPARE call_cursor3
CALL insert_a_record(?);
EXEC SQL EXECUTE call_cursor3 USING (eventresult);
EXEC SQL CLOSE call_cursor3;
EXEC SQL DROP call_cursor3;
RETURN;
END EVENT
END WAIT
END";
COMMIT WORK;
-- Call the procedure that waits until one of the events is posted.
CALL event_wait(1);
-- See which event was posted.
SELECT * FROM event_records;
=========================== SCRIPT 2 ===================================
Example 3
This example shows very simple usage of the REGISTER EVENT and UNREGISTER
EVENT commands. You might notice that the previous scripts did not use REGISTER
EVENT, yet their WAIT commands succeeded anyway. The reason for this is that when you
wait on an event, you will be registered implicitly for that event if you did not already
explicitly register for it. Thus you only need to explicitly register events if you want them to
start being queued now but you don’t want to start WAITing for them until later.
CALL eeregister;
COMMIT WORK;
COMMIT WORK;
-- Post the events. Even though we haven't yet waited on the events,
-- they will be stored in our queue because we registered for them.
CALL eepost;
COMMIT WORK;
WAIT EVENT
WHEN e0 BEGIN
whichEvent := 'event0';
END EVENT
END WAIT
END";
COMMIT WORK;
CALL eeunregister;
COMMIT WORK;
You manage a Solid database, as well as its users and schema, using Solid SQL statements.
This chapter describes the management tasks you perform with Solid SQL. These tasks
include managing roles and privileges, tables, indexes, transactions, catalogs, and schemas.
Note
ADMIN COMMAND tasks are also available as administrative commands in Solid Remote
Control (teletype). For details, read the section of the Solid Administrator Guide titled "Solid
Remote Control (teletype)".
Solid Database Engine also provides SQL extensions that implement the data synchroniza-
tion capability.
User Privileges
When using Solid databases in a multi-user environment, you may want to apply user privi-
leges to hide certain tables from some users. For example, you may not want an employee to
see the table in which employee salaries are listed, or you may not want other users to
change your test tables.
You can apply five different kinds of user privileges. A user may be able to view, delete,
insert, update, or reference information in a table or view. Any combination of these privi-
leges may also be applied. A user who has none of these privileges on a table cannot use
the table at all.
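Privileges are granted and revoked with the GRANT and REVOKE statements; the sketch below reuses names from this chapter's examples:

```sql
-- Allow CALVIN to read and insert rows in TEST,
-- but not to update, delete, or reference them.
GRANT SELECT, INSERT ON TEST TO CALVIN;

-- Later, withdraw the INSERT privilege.
REVOKE INSERT ON TEST FROM CALVIN;
```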
Note
Once user privileges are granted, they take effect when the user who is granted the privi-
leges logs on to the database. If the user is already logged on to the database when the privi-
leges are granted, they take effect only if the user:
- accesses the table or object on which the privileges are set for the first time,
- or disconnects and then reconnects to the database.
User Roles
Privileges can also be granted to an entity called a role. A role is a group of privileges that
can be granted to users as one unit. You can create roles and assign users to certain roles. A
single user may have more than one role assigned, and a single role may have more than one
user assigned.
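A sketch of this workflow, using names from this chapter's examples:

```sql
-- Collect privileges into a role...
CREATE ROLE GUEST_USERS;
GRANT SELECT ON TEST TO GUEST_USERS;

-- ...then grant them to a user as one unit.
GRANT GUEST_USERS TO CALVIN;
```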
Note
1. The same string cannot be used both as a user name and a role name.
2. Once a user role is granted, it takes effect when the user who is granted the role logs on
to the database. If the user is already logged on to the database when the role is granted,
the role takes effect when the user disconnects and then reconnects to the database.
______________________________________________________________________
The following user names and roles are reserved:
Creating Users
CREATE USER username IDENTIFIED BY password;
Only an administrator has the privilege to execute this statement. The following example
creates a new user named CALVIN with the password HOBBES.
CREATE USER CALVIN IDENTIFIED BY HOBBES;
Deleting Users
DROP USER username;
Only an administrator has the privilege to execute this statement. The following example
deletes the user named CALVIN.
DROP USER CALVIN;
Changing a Password
ALTER USER username IDENTIFIED BY new_password;
The user username and the administrator have the privilege to execute this command. The
following example changes CALVIN's password to GUBBES.
ALTER USER CALVIN IDENTIFIED BY GUBBES;
Creating Roles
CREATE ROLE rolename;
The following example creates a new user role named GUEST_USERS.
CREATE ROLE GUEST_USERS;
Deleting Roles
DROP ROLE role_name;
The following example deletes the user role named GUEST_USERS.
DROP ROLE GUEST_USERS;
Note
If the autocommit mode is set OFF, you need to commit your work. To commit your work
use the following SQL statement: COMMIT WORK;
If the autocommit mode is set ON, the transactions are committed automatically.
Managing Tables
Solid Database Engine has a dynamic data dictionary that allows you to create, delete and
alter tables on-line. Solid database tables are managed using SQL commands.
In the Solid directory, you can find a SQL script named sample.sql, which gives an
example of managing tables. You can run the script using Solid FlowControl.
Below are some examples of SQL statements for managing tables. Refer to Appendix B,
“Solid SQL Syntax” for a formal definition of the Solid SQL statements.
If you want to see the names of all tables in your database, issue the SQL statement SELECT
* FROM TABLES. ("TABLES" is a system-defined view.) Alternatively, you may use
the predefined command TABLES from Solid FlowControl. The table names can be found
in the column TABLE_NAME.
Creating Tables
CREATE TABLE table_name (column_name column_type
[, column_name column_type]...);
All users have privileges to create tables.
The following example creates a new table named TEST with the column I of the column
type INTEGER and the column TEXT of the column type VARCHAR.
CREATE TABLE TEST (I INTEGER, TEXT VARCHAR);
Removing Tables
DROP TABLE table_name;
Only the creator of the particular table or users having SYS_ADMIN_ROLE have privileges
to remove tables.
The following example removes the table named TEST.
DROP TABLE TEST;
Note
For catalogs and schemas: The ANSI standard for SQL defines the keywords RESTRICT
and CASCADE. When dropping a catalog or a schema, if you use the keyword RESTRICT,
then you cannot drop a catalog or schema if it contains other database objects (e.g. tables).
Using the keyword CASCADE allows you to drop a catalog or schema that still contains
database objects -- the database objects that it contains will automatically be dropped. The
default behavior (if you don’t specify either RESTRICT or CASCADE) is RESTRICT.
For database objects other than Catalogs and Schemas: The keywords RESTRICT and CAS-
CADE are not accepted as part of most DROP statements in Solid SQL. Furthermore, for
these database objects, the rules are more complex than simply "pure CASCADE" or "pure
RESTRICT" behavior, but generally objects are dropped with drop behavior RESTRICT.
For example, if you try to drop table1 but table2 has a foreign key dependency on table1, or
if there are publications that reference table1, then you will not be able to drop table1 with-
out first dropping the dependent table or publication. However, the server does not use
RESTRICT behavior for all possible types of dependency. For example, if a view or a stored
procedure references a table, the referenced table can still be dropped, and the view or stored
procedure will fail the next time that it tries to reference that table. Also, if a table has a cor-
responding synchronization history table, that synchronization history table will be dropped
automatically. For more information about synchronization history tables, see the Solid
SmartFlow Data Synchronization Guide.
The following example statement deletes the column C from the table TEST.
ALTER TABLE TEST DROP COLUMN C;
Note
If the autocommit mode is set OFF, you need to commit your work before you can modify
the data in the table you altered. To commit your work after altering a table, use the follow-
ing SQL statement:
COMMIT WORK;
If the autocommit mode is set ON, then all statements, including DDL (Data Definition Lan-
guage) statements, are committed automatically.
Managing Indexes
Indexes are used to speed up access to tables. The database engine uses indexes to access the
rows in a table directly. Without indexes, the engine would have to search the whole con-
tents of a table to find the desired row. You can create as many indexes as you like on a sin-
gle table; however, adding indexes does slow down write operations, such as inserts, deletes,
and updates on that table. For details on creating indexes to improve performance, read
“Using Indexes to Improve Query Performance” on page 6-4.
There are two kinds of indexes: non-unique indexes and unique indexes. A unique index is
an index where all key values are unique.
You can create and delete indexes using the following SQL statements. Refer to Appendix B,
“Solid SQL Syntax”, for a formal definition of the syntax for these statements.
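Creating an index uses the CREATE INDEX statement; the sketch below reuses the TEST table from earlier in this chapter (the index names are illustrative):

```sql
-- A non-unique index on column I.
CREATE INDEX X_TEST ON TEST (I);

-- A unique index: all key values must be distinct.
CREATE UNIQUE INDEX UX_TEST ON TEST (TEXT);
```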
Deleting an Index
DROP INDEX index_name;
The following example deletes the index named X_TEST.
DROP INDEX X_TEST;
Note
After creating or dropping an index, you must commit (or roll back) your work before you
can modify the data in the table on which you created or dropped the index.
Once a primary key is defined (whether by the table creator or by the server), the server will
prevent rows with duplicate primary key values from being inserted into the table.
Note that if one index is a "leading subset" of another (meaning that the columns, column
order, and value order of all N columns in index2 are exactly the same as the first N col-
umn(s) of index1), then you only need to create the index that is the superset. For example,
suppose that you have an index on the combination of DEPARTMENT + OFFICE +
EMP_NAME. This index can be used not only for searches by department, office and
emp_name together, but also for searches of just the department, or just the department and
office together. So there is no need to create a separate index on the department name alone,
or on the department and office alone. The same is true for ORDER BY operations; if the
ORDER BY criterion matches a subset of an existing index, then the server can use that
index.
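The example above can be sketched as follows; the table name and literal values are illustrative:

```sql
-- One composite index serves all leading-subset searches.
CREATE INDEX X_EMP ON EMPLOYEES (DEPARTMENT, OFFICE, EMP_NAME);

-- Each of these can use X_EMP:
SELECT * FROM EMPLOYEES WHERE DEPARTMENT = 'SALES';
SELECT * FROM EMPLOYEES
WHERE DEPARTMENT = 'SALES' AND OFFICE = 'HQ';
SELECT * FROM EMPLOYEES ORDER BY DEPARTMENT, OFFICE;
```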
Keep in mind that if you defined a primary key or unique constraint, that key or constraint is
implemented as an index. Thus you never need to create an index that is a "leading subset"
of the primary key or of an existing unique constraint; such an index would be redundant.
Note that when searching using a secondary index, if the server finds all the requested data
in the index key, the server doesn't need to look up the complete row in the table. (This
applies only to "read" operations, i.e. SELECT statements. If the user updates values in the
table, then of course the data rows in the table as well as the values in the index(es) must be
updated.)
Foreign Keys
A foreign key is a column (or group of columns) within a table that refers to (or "relates to")
a unique value in another table. Each value in the foreign key column must have a matching
value in the other table.
The table with the foreign key is sometimes called a "child" table, and the table that the for-
eign key references is sometimes called a "parent" table. To ensure that each record in the
child table references exactly one record in the parent table, the referenced column(s) in the
parent table must have a primary key constraint or at least a unique constraint. (Note that
having a unique index is not sufficient.)
For example, in a bank, one table might hold customer information, and another table might
hold account information. Each account must be related to a customer, and would have a
unique customer_id. This customer_id would serve as the primary key of the customers
table. Each account would also have a copy of the customer_id of the customer who owns
that account; this allows us to look up customer information based on account information.
The copy of the customer_id in the accounts table is a foreign key; it references the match-
ing value in the primary key of the customers table.
Below is an example. In this example, the CUST_ID column in the CUSTOMERS table is
the primary key of the parent table, and the CUST_ID column of the ACCOUNTS table is a
foreign key that refers to the CUSTOMERS table. As you can see in the diagram below,
each account is associated with a corresponding customer. Some customers have more than
one account.
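The two tables described above could be declared as follows; the columns other than CUST_ID are illustrative:

```sql
CREATE TABLE CUSTOMERS (
CUST_ID INTEGER NOT NULL,
NAME VARCHAR,
PRIMARY KEY (CUST_ID));

CREATE TABLE ACCOUNTS (
ACCT_ID INTEGER NOT NULL,
CUST_ID INTEGER NOT NULL,
PRIMARY KEY (ACCT_ID),
FOREIGN KEY (CUST_ID) REFERENCES CUSTOMERS (CUST_ID));
```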
In the example shown above, the primary key and foreign key used a single column. How-
ever, primary and foreign keys may be composed of more than one column. Since each for-
eign key value must exactly match the corresponding primary key value, a foreign key must
contain the same number and data type of columns as the primary key, and these key col-
umns must be in the same order. However, a foreign key can have different column names
than the primary key, although this is rare. (The foreign key and primary key may also have
different default values. However, since values in the parent must be unique, default values
are not much use and are rarely used for columns that are part of a primary key. Default val-
ues are also not used very often for foreign key columns.)
Although primary key values must be unique, foreign key values are not required to be
unique. For example, a single customer at a bank might have multiple accounts. The
CUST_ID that appears in the primary key column in the CUSTOMERS table must be unique;
however, the same CUST_ID might occur multiple times in the foreign key column in the
ACCOUNTS table. As you can see in the illustration above, customer SMITH has more than
one account, and therefore her CUST_ID appears more than once in the foreign key column
of the ACCOUNTS table.
Although it’s rare, a foreign key in a table may refer to a primary key in that same table. In
other words, the parent table and child table are the same table. For example, in a table of
employees, each employee record might have a field that contains the ID of the manager of
that employee. The managers themselves might be stored in the same table. Thus the
manager_id of that table might be a foreign key that refers to the employee_id of that same
table. You can see an example of this below.
A SELF-REFERENTIAL TABLE
In this example, Rama’s manager is Smith (Rama’s MGR_ID is 20, and Smith’s EMP_ID is
20). Smith reports to Annan (Smith’s MGR_ID is 1, and Annan’s EMP_ID is 1.) Jones’
manager is Wong, and Wong’s manager is Annan. If Annan is the president of the com-
pany, then Annan doesn’t have a manager, but since we have to fill in some value for
Annan’s MGR_ID, we put in his own EMP_ID -- in other words, Annan is responsible to
himself.
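A sketch of such a self-referential table follows; Rama's EMP_ID is not given in the text, so the value 21 is illustrative:

```sql
CREATE TABLE EMPLOYEES (
EMP_ID INTEGER NOT NULL,
EMP_NAME VARCHAR,
MGR_ID INTEGER NOT NULL,
PRIMARY KEY (EMP_ID),
FOREIGN KEY (MGR_ID) REFERENCES EMPLOYEES (EMP_ID));

-- Annan, the president, is responsible to himself.
INSERT INTO EMPLOYEES VALUES (1, 'ANNAN', 1);
INSERT INTO EMPLOYEES VALUES (20, 'SMITH', 1);
INSERT INTO EMPLOYEES VALUES (21, 'RAMA', 20);
```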
You define the rules for referential integrity as part of the CREATE TABLE statement
through primary and foreign keys. For example:
CREATE TABLE DEPT (
DEPTNO INTEGER NOT NULL,
DNAME VARCHAR,
PRIMARY KEY (DEPTNO));
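Continuing the example, a child table carrying the matching foreign key might be declared as follows; the EMP table and its columns other than DEPTNO are illustrative:

```sql
CREATE TABLE EMP (
EMPNO INTEGER NOT NULL,
ENAME VARCHAR,
DEPTNO INTEGER,
PRIMARY KEY (EMPNO),
FOREIGN KEY (DEPTNO) REFERENCES DEPT (DEPTNO));
```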
Note
1. You can also enforce referential integrity by using triggers. Triggers, unlike foreign key
constraints, can be suspended temporarily.
2. Not all tables are allowed to have foreign keys. If a table is involved in master/replica
synchronization and is in a replica server, that table may not have any foreign key con-
straints. This limitation applies only to tables that are in replicas and that are involved in
publish/subscribe (refresh) activities. Note that tables in the replica that are not involved
in refresh activities may still have foreign keys. Foreign keys are allowed in tables that
are in the master database, even if those tables are involved in publish/refresh activities.
This limitation does not apply to primary keys. Any table may have a primary key (and
some tables, such as synchronization tables, must have a primary key).
3. Defining a foreign key always creates an index on the foreign key column(s). Each time
that a "parent" record is updated or deleted, the server checks that there are no child
records that are left without a parent. Giving each foreign key an index improves perfor-
mance of foreign key checking.
______________________________________________________________________
$1,000 balance. She adds your $300 deposit and saves your new account balance as $1,300.
At 11:09 AM, bank teller #1 returns to her terminal, finishes entering and saving the updated
value that she calculated ($800). That $800 value writes over the $1,300. At the end of the
day, your account has $800 when it should have had $1,100 ($1000 + 300 - 200).
To prevent two users from "simultaneously" updating data (and potentially writing over each
other's updates), database software uses a concurrency control mechanism. Solid offers two
different concurrency control mechanisms. These are called "pessimistic concurrency con-
trol" (usually just called "locking") and "optimistic concurrency control." (We will explain
the reasons for these terms later.)
For simplicity, in this example we will assume the system uses locking as its concurrency
control mechanism.
A lock is a mechanism for limiting other users’ access to a piece of data. When one user has
a lock on a record, the lock prevents other users from changing (and in some cases reading)
that record.
When teller #1 starts working on your account, a "lock" is placed on the account; if teller #2
tries to read or update your account while teller #1 is updating your account, teller #2 will
not be given access and will typically get an error message. In most database servers, the
lock is placed on an individual record in the database. (We will discuss table-level locks
later.) Using our banking example, a teller might get a lock on the record that contains your
checking account balance without also locking your savings account balance and without
locking the records of any other users' accounts.
Locking allows us to increase SAFETY at the cost of CONCURRENCY. We assure data
integrity, but we do it by preventing more than one user at a time from working with a par-
ticular piece of data.
If both users only want to read (not change) the data, then each user can use a "shared" lock.
For example, if I am reading, but not updating, a record, then another user can look at that
record at the same time. Many users may have shared locks on the same item (record, table,
etc.) at the same time. For example, you, your spouse, your banker, and a credit rating
agency could all look at your checking account balance simultaneously, as long as none of
you try to change it at the same time.
Shared and exclusive locks cannot be mixed. If you have an exclusive lock on a record, I
cannot get a shared lock (or an exclusive lock) on that same record.
Pessimistic locking allows one user not only to block another user from updating the same record, but even from
reading that record. If you use pessimistic locking and you get an exclusive lock, then no
other user can even read that record. With optimistic locking, however, we don't check for
conflicts except at the time that we write updated data to disk. If user1 updates a record and
user2 only wants to read it, then user2 simply reads whatever data is on the disk and then
proceeds, without checking whether the data is locked. User2 might see slightly out-of-date
information if user1 has read the data and updated it but has not yet "committed" the transac-
tion.
Solid actually implements optimistic concurrency control in a more sophisticated way than
this. Rather than giving each user "whatever version of data is on the disk at the moment it is
read", Solid actually can store multiple versions of each data row temporarily. Each user’s
transaction sees the database as it was at the time that the transaction started. This way, the
data that each user sees is consistent throughout the transaction, and users are able to concur-
rently access the database. Data is always available to users because locking is not used;
access is improved since deadlocks no longer apply. (Again, however, users run the risk that
their changes will be thrown out if those changes conflict with another user’s changes.) For
details about how multiversioning is done, read the section of the Solid Administrator Guide
titled Solid Bonsai Tree Multiversioning and Concurrency Control.
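As a rough illustration of the multiversioning idea (greatly simplified compared to the Bonsai Tree, with invented names), each transaction reads the newest version of a row that was committed before the transaction started:

```python
# Toy row multiversioning: a transaction that started at time T sees
# only versions committed at or before T, so its view stays consistent
# even while other users commit newer versions.

class VersionedRow:
    def __init__(self):
        self.versions = []                    # list of (commit_time, value)

    def write(self, commit_time, value):
        self.versions.append((commit_time, value))

    def read(self, txn_start_time):
        visible = [v for t, v in self.versions if t <= txn_start_time]
        return visible[-1] if visible else None

balance = VersionedRow()
balance.write(commit_time=1, value=100)       # committed at time 1
balance.write(commit_time=5, value=250)       # committed at time 5

# A transaction that started at time 3 keeps seeing the time-1 version,
# even after the time-5 update has committed.
assert balance.read(txn_start_time=3) == 100
assert balance.read(txn_start_time=6) == 250
```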
The descriptions above of optimistic and pessimistic concurrency control are slightly simpli-
fied. Even if a table uses pessimistic locking, and even if a record within that table has an
exclusive lock, another user may execute read operations on the locked record under spe-
cific conditions. If the reader explicitly sets her transaction to be a read-only transaction,
then she can use versioning rather than locking. This only occurs if the user explicitly
declares the transaction as read only by issuing the command:
SET TRANSACTION READ ONLY;
Thus, for example, user1 might put an exclusive lock on a record and update it. When the
record is updated, its version number changes. User2, who is using a read-only transaction,
can read the previous version of the record even though the record has an exclusive lock on
it.
Note that pessimistic locking allows you an option that optimistic locking does not offer. We
said earlier that pessimistic locks fail "immediately" -- i.e., if you try to get an exclusive lock
on a record and another user already has a lock (shared or exclusive) on that record, then you
will be told that you can't get a lock. In fact, Solid allows you the option of either failing
immediately or of waiting a specified number of seconds before failing. You might specify
a wait of 30 seconds; this means that if you initially try to get the lock and cannot, the server
will continue trying to get the lock until it gets the lock or until the 30 seconds have
elapsed. In many cases, especially when transactions tend to be very short, you may find
that setting a brief wait allows you to continue activities that otherwise would have been
blocked by locks.
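The wait-then-fail behavior can be sketched with Python's threading.Lock, whose acquire(timeout=...) happens to work the same way: block until the lock is granted or the timeout elapses, then report failure. This is an analogy only; Solid's lock manager is not built on this.

```python
# Sketch of "wait up to N seconds for a lock, then fail" using a
# standard-library lock as a stand-in for a record lock.

import threading

record_lock = threading.Lock()

def try_update(timeout_seconds):
    """Return True if the lock was obtained within the timeout."""
    got_it = record_lock.acquire(timeout=timeout_seconds)
    if got_it:
        try:
            pass  # ... update the record here ...
        finally:
            record_lock.release()
    return got_it

assert try_update(timeout_seconds=0.1)       # uncontended: lock granted
record_lock.acquire()                        # another "user" now holds it
assert not try_update(timeout_seconds=0.1)   # we wait, time out, and fail
record_lock.release()
```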
This wait mechanism applies only to pessimistic locking, not to optimistic concurrency con-
trol. There is no such thing as "waiting for an optimistic lock". If someone else changed the
data since the time that you read it, no amount of waiting will prevent a conflict that has
already occurred. In fact, since optimistic concurrency methods do not place locks, there is
literally no "optimistic lock" to wait on.
Note
Note that the server uses pessimistic locking when executing SELECT FOR UPDATE, even
if the table is set to use optimistic locking. For more information, see “Setting Concurrency
Control” on page 4-29 and “Setting Lock Timeout for Optimistic Tables” on page 4-31.
Neither pessimistic nor optimistic concurrency control is "right" or "wrong". When prop-
erly implemented, both approaches ensure that your data is properly updated. In most sce-
narios, optimistic concurrency control is more efficient and offers higher performance, but in
some scenarios pessimistic locking is more appropriate. In situations where there are a lot of
updates and relatively high chances of users trying to update data at the same time, you
probably want to use pessimistic locking. If the odds of conflict are very low (many records
and relatively few users, or very few updates and mostly "read" operations), then optimistic
concurrency control is usually the best choice. The decision will also be affected by how
many records each user updates at a time. In our bank example, we usually update only one
account/record at a time. For some applications, however, each operation may update a
large number of records at a time (for example, the bank might add interest earnings to every
account at the end of each month), virtually assuring that if two such applications are run-
ning at the same time then they will have conflicts.
By default, the Solid database servers use optimistic locking. This allows fast performance
and high concurrency (access by multiple users), at the cost of occasionally "refusing" to
write data that was initially accepted but was found at the last second to conflict with
another user's changes.
You can override optimistic locking and specify pessimistic locking instead. You can do
this at the level of individual tables. One table might follow the rules of optimistic locking
while another table follows the rules of pessimistic locking. Both tables can be used within
the same transaction and even the same statement; the Solid server takes care of the details
for you. For more details about how to specify optimistic vs. pessimistic, see “Setting The
Concurrency (Locking) Mode to Optimistic or Pessimistic” on page 4-22.
You might wonder whether "optimistic locking" is a true locking scheme at all. When we
use optimistic locking, we do not actually place any locks. Thus the name "optimistic lock-
ing" is misleading. However, optimistic locking serves the same purpose as pessimistic
locking (it prevents overlapping updates), so it is labeled "locking", even though the under-
lying mechanism is not a true lock.
For all other tables, there is no single method of reading a table's concurrency mode. You
must follow the steps below in order until you determine the concurrency mode for the
desired table.
1. If a table's concurrency mode was set explicitly with the ALTER TABLE command,
then the concurrency mode for that table is recorded in the system table named
SYS_TABLEMODES. You can read the value by executing the following command:
SELECT SYS_TABLEMODES.ID, table_name, mode
FROM SYS_TABLES, SYS_TABLEMODES
WHERE SYS_TABLEMODES.ID = SYS_TABLES.ID;
Note that this works ONLY if you explicitly set the table's concurrency mode using the
ALTER TABLE command.
2. If a table's concurrency mode was not set with the ALTER TABLE command, then
check the concurrency control mode specified by the solid.ini file at the time that the
server started. You can read this level by executing the command:
ADMIN COMMAND 'describe parameter general.pessimistic';
If the value in the solid.ini file has not been changed since the server started, and if the
value has not been overridden by an ADMIN COMMAND, then of course you can
determine the value by looking at the solid.ini file.
(Note: Prior to version 4.00.0031, the server did not properly recognize the ADMIN
COMMAND to display the value of the General.Pessimistic variable. This means that
for earlier versions of the server you will need to look at the value in the solid.ini file. If
anyone changed the value in the solid.ini file since the time that the server started, then
you will not know the correct value.)
3. If none of the above apply, then the server will default to optimistic for all tables.
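The three-step lookup above can be summarized as a small decision procedure. The helper below is hypothetical; its inputs stand in for a SYS_TABLEMODES query result and the General.Pessimistic parameter, and are not a Solid API.

```python
# Illustrative decision procedure for determining a table's
# concurrency mode, mirroring the three steps described above.

def concurrency_mode(table, table_modes, ini_pessimistic):
    # Step 1: an explicit ALTER TABLE setting (recorded in
    # SYS_TABLEMODES) wins.
    if table in table_modes:
        return table_modes[table]
    # Step 2: otherwise, the [General] Pessimistic parameter from
    # solid.ini (as of server startup) applies.
    if ini_pessimistic:
        return 'PESSIMISTIC'
    # Step 3: if neither applies, the server defaults to optimistic.
    return 'OPTIMISTIC'

modes = {'ACCOUNTS': 'PESSIMISTIC'}
assert concurrency_mode('ACCOUNTS', modes, ini_pessimistic=False) == 'PESSIMISTIC'
assert concurrency_mode('ORDERS', modes, ini_pessimistic=False) == 'OPTIMISTIC'
assert concurrency_mode('ORDERS', modes, ini_pessimistic=True) == 'PESSIMISTIC'
```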
■ UPDATE
When a user accesses a row with the SELECT... FOR UPDATE statement, the row is
locked with an update mode lock. This prevents other users from updating the row (and
prevents new shared locks on it), and it ensures that the current user can later update the
row. Update locks are similar to exclusive locks. The main difference between the two is
that you can acquire an update lock when another user already has a shared lock on the
same record. This lets the holder
of the update lock read data without excluding other users; however, once the holder of
the update lock changes the data, the update lock is converted to an exclusive lock. A
surprising characteristic of update locks is that they are asymmetric with respect to
shared locks. A user may acquire an update lock on a record that already has a shared
lock; however, a user may not acquire a shared lock on a record that already has an
update lock. Because an update lock prevents subsequent read locks, it is easier to con-
vert the update lock to an exclusive lock.
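The asymmetry described above is easiest to see written out as a compatibility matrix. The representation below is a sketch, not Solid's internal data structure: 'S' = shared, 'U' = update, 'X' = exclusive, and compatible[held][requested] answers "may a new requested lock join an existing held lock?"

```python
# Lock compatibility matrix including update locks, illustrating the
# asymmetry between shared and update locks described above.

compatible = {
    'S': {'S': True,  'U': True,  'X': False},
    'U': {'S': False, 'U': False, 'X': False},
    'X': {'S': False, 'U': False, 'X': False},
}

# The asymmetry: an update lock may join an existing shared lock...
assert compatible['S']['U']
# ...but a shared lock may not join an existing update lock.
assert not compatible['U']['S']
# Exclusive locks are compatible with nothing.
assert not any(compatible['X'].values())
```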
Table Locks
So far, we've talked primarily about locking individual rows in a table, such as the bank
account information that contains your checking account balance. The server allows table-
level locks as well as row-level locks. Many of the principles that apply to locks on individ-
ual records also apply to locks on tables.
Why would you want to lock a table? Imagine that you want to alter a table to add a new
column. You don't want anyone else to try to add a column with the same name at the same
time.
Therefore, when you execute an ALTER TABLE operation, you get a shared lock on that
table. That allows other users to continue to read data from the table, but prevents them from
making changes to the table. If another user wants to do DDL operations (such as ALTER
TABLE) on the same table at the same time, he or she will either have to wait or will get an
error message.
Thus basic table locking has much the same purpose and mechanism as record locking.
However, there are some additional situations in which table locking is used; it's not always
just because one user is trying to update the structure of the table.
Imagine that you are updating a record in a table; for example, perhaps you are updating a
customer's home phone number. Meanwhile, another user decides to change the table, drop-
ping the telephone number column and adding an email address column. If we allowed
another user to drop the telephone number column and then allowed you to try to write an
updated telephone number to that column that no longer exists, the data would undoubtedly
be corrupted. Therefore, when a user acquires a shared lock or an exclusive lock on a record
in a table, the user also implicitly acquires a lock (usually a shared lock) on the entire table.
This prevents the structure of the table from changing while users are in the middle of using
any part of that table.
Table-level locks are always "pessimistic"; i.e., the server puts a real lock on the table rather
than just looking at versioning information. This is true even if the table is set to optimistic
locking. (The terms here may be confusing. Keep in mind that when you set the lock mode
for a table, you are really setting the lock mode for the rows in the table, not the table itself.
In other words, you are setting the lock mode for row-level locks, not table-level locks.)
Unless you are altering the table, the locks on tables are usually shared locks. These table
locks usually have a "timeout" of 0 seconds -- if you can’t get the lock immediately, then the
server does not wait; it just gives you an error message.
There is a third possible reason for locking an entire table. Suppose that you want to change
every record in the table within a single transaction. For example, suppose that it's 12:01
AM January 1st, and you want to credit all of the savings accounts with the interest that they
earned last year. You could acquire an individual exclusive lock on each record in the table,
but this is inefficient. You'd like to get an exclusive lock on the entire table. Checking this
one lock is more efficient than checking potential locks on every record in the table. Natu-
rally, if some other user has a lock on the table (such as the shared table lock that she
acquires as a result of locking any record in the table), then you won't be able to acquire an
exclusive lock on that table. The rules regarding exclusive/shared locks are the same for
tables as for records: you can have as many shared locks as you want, but only one exclu-
sive lock may exist at a time; furthermore, you can't have a combination of exclusive and
shared locks.
When the server recognizes that a particular operation (such as an UPDATE statement with-
out a where clause) will affect every record in the table, the server itself can lock the entire
table if it thinks that would be most efficient, and if no conflicting locks on the table
already exist.
Thus we see that table locks can be used for at least 3 purposes:
1. to protect against two users trying to change the table at the same time;
2. to prevent the table from being changed while records within the table are being
changed;
3. to increase efficiency of operations that do mass updates.
Most table-level locks are implicit -- in other words, the server itself sets those locks when
necessary. However, you can also set table-level locks explicitly by using the LOCK
TABLE command. This is useful when using the Maintenance Mode feature set. See the
chapter "Updating And Maintaining The Schema Of A Distributed System" in the Solid
SmartFlow Data Synchronization Guide for more details.
Table-level Locking
The EXCLUSIVE and SHARED lock modes (see “Shared, Exclusive, and Update locks”
on page 4-23) are used for both pessimistic and optimistic tables.
Note
By default, optimistic and pessimistic tables are always locked in shared mode. In addition,
some Solid Database Engine statements that are optionally run with the PESSIMISTIC key-
word use EXCLUSIVE table level locks even when the tables are optimistic.
Lock Duration
The purpose of a transaction (a sequence of statements that are all committed or rolled back
together) is to ensure that data is internally consistent. This may require locks to be held
until the end of the transaction.
Let's review the subject of transactions first. Suppose that you just bought a new bicycle and
paid for it by check. The bank must subtract the price of the bicycle from your account and
must add the price of the bicycle to the bike store's account. These 2 operations must be
done "together" or else money may seem to disappear to, or appear from, nowhere. For
example, suppose that we subtracted the money from your account, then committed the
transaction, and then failed to update the bike store's account (perhaps because a power fail-
ure occurred immediately before we updated the store's account). You would be poorer, but
the bike store would be no richer. The money would seem to disappear (and you'd probably
have a very angry bicycle dealer demanding that you pay again for something you've already
paid for).
If we put the two operations (subtracting from your account and adding to the store's
account) into the same transaction, then no money ever disappears. If the transaction is
interrupted (and rolled back) for some reason such as a power failure, then we can retry the
same operation again later without risking the possibility of charging you twice (or not pay-
ing the store at all).
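The two-step transfer can be demonstrated with any transactional SQL engine. Here is a sketch using Python's built-in sqlite3 module; the table and account names are invented for the example. Either both updates commit, or the rollback leaves both balances intact.

```python
# Atomic bank transfer: subtract from one account and add to another
# inside a single transaction, so a failure between the two updates
# never loses money.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('you', 500), ('bike_store', 0)")
con.commit()

def transfer(amount, fail_midway=False):
    try:
        con.execute(
            "UPDATE accounts SET balance = balance - ? WHERE name = 'you'",
            (amount,))
        if fail_midway:
            raise RuntimeError("power failure between the two updates")
        con.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = 'bike_store'",
            (amount,))
        con.commit()
    except RuntimeError:
        con.rollback()   # undo the half-done transfer; no money vanishes

transfer(200, fail_midway=True)   # rolled back: both balances unchanged
transfer(200)                     # committed: both sides updated together
rows = con.execute("SELECT balance FROM accounts ORDER BY name").fetchall()
assert rows == [(200,), (300,)]   # bike_store = 200, you = 300
```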
Generally, an update lock is held from the time it is acquired until the time that the transac-
tion completes (via commit or rollback). If the lock were not held until the end of the trans-
action, then rollback might fail. (Imagine what would happen if someone else updated the
record after you updated it but before you finished your transaction. If you have to roll back
for some reason, the server would have to figure out whether to roll back the other user's
changes -- or might simply lose those changes, even if the other user continued on and
committed her transaction.)
In Solid servers, shared locks ("read locks") are also held until the end of the transaction.
Solid servers differ from some other servers in this regard. Some servers will release shared
locks before the end of a transaction if the Transaction Isolation Level is low enough.
You might wonder whether the transaction isolation level affects the server’s behavior with
regard to shared locks if those shared locks are always held until the end of the transaction.
There are still some differences between the isolation levels, even when locks are held until
the end of the transaction. For example, the SERIALIZABLE isolation level performs an
additional check: it verifies that no new rows are added to the result set that the transaction should
have seen. In other words, it prevents other users from inserting rows that would have quali-
fied for the result set that is in the transaction. For example, suppose that I have a SERIAL-
IZABLE transaction that has an update command like:
UPDATE customers SET x = y WHERE area_code = 415;
In a SERIALIZABLE transaction, the server does not allow other users to enter records with
area_code=415 until the serializable transaction is committed.
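A toy sketch of that extra check follows. The class and names are invented and far simpler than the server's actual read-set validation: the idea is that the server remembers the predicate each serializable transaction read and refuses concurrent inserts that would have matched it.

```python
# Illustrative "phantom" prevention: reject inserts that would have
# qualified for a result set an open serializable transaction has read.

class SerializableGuard:
    def __init__(self):
        self.read_predicates = []   # predicates read by open txns

    def register_read(self, predicate):
        self.read_predicates.append(predicate)

    def try_insert(self, row):
        # An insert that matches any registered predicate would be a
        # phantom row for some open transaction, so it is refused.
        return not any(pred(row) for pred in self.read_predicates)

guard = SerializableGuard()
# A serializable txn ran: UPDATE customers ... WHERE area_code = 415
guard.register_read(lambda row: row['area_code'] == 415)

assert not guard.try_insert({'area_code': 415})  # phantom: refused
assert guard.try_insert({'area_code': 212})      # unrelated row: allowed
```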
See the next section for a more detailed discussion of Transaction Isolation.
Managing Transactions
A transaction is a group of SQL statements treated as a single unit of work; either all the
statements are executed as a group, or none are executed. This section assumes you know
the fundamentals for creating transactions using standard SQL statements. It describes how
Solid SQL lets you handle transaction behavior, concurrency control, and isolation levels.
Note
To detect conflicts between transactions, use the standard ANSI SQL command SET
TRANSACTION ISOLATION LEVEL to define the transaction with a Repeatable Read or
Serializable isolation level. For details, read “Choosing Transaction Isolation Levels” on
page 4-31.
Transactions must be ended with the COMMIT WORK or ROLLBACK WORK commands
unless autocommit is used.
Pessimistic locking is a better fit for tables whose records are frequently updated. In the case of these so-called hotspots, conflicts are so probable that
optimistic concurrency control wastes effort in rolling back conflicting transactions.
Mixed concurrency control is available by setting individual tables to optimistic or pessimis-
tic. Mixed concurrency control is a combination of row-level pessimistic locking and opti-
mistic concurrency control. By turning on row-level locking table-by-table, you can specify
that a single transaction use both concurrency control methods simultaneously. This func-
tionality is available for both read-only and read-write transactions.
Note
Pessimistic table level locks in shared mode are possible with tables that are synchronized.
This functionality provides users with the option to run some operations for synchronization
in pessimistic mode even with optimistic tables. For example, when a REFRESH is exe-
cuted in pessimistic mode in a replica, Solid Database Engine locks all tables in shared
mode; later, if necessary, the server can "promote" these locks to exclusive table locks. This
is done in a few synchronization statements when optional keyword PESSIMISTIC is speci-
fied. Note that read operations do not use any locks.
To set individual tables for optimistic or pessimistic concurrency, use the following SQL
command:
ALTER TABLE base_table_name SET {OPTIMISTIC | PESSIMISTIC}
Note that by default all tables are set for optimistic.
You can also set a database-wide default in the [General] section of the configuration file
with the following parameter:
Pessimistic = yes
When you specify PESSIMISTIC concurrency control, the server places locks on rows to
control the level of consistency and concurrency when users are submitting queries or
updates to rows.
While one user holds a lock, the lock timeout limits how long the server makes a
second user’s update transaction wait. If the first user doesn’t finish before the second user times
out, then the second user’s statement is terminated by the server.
You can set the lock timeout with the following SQL command:
SET LOCK TIMEOUT timeout_in_seconds
By default, the granularity is in seconds. The lock timeout can be set at millisecond granu-
larity by adding "MS" after the value, e.g.
SET LOCK TIMEOUT 10MS;
Without the "MS", the lock timeout will be in seconds.
Note that the maximum timeout is 1000 seconds (about 16.7 minutes). The server will
not accept a longer value.
Setting Lock Timeout for Optimistic Tables
When you use SELECT FOR UPDATE, the selected rows are locked even if the table’s
locking mode was set to "optimistic". These rows must be locked to ensure that the update
will be successful. By default, the lock timeout in this situation is 0 seconds -- in other
words, either you immediately get the lock, or you get an error message. If you would like
the server to wait and try again to get the lock before giving up, then you can use the follow-
ing SQL command to set the lock timeout separately for optimistic tables.
SET OPTIMISTIC LOCK TIMEOUT seconds
Note that this allows you to set the lock timeout on a per-connection basis. Each connection
can have its own timeout. Using such a timeout makes the "optimistic" locking in SELECT
FOR UPDATE statements behave more like pessimistic locking.
Note
The Read Uncommitted mode (also known as the 'dirty read' mode) is not supported in Solid
databases. Its purpose is to enhance concurrency in DBMSs that use locking, but it
sacrifices the consistent view and potentially also database integrity.
■ Read Committed
This isolation level allows a transaction to read only committed data. Nonetheless, the
view of the database may change in the middle of a transaction when other transactions
commit their changes. Read Committed does not prevent phantom updates, but it does
ensure that the result set returned by a single query is consistent by setting the read level
to the latest committed transaction when the query is started.
■ Repeatable Read
This isolation level is the default isolation level for Solid databases. It allows a transac-
tion to read only committed data and guarantees that read data will not change until the
transaction terminates. Solid Database Engine additionally ensures that the transaction
sees a consistent view of the database. When using optimistic concurrency control, con-
flicts between transactions are detected by using transaction write-set validation. This
means that the server validates only write operations, not read operations. For example,
if a transaction involves one read and one update, Solid Database Engine validates that
no one has updated the same row in between the read operation and the update opera-
tion. In this way, lost updates are detected, but the read is not validated. With transac-
tion write-set validation, phantom updates may occur and transactions are not
serializable. The server’s default isolation level is REPEATABLE READ (and therefore
the default validation is transaction write set validation).
■ Serializable
This isolation level allows a transaction to read only committed data with a consistent
view of the database. Additionally, no other transaction may change the values read by
the transaction before it is committed because otherwise the execution of transactions
cannot be serialized in the general case.
Solid Database Engine can provide serializable transactions by detecting conflicts
between transactions. It does this by using both write-set and read-set validations.
Because no locks are used, all concurrency control anomalies are avoided, including the
phantom updates. This feature is enabled by using the command SET TRANSACTION
ISOLATION LEVEL SERIALIZABLE, which is described in Appendix B, “Solid SQL
Syntax”.
Not surprisingly, if you want to specify a particular table and that table name is not unique in
the database, you can identify it by specifying the catalog, schema, and table name, e.g.
accounting_catalog.david_jones.bills
The syntax is discussed in more detail later.
If you don’t specify the complete name (i.e. if you omit the schema, or the schema and the
catalog), then the server uses the current/default catalog and schema name to determine
which table to use.
In general, a catalog can be thought of as a logical database. A schema typically corre-
sponds to a user. This is discussed in more detail below.
Catalogs
A physical database file may contain more than one logical database. Each logical database
is a complete, independent group of database objects, such as tables, indexes, procedures,
triggers, etc. Each logical database is a catalog. Note that a Solid catalog is not just limited
to indexes (as in the traditional sense of a library card catalog, which serves to locate an item
without containing the full contents of the item).
Catalogs allow you to logically partition databases so you can:
■ Organize your data to meet the needs of your business, users, and applications.
■ Specify multiple master or replica databases (by using logical databases) for synchroni-
zation within one physical database server. For more details on implementing synchro-
nization in multi-master environments, read "Multi-master synchronization model" in
the Solid SmartFlow Data Synchronization Guide.
Schemas
A catalog may contain one or more schemas. A schema is a persistent database object that
provides a definition for part or all of the database. It represents a collection of database
objects associated with a specific schema name. These objects include tables, views,
indexes, stored procedures, triggers, and sequences. Schemas allow you to provide each user
with his or her own database objects (such as tables) within the same logical database (that
is, a single catalog). If no schema is specified with a database object, the default schema is
the user id of the user creating the object.
Suppose that two catalogs, employee_catalog and inventory_catalog, each contain schemas named smith and jones, and that the same Smith owns both "smith" schemas and the same Jones
owns both "jones" schemas. If Smith and Jones create a table named books in each of their
schemas, then we have a total of 4 tables named "books", and these tables are accessible as:
employee_catalog.smith.books
employee_catalog.jones.books
inventory_catalog.smith.books
inventory_catalog.jones.books
As you can see, the catalog name and schema name can be used to "qualify" (uniquely iden-
tify) the name of a database object such as a table. Object names can be qualified in all DML
statements by using the syntax:
catalog_name.schema_name.database_object
or
catalog_name.user_id.database_object
For example:
SELECT cust_name FROM accounting_dept.smith.overdue_bills;
You can qualify one or more database objects with a schema name, whether or not you spec-
ify a catalog name. The syntax is:
schema_name.database_object_name
or
user_id.database_object_name
For example,
SELECT SUM(sales_tax) FROM jones.invoices;
To use a schema name with a database object, you must have already created the schema.
By default, database objects that are created without schema names are qualified using the
user ID of the database object’s creator. For example:
user_id.table_name
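The name-resolution rules above can be sketched as a small function. This is illustrative only, not a Solid API: a partially qualified object name is expanded using the current catalog and schema defaults.

```python
# Illustrative resolution of 'object', 'schema.object', or
# 'catalog.schema.object' into a full (catalog, schema, object) triple.

def qualify(name, current_catalog, current_schema):
    parts = name.split('.')
    if len(parts) == 3:
        return tuple(parts)                            # fully qualified
    if len(parts) == 2:
        return (current_catalog, parts[0], parts[1])   # schema given
    return (current_catalog, current_schema, parts[0]) # defaults apply

assert qualify('bills', 'accounting_catalog', 'david_jones') == \
    ('accounting_catalog', 'david_jones', 'bills')
assert qualify('jones.invoices', 'accounting_catalog', 'smith') == \
    ('accounting_catalog', 'jones', 'invoices')
assert qualify('inventory_catalog.smith.books', 'x', 'y') == \
    ('inventory_catalog', 'smith', 'books')
```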
Catalog and schema contexts are set using the SET CATALOG or SET SCHEMA state-
ment.
If a catalog context is not set using SET CATALOG, then all database object names are
resolved by using the default catalog name.
Note
When creating a new database or converting an old database to a new format, the user is
prompted to specify a default catalog name for the database system catalog. Users can
access the default catalog without knowing the name that was specified. For
example, users can use the following syntax to access the system catalog:
""._SYSTEM.table
Solid Database Engine translates the empty string ("") specified as a catalog name to the
default catalog name. Solid Database Engine also provides for automatic resolution of
_SYSTEM schema to the system catalog, even when users provide no catalog name.
The following SQL statements provide examples of creating catalogs and schemas. Refer to
Appendix B, “Solid SQL Syntax”, for a formal definition of the Solid SQL statements.
Creating a Catalog
CREATE CATALOG catalog_name
Only the creator of the database or users having SYS_ADMIN_ROLE have privileges to
create or drop catalogs.
The following example creates a catalog named C and assumes the userid is SMITH:
CREATE CATALOG C;
SET CATALOG C;
CREATE TABLE T (i INTEGER);
SELECT * FROM T;
--The name T is resolved to C.SMITH.T
Deleting a Catalog
DROP CATALOG catalog_name
The following example deletes the catalog named C.
DROP CATALOG C;
Creating a Schema
CREATE SCHEMA schema_name
Any database user can create a schema; however, the user must have permission to create
the objects that pertain to the schema (for example, CREATE PROCEDURE, CREATE
TABLE, etc.).
Note that creating a schema does not implicitly make that new schema the current/default
schema. You must explicitly set that schema with the SET SCHEMA statement if you want
the new schema to become the current schema.
The following example creates a schema named FINANCE and assumes the user id is
SMITH:
CREATE SCHEMA FINANCE;
CREATE TABLE EMPLOYEE (EMP_ID INTEGER);
-- NOTE: The employee table is qualified to SMITH.EMPLOYEE, not
-- FINANCE.EMPLOYEE. Creating a schema does not implicitly make that
-- new schema the current/default schema.
SET SCHEMA FINANCE;
CREATE TABLE EMPLOYEE (ID INTEGER);
SELECT ID FROM EMPLOYEE;
-- In this case, the table is qualified to FINANCE.EMPLOYEE
Deleting a Schema
DROP SCHEMA schema_name
The following example deletes the schema named FINANCE.
DROP SCHEMA FINANCE;
Expressions
Expressions are used heavily in SQL, primarily in the WHERE clause. For example,
SELECT name FROM employees WHERE salary > 75000;
In this statement "salary > 75000" is an expression. The statement will list the names of all
employees whose salary is greater than 75000.
Expressions evaluate to a value, such as a number, or TRUE or FALSE. Expressions used in
the WHERE clause must evaluate to TRUE or FALSE.
If we want to list all persons whose names are less than Michael Morley's, then we do NOT
want to use the following:
table1.lname < 'Morley' and table1.fname < 'Michael'
If we used this expression, we would reject Zelda Adams because her first name is alphabet-
ically after Michael Morley's first name. One correct solution is to use the row value con-
structor approach:
(table1.lname, table1.fname) < ('Morley', 'Michael')
Note that for equality, the expression must be true for ALL elements of the row value
constructors. E.g.:
(1, 2, 3) = (1, 2, 3)
Not surprisingly, for inequality the expression must be true for only one element:
(1, 2, 1) != (1, 1, 1)
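Python tuples happen to compare lexicographically, which matches the row value constructor semantics described above, so the name example can be checked directly:

```python
# Row value constructor comparison, illustrated with Python tuples
# (which use the same element-by-element, lexicographic rules).

# Zelda Adams sorts before Michael Morley when the last name is
# compared first, even though 'Zelda' > 'Michael':
assert ('Adams', 'Zelda') < ('Morley', 'Michael')

# The naive column-by-column AND would wrongly reject her:
assert not ('Adams' < 'Morley' and 'Zelda' < 'Michael')

# Equality requires every element to match; inequality needs just one.
assert (1, 2, 3) == (1, 2, 3)
assert (1, 2, 1) != (1, 1, 1)
```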
This chapter provides information on the following Solid Database Engine diagnostic tools:
■ SQL info facility and the EXPLAIN PLAN FOR statement used to tune your applica-
tion and identify inefficient SQL statements in your application.
■ Tracing facilities for stored procedures and triggers
You can use these facilities to observe performance, troubleshoot problems, and produce
high quality problem reports. These reports let you pinpoint the source of your problems by
isolating them under product categories (such as Solid ODBC API, Solid ODBC Driver,
Solid JDBC Driver, etc.).
Observing Performance
You can use the SQL Info facility to provide information on a SQL statement and the SQL
statement EXPLAIN PLAN FOR to show the execution graph that the SQL optimizer
selected for a given SQL statement. Typically, if you need to contact Solid technical sup-
port, you will be asked to provide the SQL statement, EXPLAIN PLAN output, and SQL
Info output from the EXPLAIN PLAN run with info level 8 for more extensive trace output.
info = 1
The SQL Info facility can also be turned on with the following SQL statement (this sets SQL
Info on only for the client that executes the statement):
SET SQL INFO ON LEVEL info_value FILE file_name
and turned off with the following SQL statement:
SET SQL INFO OFF
Example:
SET SQL INFO ON LEVEL 1 FILE 'my_query.txt'
Unit Description
JOIN UNIT* Join unit joins two or more tables. The join can be done by using
loop join or merge join.
TABLE UNIT The table unit is used to fetch the data rows from a table or
index.
ORDER UNIT Order unit is used to order rows for grouping or to satisfy
ORDER BY. The ordering can be done in memory or using an
external disk sorter.
GROUP UNIT Group unit is used to do grouping and aggregate calculation
(SUM, MIN, etc.).
UNION UNIT* Union unit performs the UNION operation. The unit can be done
by using loop join or merge join.
INTERSECT UNIT* Intersect unit performs the INTERSECT operation. The unit can
be done by using loop join or merge join.
EXCEPT UNIT* Except unit performs the EXCEPT operation. The unit can be
done by using loop join or merge join.
*This unit is generated also for queries that reference only a single table. In that case no join is exe-
cuted in the unit; it simply passes the rows without manipulating them.
The following texts may exist in the INFO column for different types of units.
Example 1
EXPLAIN PLAN FOR SELECT * FROM TENKTUP1 WHERE UNIQUE2_NI BETWEEN 0 AND
99;
ID  UNIT_ID  PAR_ID  JOIN_PATH  UNIT_TYPE   INFO
1   2        1       3          JOIN UNIT
2   3        2       0          TABLE UNIT  TENKTUP1
3   3        2       0                      FULL SCAN
4   3        2       0                      UNIQUE2_NI <= 99
5   3        2       0                      UNIQUE2_NI >= 0
6   3        2       0
Execution graph:
JOIN UNIT 2 gets input from TABLE UNIT 3
TABLE UNIT 3 for table TENKTUP1 does a full table scan with constraints UNIQUE2_NI
<= 99 and UNIQUE2_NI >= 0
JOIN UNIT 2
  JOIN PATH 3
    TABLE UNIT 3
Example 2
EXPLAIN PLAN FOR SELECT * FROM TENKTUP1, TENKTUP2 WHERE TENKTUP1.UNIQUE2
> 4000 AND TENKTUP1.UNIQUE2 < 4500 AND TENKTUP1.UNIQUE2 =
TENKTUP2.UNIQUE2;
Execution graph:
JOIN UNIT 6: the inputs from ORDER UNITs 9 and 10 are joined using the merge join
algorithm.
ORDER UNIT 9 orders the input from TABLE UNIT 8. Since the data is retrieved in the
correct order, no real ordering is needed.
ORDER UNIT 10 orders the input from TABLE UNIT 7. Since the data is retrieved in the
correct order, no real ordering is needed.
TABLE UNIT 8: rows are fetched from table TENKTUP2 using the primary key. The
constraints UNIQUE2 < 4500 and UNIQUE2 > 4000 are used to select the rows.
TABLE UNIT 7: rows are fetched from table TENKTUP1 using the primary key. The
constraints UNIQUE2 < 4500 and UNIQUE2 > 4000 are used to select the rows.
JOIN UNIT 6
  ORDER UNIT 9
    TABLE UNIT 8
  ORDER UNIT 10
    TABLE UNIT 7
Problem Reporting
Solid Database Engine offers sophisticated diagnostic tools and methods for producing high
quality problem reports with very limited effort. Use the diagnostic tools to capture all the
relevant information about the problem.
All problem reports should contain the following files and information:
■ solid.ini
■ license number
■ solmsg.out
■ solerror.out
■ soltrace.out
■ problem description
■ steps to reproduce the problem
■ all error messages and codes
■ contact information, preferably email address of the contact person
Problem Categories
Most problems can be divided into the following categories:
■ Solid ODBC API
■ Solid ODBC or JDBC Driver
■ UNIFACE driver for Solid Database Engine
■ Communication problems between the application or an external application (if using
the AcceleratorLib) and Solid Database Engine.
The following pages include detailed instructions to produce a proper problem report for
each problem type. Please follow the guidelines carefully.
fied procedure, a specified trigger, or for all triggers associated with a particular table. The
syntax is:
ADMIN COMMAND 'proctrace { on | off }
user username { procedure | trigger | table } entity_name'
The "entity_name" is the name of the procedure, trigger, or table for which you want to turn
tracing on or off.
Trace is activated only when the specified user calls the procedure or trigger. This is useful,
for example, when tracing propagated procedure calls in a SmartFlow master.
Turning on tracing turns it on in all procedure/trigger calls by this user, not just calls from
the connection that switched the trace on. If you have multiple connections that use the same
username, then all of the calls in all of those connections will be traced. Furthermore, the
tracing will be done on calls propagated to (executed on) the master, as well as the calls exe-
cuted on the replica.
If the keyword "table" is specified, then all triggers on that table are traced.
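For instance, to trace all calls that a particular user makes to a particular procedure, and
later to turn that tracing off, a sketch might look like this (the user and procedure names
are illustrative; TRACE_SAMPLE is the procedure shown in the trace output below):

```sql
-- Trace procedure calls made by user DBA to procedure TRACE_SAMPLE
ADMIN COMMAND 'proctrace on user DBA procedure TRACE_SAMPLE';
-- ... run the workload that calls the procedure ...
ADMIN COMMAND 'proctrace off user DBA procedure TRACE_SAMPLE';
```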
Example:
--> I:=2
--> J:=NULL
--> SQLSUCCESS:=1
--> SQLERRNUM:=NULL
--> SQLERRSTR:=NULL
--> SQLROWCOUNT:=NULL
0004: J := 2*I;
--> J:=4
0005: RETURN ROW;
0006:END
23.01 17:25:17 ---- PROCEDURE 'DBA.DBA.TRACE_SAMPLE' TRACE END ----
the next START AFTER COMMIT is issued. The maximum number is configurable in
solid.ini using the parameter named MaxStartStatements (for details, see the description of
this parameter in the Solid Administrator Guide).
If a statement cannot be started, the reason for it is logged into the system table
SYS_BACKGROUNDJOB_INFO. Only failed START AFTER COMMIT statements are
logged into this table. For more details about this table, see
“SYS_BACKGROUNDJOB_INFO” on page D-2.
The user can retrieve the information from the table SYS_BACKGROUNDJOB_INFO
using either an SQL SELECT statement or by calling the system procedure
SYS_GETBACKGROUNDJOB_INFO. The stored procedure
SYS_GETBACKGROUNDJOB_INFO returns the row that matches the given jobid of the
START AFTER COMMIT statement. For more details about
SYS_GETBACKGROUNDJOB_INFO, see “SYS_GETBACKGROUNDJOB_INFO” on
page E-12.
If you want to be notified when a statement fails to start, you can wait on the system event
SYS_EVENT_SACFAILED. See “SYS_EVENT_SACFAILED” on page F-6 for details
about this event. The application can wait for this event and use the jobid to retrieve the
error message from the system table SYS_BACKGROUNDJOB_INFO.
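The two retrieval methods described above can be sketched as follows (the jobid value and
the column name used in the WHERE clause are assumptions; see Appendix D for the
actual table layout):

```sql
-- Retrieve the failure information with plain SQL (column name assumed)
SELECT * FROM SYS_BACKGROUNDJOB_INFO WHERE ID = 123;
-- Or call the system procedure with the jobid of the failed statement
CALL SYS_GETBACKGROUNDJOB_INFO(123);
```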
This chapter discusses techniques that you can use to improve the performance of Solid
Database Engine. The topics included in this chapter are:
■ Tuning SQL statements and applications
■ Optimizing single-table SQL queries
■ Using indexes to improve query performance
■ Waiting on events
■ Optimizing batch inserts and updates
■ Using Optimizer hints for performance
■ Diagnosing poor performance
For tips on optimizing SmartFlow data synchronization, see the Solid SmartFlow Data Syn-
chronization Guide.
You should know what data your application processes, what are the SQL statements used,
and what operations the application performs on the data. For example, you can improve
query performance when you keep SELECT statements simple, avoiding unnecessary
clauses and predicates.
■ If you have multiple statements inside a single stored procedure, calling that stored pro-
cedure once may use fewer network "trips" than passing each statement individually
from the client to the server.
contraints that contain the operators =, !=, or <>. The server may, of course, use an index to
resolve other types of constraints that use =, !=, or <>.) For more information about row
value constructors, see “Row value constructors” on page 4-38.
Indexes improve the performance of queries that select a small percentage of rows from a
table. You should consider using indexes for queries that select less than 15% of table rows.
Concatenated indexes
An index can be made up of more than one column. Such an index is called a concatenated
index. We recommend using concatenated indexes when possible.
Whether or not a SQL statement uses a concatenated index is determined by the columns
contained in the WHERE clause of the SQL statement. A query can use a concatenated
index if it references a leading portion of the index in the WHERE clause. A leading portion
of an index refers to the first column or columns specified in the CREATE INDEX state-
ment.
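As a sketch of the leading-portion rule (the table, column, and index names are
illustrative):

```sql
CREATE TABLE orders (cust_id INTEGER, order_date DATE, amount DECIMAL(10,2));
CREATE INDEX orders_cust_date ON orders (cust_id, order_date);

-- Can use the index: the WHERE clause references the leading column cust_id
SELECT * FROM orders WHERE cust_id = 42;
SELECT * FROM orders WHERE cust_id = 42 AND order_date > '2004-01-01';

-- Cannot use the index: the WHERE clause skips the leading column
SELECT * FROM orders WHERE order_date > '2004-01-01';
```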
Waiting On Events
In many programs, you may have to wait for a particular condition to occur before you can
perform a certain task. In some cases, you may use a "while" loop to check whether the con-
dition has occurred. Solid Database Engine provides Events, which in some cases allow you
to avoid wasting CPU time spinning in a loop waiting for a condition.
One (or more) clients or threads can wait on an event, and another client or thread can post
that event. For example, several threads might wait for a sensor to get a new piece of data.
Another thread (working with that sensor) can post an event indicating that the data is avail-
able. For more information about events, see “Using Events” on page 3-93 and various sec-
tions of Appendix B, “Solid SQL Syntax”, including “CREATE EVENT” on page B-28.
1. Check that you are running the application with the AUTOCOMMIT mode set off.
Solid ODBC Driver’s default setting is AUTOCOMMIT. This is the standard setting
according to the ODBC specification. To set your application with AUTOCOMMIT off,
call the SQLSetConnectOption function as in the following example:
rc = SQLSetConnectOption
(hdbc, SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF);
2. Do not use large transactions. Five hundred (500) rows is recommended as the initial
transaction size. The optimal value for the transaction size is dependent on the particu-
lar application; you may need to experiment.
3. To make batch inserts faster, you can turn logging off. This, however, increases the risk
of data loss during system failure. In some environments, this trade-off is tolerable.
Guidelines 1 and 2 above are the most important actions you can take to increase the
speed of batch inserts. The actual insertion rate also depends on your hardware, the
amount of data per row, and the existing indexes on the table.
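Guideline 2 can be sketched as follows, committing after each batch of roughly 500 rows
(the table and values are illustrative):

```sql
INSERT INTO table1 VALUES (1, 'row 1');
INSERT INTO table1 VALUES (2, 'row 2');
-- ... roughly 500 inserts per transaction ...
COMMIT WORK;
-- then begin the next batch
```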
For more details on optimizer hints, including a description of the available hints and
examples, see “HINT” on page B-95 in Appendix B, “Solid SQL Syntax”.
Abbreviation  Description
DEFLEN        The defined length of the column; for example, for
              CHAR(24) the precision and length is 24.
DEFPREC       The defined precision; for example, for NUMERIC(10,3)
              it is 10.
DEFSCALE      The defined scale; for example, for NUMERIC(10,3)
              it is 3.
MAXLEN        The maximum length of the column.
N/A           Not applicable.
NOTE: Although integer data types (TINYINT, SMALLINT, INT, and BIGINT) may be
interpreted by the client program as either signed or unsigned, the Solid server stores and
orders them as signed integers. There is no way to tell the server to order the integer data
types as though they were unsigned.
Caution
BIGINT has approximately 19 significant digits. This means that you may lose the least
significant digits when storing a BIGINT value in a non-integer data type such as FLOAT
(which has approximately 15 significant digits), SMALLFLOAT (which has approximately
7 significant digits), or DECIMAL (which has 16 significant digits).
Tip
To insert values into BINARY, VARBINARY, and LONG VARBINARY fields, you may
express the value as hexadecimal and use the CAST operator, e.g.:
INSERT INTO table1 VALUES (CAST('FF00AA55' AS VARBINARY));
Similarly, you may use CAST() expressions in WHERE clauses:
CREATE TABLE t1 (x VARBINARY);
INSERT INTO t1 (x) VALUES (CAST('000000A512' AS VARBINARY));
INSERT INTO t1 (x) VALUES (CAST('000000FF12' AS VARBINARY));
-- NOTE: If you want to use "=" rather than "LIKE" then you
-- can cast either operand.
SELECT * FROM t1 WHERE CAST(x AS VARCHAR) = '000000A512';
SELECT * FROM t1 WHERE x = CAST('000000A512' AS VARBINARY);
WARNING: a query that applies CAST to the column (as in the first SELECT above) or
uses a LIKE predicate cannot use an indexed search, and in many cases this results in poor
query performance.
ADMIN COMMAND
ADMIN COMMAND 'command_name'
command_name ::= BACKUP | BACKUPLIST | CHECKPOINTING |
CLEANBGJOBINFO | CLOSE | DESCRIBE PARAMETER | ERRORCODE |
EXIT | FILESPEC | HELP | HOTSTANDBY | INFO | MAKECP | MEMORY |
MESSAGES | MONITOR | NOTIFY | OPEN | PARAMETER | PERFMON |
PID | PROCTRACE | PROTOCOLS | REPORT | RUN MERGE | SAVE
PARAMETERS | START MERGE | SHUTDOWN | STATUS | STATUS
BACKUP | THROWOUT | TRACE | USERID | USERLIST | USERTRACE |
VERSION
Supported in
Embedded Engine, BoostEngine.
Usage
This SQL extension executes administrative commands. The command_name in the syntax
is a Solid FlowControl or Solid SQL Editor (teletype) command string, for example:
ADMIN COMMAND 'backup'
If you are entering these commands using Solid Remote Control (teletype), be sure to spec-
ify the syntax with command name only (without the quotes), for example:
backup
Abbreviations for ADMIN COMMANDs are also available, for example, ADMIN
COMMAND 'bak'. To access a list of abbreviated commands, execute ADMIN COMMAND
'help'.
The result set contains two columns: RC INTEGER and TEXT VARCHAR(254). The integer
column RC is the command return code (0 on success), and the varchar column TEXT is
the command reply. The TEXT field contains the same lines that are displayed on the Solid
FlowControl screen, one line per result row.
Note that ADMIN COMMAND options are not transactional and cannot be rolled
back.
Caution
Although ADMIN COMMANDs are not transactional, they will start a new transaction if
one is not already open. (They do not commit or roll back any open transaction.) This effect
is usually insignificant. However, it may affect the "start time" of a transaction, and that may
occasionally have unexpected effects. Solid’s concurrency control is based on a versioning
system; you see a database as it was at the time that your transaction started. (See the sec-
tion of the Solid Administrator Guide titled "Solid Bonsai Tree Multiversioning and
Concurrency Control".) So, for example, if you:
1. commit your work, and
2. issue an ADMIN COMMAND without doing another commit, and
3. go to lunch and return an hour later,
then your next SQL command may see the database as it was an hour ago, i.e. when you
first started the transaction with the ADMIN COMMAND.
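The scenario can be sketched as follows (the table name is illustrative):

```sql
COMMIT WORK;             -- previous transaction ends
ADMIN COMMAND 'info';    -- silently starts a new transaction
-- ... an hour passes without a commit ...
SELECT * FROM accounts;  -- may see the database as it was an hour ago
COMMIT WORK;             -- commit to get a fresh transaction view
```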
Following is a description of the syntax for each ADMIN COMMAND command option:
ADMIN EVENT
ADMIN EVENT 'command'
command ::=
REGISTER { event_name [ , event_name ... ] | ALL } |
UNREGISTER { event_name [ , event_name ... ] | ALL } |
WAIT
event_name ::= the name of a system event
Usage
This is a Solid-specific extension to SQL that allows you to register for and wait for system-
generated events without writing and calling a stored procedure.
You must explicitly register for and wait for the event. For example:
ADMIN EVENT 'register sys_event_hsbstateswitch';
ADMIN EVENT 'wait';
After the event is posted by the system, you will see something similar to the following:
1 rows fetched.
You must register for the event before you wait for it. (This is different from the way that
WAIT works in stored procedures. In stored procedures, explicit registration is optional.)
Once the connection starts to wait for an event, the connection will not be able to do any-
thing else until the event is posted.
You may register for multiple events. When you wait, you cannot specify which type of
event to wait for. The wait will continue until you have received any of the events for which
you have registered.
You may only wait for system events, not user events, using ADMIN EVENT. If you want to
wait for user events, then you must write and call a stored procedure.
The ADMIN EVENT command does not provide an option to post an event.
To use ADMIN EVENT, you must have DBA privileges or be granted the role
SYS_ADMIN_ROLE.
Examples
ADMIN EVENT 'register sys_event_hsbstateswitch';
ADMIN EVENT 'wait';
ADMIN EVENT 'unregister sys_event_hsbstateswitch';
ALTER TABLE
ALTER TABLE base_table_name
{
ADD [COLUMN] column_identifier data_type
[DEFAULT literal | NULL] [NOT NULL] |
ADD CONSTRAINT [constraint_name] dynamic_table_constraint |
DROP CONSTRAINT constraint_name |
ALTER [ COLUMN ] column_name
{DROP DEFAULT | {SET DEFAULT literal | NULL} } |
{{ADD | DROP} NOT NULL } |
DROP [COLUMN] column_identifier |
RENAME [COLUMN]
column_identifier column_identifier |
MODIFY [COLUMN] column_identifier data-type |
MODIFY SCHEMA schema_name |
SET HISTORY COLUMNS (c1, c2, c3) |
SET {OPTIMISTIC | PESSIMISTIC} |
SET STORE {DISK | MEMORY} |
SET TABLE NAME new_base_table_name
}
dynamic_table_constraint::=
{FOREIGN KEY (column_identifier [, column_identifier] ...)
REFERENCES table_name [(column_identifier [, column_identifier] ...)]} |
CHECK (check_condition) |
UNIQUE (column_identifier)
Usage
The structure of a table may be modified through the ALTER TABLE statement. Columns
may be added, removed, modified, or renamed. You may change whether the table uses opti-
mistic or pessimistic concurrency control. You may change whether the table is stored in
memory or on disk. You may change which schema the table is part of.
The server allows users to change the width of a column using the ALTER TABLE com-
mand. A column width can be increased at any time (that is, whether a table is empty [no
rows] or non-empty). However, the ALTER TABLE command disallows decreasing the col-
umn width when the table is non-empty; a table must be empty to decrease the column
width.
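For example, a sketch of the column-width rule (the table and column names are
illustrative):

```sql
CREATE TABLE t2 (name CHAR(10));
INSERT INTO t2 VALUES ('Smith');
-- Increasing the width is allowed even though the table is non-empty
ALTER TABLE t2 MODIFY COLUMN name CHAR(20);
-- Decreasing the width back to CHAR(10) would fail until the table is emptied
```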
Note that a column cannot be dropped if it is part of a unique or primary key.
The owner of a table can be changed using the ALTER TABLE base_table_name MODIFY
SCHEMA schema_name statement. This statement gives all rights, including creator rights,
to the new owner. The old owner’s access rights to the table, excluding the creator rights, are
preserved.
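A sketch of the ownership change (the schema name is illustrative):

```sql
-- Transfers all rights on table1, including creator rights, to the new owner
ALTER TABLE table1 MODIFY SCHEMA sales_schema;
```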
For information about the SET HISTORY COLUMNS clause, see the description of
“ALTER TABLE ... SET HISTORY COLUMNS” on page B-16.
Individual tables can be set to optimistic or pessimistic with the statement ALTER TABLE
base_table_name SET {OPTIMISTIC | PESSIMISTIC}. By default, all tables are
optimistic. A database-wide default can be set in the General section of the configuration
file with the parameter Pessimistic = yes.
A table may be changed from disk-based to in-memory or vice-versa. (This is allowed only
with BoostEngine.) This may be done only if the table is empty. If you try to change a table
to the same storage mode that it already uses (e.g. if you try to change an in-memory table to
use in-memory storage), then the command has no effect, and no error message is issued.
Example
ALTER TABLE table1 ADD x INTEGER;
ALTER TABLE table1 RENAME COLUMN old_name new_name;
ALTER TABLE table1 MODIFY COLUMN xyz SMALLINT;
ALTER TABLE table1 DROP COLUMN xyz;
ALTER TABLE table1 SET STORE MEMORY;
ALTER TABLE table1 SET PESSIMISTIC;
ALTER TABLE table2 ADD COLUMN col_new CHAR(8) DEFAULT 'VACANT' NOT NULL;
Usage
To further optimize the synchronization history process, after you set tables for synchroniza-
tion history, you can use the SET HISTORY COLUMNS statement to specify which col-
umn updates in the master and its corresponding synchronized table cause entries to the
history table. If you do not use this statement to specify particular columns, then all update
operations (on all columns) in the master database cause a new entry to the history table
when the corresponding synchronized table is updated. Generally, we recommend using
ALTER TABLE ... SET HISTORY COLUMNS for columns that are used for search criteria
or for joining.
Usage in Master
Use SET SYNCHISTORY and SET HISTORY COLUMNS in the master to enable incre-
mental publications on a table.
Usage in Replica
Use SET SYNCHISTORY and SET HISTORY COLUMNS in the replica to enable incre-
mental REFRESH on a table.
Example
ALTER TABLE myLargeTable SET HISTORY COLUMNS (accountid);
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
See Also
ALTER TABLE ... SET SYNCHISTORY
Usage
SET [NO]SYNCHISTORY
The "SET SYNCHISTORY / NOSYNCHISTORY" clause tells the server to use the incre-
mental publications mechanism of Solid Database Engine architecture for this table. By
default, SYNCHISTORY is not on. When this statement is set to SYNCHISTORY for a
specified table, a shadow table is automatically created to store old versions of updated or
deleted rows of the main table. The shadow table is called a "synchronization history table"
or simply a "history table".
The data in a history table is referred to when a replica gets an incremental REFRESH from
a publication in the master. For example, let’s suppose that the record with Ms. Smith’s tele-
phone bill is deleted from the main table. A copy of her record is stored in the synchroniza-
tion history table. When the replica refreshes, the master checks the history table and tells
the replica that Ms. Smith’s record was deleted. The replica can then delete that record, also.
If the percentage of records that were deleted or changed is fairly small, then an incremental
update is faster than downloading the entire table from the master. (When the user does a full
REFRESH, rather than an incremental REFRESH, the history table is not used. The data in
the table on the master is simply copied to the replica.)
Versioned data is automatically deleted from the database when there are no longer any rep-
licas that need the data to fulfill REFRESH requests.
You must use this command to turn on synchronization history before a table can participate
in master/replica synchronization. You can use this command on a table even if data
currently exists in that table; however, ALTER TABLE SET SYNCHISTORY can be used
only if the specified table is not referenced by an existing publication.
SET SYNCHISTORY must be specified in the tables of both master and replica databases.
Usage in Master
Use SET SYNCHISTORY in the master to enable incremental publications on a table.
Usage in Replica
Use SET SYNCHISTORY in the replica to enable incremental REFRESHES on a table.
Example
ALTER TABLE myLargeTable SET SYNCHISTORY;
ALTER TABLE myVerySmallTable SET NOSYNCHISTORY;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
See Also
ALTER TABLE ... SET HISTORY COLUMNS
ALTER TRIGGER
ALTER TRIGGER trigger_name_attr SET {ENABLED | DISABLED}
trigger_name_attr ::= [catalog_name.[schema_name.]]trigger_name
Usage
You can alter trigger attributes using the ALTER TRIGGER statement. The valid attributes
are ENABLED and DISABLED.
The ALTER TRIGGER DISABLED statement causes a Solid server to ignore the trigger
when an activating DML statement is issued. With this command, you can also enable a trig-
ger that is currently inactive or disable a trigger that is currently active.
You must be the owner of the table, or a user with DBA authority, to alter a trigger on the
table.
Example
ALTER TRIGGER trig_on_employee SET ENABLED;
ALTER USER
ALTER USER user_name IDENTIFIED BY password
Usage
The password of a user may be modified through the ALTER USER statement.
Example
ALTER USER MANAGER IDENTIFIED BY O2CPTG;
ALTER USER
ALTER USER replica_user SET MASTER master_name USER user_specification
where:
user_specification ::= {master_user IDENTIFIED BY master_password |
NONE}
Usage
The following statement is used to map replica user ids to specified master user ids.
ALTER USER replica_user SET MASTER master_name USER user_specification
Mapping user ids is used to implement security in a multi-master or multi-tier synchro-
nization environment. In such environments, it is difficult to maintain the same usernames
and passwords in separate, geographically dispersed databases. For this reason, mapping is
an effective alternative.
Only a user with DBA authority or SYS_SYNC_ADMIN_ROLE can map users. To imple-
ment mapping, an administrator must know the master user name and password. Note that it
is always a replica user id that is mapped to a master user id. If NONE is specified, the map-
ping is removed.
All replica databases are responsible for subscribing to the SYNC_CONFIG system publica-
tion to update user information. Public master user names and passwords are downloaded,
during this process, to a replica database using the MESSAGE APPEND SYNC_CONFIG
command. Through mapping of the replica user id with the master user id, the system deter-
mines the currently active master user based on the local user id that is logged to the replica
database. Note that if during SYNC_CONFIG loading, the system does not detect mapping,
it determines the currently active master user through the matching user id and password in
the master and the replica.
For more details on using mapping for security, read "Implementing Security Through
Access Rights And Roles" in the Solid SmartFlow Data Synchronization Guide.
It is also possible to limit what master users are downloaded to the replica during
SYNC_CONFIG loading. This is done by altering users as private or public with the follow-
ing command:
ALTER USER user_name SET {PRIVATE | PUBLIC}
Note that the default is PUBLIC. If the PRIVATE option is set for a user, that user’s
information is not included in a SYNC_CONFIG subscription, even if that user is specified
in a SYNC_CONFIG request. Only a user with DBA authority or
SYS_SYNC_ADMIN_ROLE can alter a user’s status.
This allows administrators to ensure no user ids with administration rights are sent to a rep-
lica. For security reasons, administrators may want to ensure that DBA passwords are never
public, for example.
Usage in Master
You set user ids to PUBLIC or PRIVATE in a master database.
Usage in Replica
You map a replica user id to a master user id in a replica database.
Example
The following example maps a replica user id smith_1 to a master user id dba with a pass-
word of dba.
ALTER USER SMITH_1 SET MASTER MASTER_1 USER DBA IDENTIFIED BY DBA
The following example shows how a user is set to PRIVATE.
-- this master user should not be downloaded to any replica
ALTER USER dba SET PRIVATE;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
CALL
CALL procedure_name [(parameter [, parameter ...])] [AT node-def]
node-def ::= DEFAULT | <replica name> | <master name>
Supported in
Embedded Engine, Solid Database Engine (note that remote procedure calls are allowed
only with Solid Database Engine with the SmartFlow option)
Usage
Stored procedures are called with statement CALL.
You may call a stored procedure on another node by using the AT node_ref clause. This is
valid only if the call is made from a master node to one of its replica nodes or vice-versa.
DEFAULT means that the “current replica context” is used. The "current replica context" is
only defined when the procedure call is started in the background using the START AFTER
COMMIT statement with the FOR EACH REPLICA option. If the default is not set, then an
error ‘Default node not defined’ is returned. DEFAULT can be used inside stored proce-
dures and in a statement started with START AFTER COMMIT.
A remote stored procedure cannot return a result set; it can only return an error code.
A single CALL statement can call only a single procedure on a single node. If you want to
call more than one procedure on a single node, you must execute multiple CALL
statements. If you want to execute the same procedure (i.e. the same procedure name) on
more than one node, then you must either:
1) Use START AFTER COMMIT FOR EACH REPLICA, for example:
START AFTER COMMIT FOR EACH REPLICA WHERE NAME LIKE 'REPLICA%'
UNIQUE CALL MYPROC AT DEFAULT;
2) Execute multiple CALL statements.
A procedure call is executed synchronously; it returns after the call is executed.
NOTE: The procedure call is executed asynchronously in the background if the procedure
call is executed using START AFTER COMMIT (e.g. START AFTER COMMIT UNIQUE
CALL FOO AT REPLICA1). That is due to the nature of the START AFTER COMMIT
command, not the nature of procedure calls.
Transactions
A remote procedure call (whether or not it was started by a START AFTER COMMIT) is
executed in a separate transaction from the transaction that it was called from. The caller
cannot roll back or commit the remote procedure call. The procedure that is executing in the
called node is responsible for issuing its own commit or rollback statement.
that the stored procedure will be executed on) and the user must have appropriate access
rights to the database and the called procedure.
CASE 2. If the Sync user is not set:
The caller sends the following information to the remote server when calling a remote
procedure:
If the caller is the master and the remote server is the replica (M -> R):
* Name of the master (SYS_SYNC_REPLICAS.MASTER_NAME).
* Replica id (SYS_SYNC_REPLICAS.ID).
* User name of the caller.
* User id of the caller.
If the caller is the replica and the remote procedure is the master (R -> M):
* Name of the master (SYS_SYNC_MASTERS.NAME).
* Replica id (SYS_SYNC_MASTERS.REPLICA_ID).
* Master user id (The same user id is used as when a replica refreshes data.
There has to be a mapping from the local replica user to a master user in
SYS_SYNC_USERS table.)
[Synchronizer]
ConnectStrForMaster=tcp replicahost 1316
The replica sends that connect string automatically to the master when it forwards any
message to the master. When the master receives the connect string from the replica, it
replaces any previous value (if it differs).
The master can set the connect string to the replica (if the replica has not done any messag-
ing and the master needs to call it and knows that the connect string has changed) using the
following statement:
SET SYNC CONNECT <connect-info> TO REPLICA <replica-name>
Durability
Remote procedure calls are not durable. If the server goes down right after the remote
procedure call is issued, the call is lost. It will not be executed during the recovery phase.
Example
CALL proctest;
CALL proctest('some string', 14);
CALL remote_proc AT replica2;
CALL RemoteProc(?,?) AT MyReplica1;
COMMIT WORK
COMMIT WORK
Usage
The changes made in the database are made permanent by the COMMIT statement. It termi-
nates the transaction. To discard the changes, use the ROLLBACK command. Note that if
you do not explicitly COMMIT a transaction, and if the program (e.g. solsql, FlowControl)
does not COMMIT for you, then the transaction will be rolled back.
Example
COMMIT WORK;
See Also
ROLLBACK WORK
CREATE CATALOG
CREATE CATALOG catalog_name
Usage
Catalogs allow you to logically partition databases so you can organize your data to meet the
needs of your business or application. Solid’s use of catalogs is an extension to the SQL
standard.
A Solid Database Engine physical database file may contain more than one logical data-
base. Each logical database is a complete, independent group of database objects, such as
tables, indexes, triggers, stored procedures, etc. Each logical database is implemented as a
database catalog. Thus, a Solid database can have one or more catalogs.
When creating a new database or converting an old database to the new format, users are
prompted for a default catalog name. The default catalog name provides backward
compatibility with Solid databases created prior to version 3.x.
A catalog can have zero or more schema_names. The default schema name is the user ID of
the user who creates the catalog.
A schema can have zero or more database object names. A database object can be qualified
by a schema or user ID.
The catalog name is used to qualify a database object name. Database object names can be
qualified in all DML statements as:
catalog_name.schema_name.database_object
or
catalog_name.user_id.database_object
Note that if you use the catalog name, then you must also use the schema name. The con-
verse is not true; you may use the schema name without using the catalog name (if you have
already done an appropriate SET CATALOG statement to specify the default catalog).
catalog_name.database_object -- Illegal
schema_name.database_object -- Legal
Only a user with DBA authority (SYS_ADMIN_ROLE) can create a catalog for a database.
Note that creating a catalog does not automatically make that catalog the current default cat-
alog. If you have created a new catalog and want your subsequent commands to execute
within that catalog, then you must also execute the SET CATALOG statement. For example:
CREATE CATALOG MyCatalog;
SET CATALOG MyCatalog;
Examples
CREATE CATALOG C;
SET CATALOG C;
CREATE SCHEMA S;
SET SCHEMA S;
CREATE TABLE T (i INTEGER);
SELECT * FROM T;
-- the name T is resolved to C.S.T
CREATE EVENT
CREATE EVENT event_name [(parameter_definition [, parameter_definition ...])]
Usage
Event alerts are used to signal an event in the database. Events are simple objects with a
name. Applications can use event alerts instead of polling, which uses more resources.
An event object is created with the SQL statement
CREATE EVENT event_name [parameter_list]
The name can be any user-specified alphanumeric string. The parameter list specifies param-
eter names and parameter types. The parameter types are normal SQL types.
Events are dropped with the SQL statement
DROP EVENT event_name
Events are sent and received inside stored procedures. Special stored procedure statements
are used to send and receive events.
The event is sent with the stored procedure statement
post_statement ::= POST EVENT event_name [(parameters)]
Event parameters must be local variables, constant values, or parameters in the stored proce-
dure from which the event is sent.
All clients that are waiting for the posted event will receive the event.
Each connection has its own event queue. The events to be collected in the event queue are
specified with the stored procedure statement:
wait_register_statement ::= REGISTER EVENT event_name
Events are removed from the event queue with the stored procedure statement:
wait_register_statement ::= UNREGISTER EVENT event_name
Note that you do not need to register for every event before waiting for it. When you wait on
an event, you will be registered implicitly for that event if you did not already explicitly reg-
ister for it. Thus you only need to explicitly register events if you want them to start being
queued now but you don’t want to start WAITing for them until later.
To make a procedure wait for an event to happen, the WAIT EVENT construct is used in a
stored procedure:
wait_event_statement ::=
WAIT EVENT
[event_specification ...]
END WAIT
event_specification ::=
WHEN event_name [(parameters)] BEGIN
statements
END EVENT
For example, the following stored procedure registers the connection to receive the event
test_event:
"CREATE PROCEDURE register_event
begin
register event test_event
end";
The creator of an event or the database administrator can grant and revoke access rights on
that event. Access rights can be granted to users and roles. If a user has "SELECT" access
right on an event, then the user has the right to wait on that event. If a user has the INSERT
access right on an event, then the user may post that event.
For in-depth examples of event usage, refer to the section “Using Events” on page 3-93.
The example includes a pair of SQL scripts that, when used together, post and wait for
multiple events.
Example
CREATE EVENT ALERT1(I INTEGER, C CHAR(4));
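As a hedged sketch of how ALERT1 might be posted and received (the procedure names post_alert1 and wait_alert1 are illustrative, not part of the product), one procedure could post the event while another waits for it:

```sql
-- Post ALERT1 with constant parameter values.
"CREATE PROCEDURE post_alert1
BEGIN
   POST EVENT ALERT1(1, 'INFO');
END";

-- Block until ALERT1 is posted, then return its parameters.
"CREATE PROCEDURE wait_alert1
RETURNS (i INTEGER, c CHAR(4))
BEGIN
   WAIT EVENT
      WHEN ALERT1(i, c) BEGIN
         RETURN;
      END EVENT
   END WAIT;
END";
```

A connection that calls wait_alert1 is implicitly registered for ALERT1, as described above.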
See Also
CREATE PROCEDURE
CREATE INDEX
CREATE [UNIQUE] INDEX index_name
ON base_table_name
(column_identifier [ASC | DESC]
[, column_identifier [ASC | DESC]] ...)
Usage
Creates an index for a table based on the given columns.
The keyword UNIQUE specifies that the column(s) being indexed must contain unique val-
ues. If more than one column is specified, then the combination of columns must have a
unique value, but the individual columns do not need to have unique values. For example, if
you create an index on the combination of LAST_NAME and FIRST_NAME, then the fol-
lowing data values are acceptable because although there are duplicate first names and dupli-
cate last names, no 2 rows have the same value for both first name and last name.
SMITH, PATTI
SMITH, DAVID
JONES, DAVID
Keywords ASC and DESC specify whether the given columns should be indexed in ascend-
ing or descending order. If neither ASC nor DESC is specified, then ascending order is used.
Example
CREATE UNIQUE INDEX UX_TEST ON TEST (I);
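Following the LAST_NAME/FIRST_NAME discussion above, a composite unique index might be created as in the sketch below (the table and column names are illustrative):

```sql
-- The combination of last name and first name must be unique;
-- duplicates within either single column remain acceptable.
CREATE UNIQUE INDEX UX_EMP_NAME
ON EMPLOYEE (LAST_NAME ASC, FIRST_NAME ASC);
```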
CREATE PROCEDURE
CREATE PROCEDURE procedure_name [(parameter_definition
[, parameter_definition ...])]
[RETURNS (parameter_definition [, parameter_definition ...])]
BEGIN procedure_body END;
parameter_definition ::= parameter_name data_type
procedure_body ::= [declare_statement; ...][procedure_statement; ...]
execute_statement ::=
EXEC SQL EXECUTE cursor_name
[USING (variable [, variable ...])]
[INTO (variable [, variable ...])] |
cursor_name ::=
literal
wait_event_statement ::=
WAIT EVENT
[event_specification ...]
END WAIT
event_specification ::=
WHEN event_name [(parameters)] BEGIN
statements
END EVENT
wait_register_statement ::=
REGISTER EVENT event_name |
Usage
Stored procedures are simple programs, or procedures, that are executed in the server. The
user can create a procedure that contains several SQL statements or a whole transaction and
execute it with a single call statement. Using stored procedures reduces network traffic
and allows stricter control over access rights and database operations.
elseif
expr2
then
statement-list2
end if
return
    Returns the current values of output parameters and exits the procedure. If a
    procedure has a return row statement, return behaves like return norow.
return sqlerror of cursor-name
    Returns the sqlerror associated with the cursor and exits the procedure.
return row
    Returns the current values of output parameters and continues execution of the
    procedure. Return row does not exit the procedure or return control to the caller.
return norow
    Returns the end of the set and exits the procedure.
All SQL DML and DDL statements can be used in procedures. Thus the procedure can, for
example, create tables or commit a transaction. Each SQL statement in the procedure is
atomic.
The "autocommit" functionality works differently for statements inside a stored procedure
than for statements outside a stored procedure.
A cursor name must be given in the cursor specification; it can be any cursor name that is
unique inside the transaction. Note that if the procedure is not a complete transaction, other
open cursors outside the procedure may have conflicting cursor names.
Fetching Results
Rows are fetched with the statement
EXEC SQL FETCH cursor_name
If the fetch completed successfully, then the column values are stored into the variables
defined in the opt_into specification of the EXECUTE or EXECDIRECT statement.
Using Transactions
EXEC SQL {COMMIT | ROLLBACK} WORK
is used to terminate transactions.
EXEC SQL SET TRANSACTION {READ ONLY | READ WRITE}
is used to control the type of transactions.
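As a minimal sketch of these statements in context (the procedure and table names are assumptions, not from this guide), a procedure body might combine them as follows:

```sql
"CREATE PROCEDURE commit_demo
BEGIN
   -- Make the transaction writable before modifying data.
   EXEC SQL SET TRANSACTION READ WRITE;
   EXEC SQL EXECDIRECT INSERT INTO demo_t VALUES (1);
   -- Terminate the transaction inside the procedure.
   EXEC SQL COMMIT WORK;
END";
```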
Writetrace
The writetrace() function allows you to send a string to the soltrace.out trace file. This can
be useful when debugging problems in stored procedures.
The output will only be written if you turn tracing on.
For more information about writetrace and how to turn on tracing, see “Tracing facilities for
stored procedures and triggers” on page 5-11.
PROC_NAME(N) returns the name of the Nth procedure in the procedure stack. The first
procedure is at position zero.
PROC_SCHEMA(N) returns the schema name of the Nth procedure in the procedure stack.
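For example, a procedure might report its own position in the stack; this is a minimal sketch, assuming the procedure name who_am_i:

```sql
"CREATE PROCEDURE who_am_i
RETURNS (pname VARCHAR, pschema VARCHAR)
BEGIN
   -- Position 0 is the first procedure in the stack.
   pname := PROC_NAME(0);
   pschema := PROC_SCHEMA(0);
END";
```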
EXECDIRECT
The EXECDIRECT statement allows you to execute statements inside stored procedures
without first "preparing" those statements. This reduces the amount of code required. Note
that if the statement is a cursor, you still need to close and drop it; only the PREPARE state-
ment can be skipped.
When using
EXEC SQL [USING(var_list)] [CURSORNAME(variable)] EXECDIRECT <statement>
or
EXEC SQL <cursor_name> [USING(var_list)] [INTO (var_list)]
[CURSORNAME(variable)] EXECDIRECT <statement>
remember the following rules:
■ If the statement specifies a cursor name, then the cursor must be dropped with the
EXEC SQL DROP statement.
■ If a cursor name is not specified, then you don’t need to drop the statement.
■ If the statement is a fetch cursor, then the INTO... clause must be specified.
■ If the INTO clause is specified, then the cursor_name must be specified; otherwise the
FETCH statement won’t be able to specify which cursor name the row should be
fetched from. (You may have more than one open cursor at a time.)
Below are several examples of CREATE PROCEDURE statements. Some use the PRE-
PARE and EXECUTE commands, while others use EXECDIRECT.
Example 1
"create procedure test2(tableid integer)
returns (cnt integer)
begin
exec sql prepare c1 select count(*) from sys_tables where id > ?;
exec sql execute c1 using (tableid) into (cnt);
exec sql fetch c1;
exec sql close c1;
exec sql drop c1;
end";
Example 2
This example uses the explicit RETURN statement to return multiple rows, one at a time.
"create procedure return_tables
returns (name varchar)
begin
exec sql execdirect create table table_name (lname char (20));
exec sql whenever sqlerror rollback, abort;
exec sql prepare c1 select table_name from sys_tables;
exec sql execute c1 into (name);
while sqlsuccess loop
exec sql fetch c1;
if not sqlsuccess
then leave;
end if
return row;
end loop;
exec sql close c1;
exec sql drop c1;
end";
Example 3
-- This example shows how to use "execdirect".
"CREATE PROCEDURE p
BEGIN
DECLARE host_x INT;
DECLARE host_y INT;
SET host_x = 1;
Example 4
This example shows the usage of the CURSORNAME() pseudo-function. This shows only
part of the body of a stored procedure, not a complete stored procedure.
-- Declare a variable that will hold a unique string that we can use
-- as a cursor name.
Example 5
Here is a more complete example that actually uses the GET_UNIQUE_STRING and CUR-
SORNAME functions in a recursive stored procedure.
The stored procedure below demonstrates the use of these two functions in a recursive pro-
cedure. Note that the cursor name "curs1" appears to be hard-coded, but in fact has been
mapped to the dynamically generated name.
SumSoFar := 0;
SumOfRemainingItems := 0;
nMinusOne := n - 1;
Autoname := GET_UNIQUE_STRING('CURSOR_NAME_PREFIX_');
EXEC SQL PREPARE curs1 CURSORNAME(autoname) CALL Sum1ToN(?);
EXEC SQL EXECUTE curs1 USING(nMinusOne) INTO(SumOfRemainingItems);
EXEC SQL FETCH curs1;
EXEC SQL CLOSE curs1;
EXEC SQL DROP curs1;
END IF;
SumSoFar := n + SumOfRemainingItems;
END";
Example 6
In EXECDIRECT
CALL foo();
SELECT * FROM table1;
Example 7
Creating a unique name for a synchronization message:
DECLARE Autoname VARCHAR;
DECLARE Sqlstr VARCHAR;
Autoname := get_unique_string('MSG_');
Sqlstr := 'MESSAGE ' + autoname + ' BEGIN';
EXEC SQL EXECDIRECT Sqlstr;
...
Sqlstr := 'MESSAGE ' + autoname + ' FORWARD';
EXEC SQL EXECDIRECT Sqlstr;
Example 8
-- This demonstrates how to use the GET_UNIQUE_STRING() function
-- to generate unique message names from within a recursive stored
-- procedure.
BEGIN
Autoname := GET_UNIQUE_STRING('MSG_');
MsgBeginStr := 'MESSAGE ' + Autoname + ' BEGIN';
MsgEndStr := 'MESSAGE ' + Autoname + ' END';
RETURN;
END";
CALL repeater(3);
The output from this SELECT statement would look similar to the following:
I  BEGINMSG               ENDMSG
-- ---------------------  -------------------
1  MESSAGE MSG_019 BEGIN  MESSAGE MSG_019 END
2  MESSAGE MSG_020 BEGIN  MESSAGE MSG_020 END
3  MESSAGE MSG_021 BEGIN  MESSAGE MSG_021 END
main_result_set_definition ::=
RESULT SET FOR main_replica_table_name
BEGIN
SELECT select_list
FROM master_table_name
[ WHERE search_condition ] ;
[ [DISTINCT] result_set_definition...]
END
result_set_definition ::=
RESULT SET FOR replica_table_name
BEGIN
SELECT select_list
FROM master_table_name
[ WHERE search_condition ] ;
[[ DISTINCT] result_set_definition...]
END
Usage
Publications define the sets of data that can be REFRESHed from the master to the replica
database. A publication is always transactionally consistent, that is, its data has been read
from the master database in one transaction and the data is written to the replica database in
one transaction.
Search conditions of a SELECT clause can contain input arguments of the publication as
parameters. The parameter name must have a colon as a prefix.
Publications can contain data from multiple tables. The tables of the publication can be inde-
pendent or there can be relations between the tables. If there is a relation between tables, the
result sets must be nested. The WHERE clause of the SELECT statement of the inner result
set of the publication must refer to a column of the table of the outer result set.
If the relation between the outer and inner result sets of the publication is an N-to-1
relationship, then the keyword DISTINCT must be used in the result set definition.
The replica_table_name can be different from the master_table_name. The publication defi-
nition provides the mapping between the master and replica tables. (If you have multiple
replicas, all the replicas should use the same name, even if that name is different from the
name used in the master.) Column names in the master and replica tables must be the same.
Note that the initial download is always a full publication, which means that all data con-
tained in the publication is sent to the replica database. Subsequent downloads (refreshes)
for the same publication may be incremental publications, which means that they contain
only the data that has been changed since the prior REFRESH. To enable usage of incremen-
tal publications, SYNCHISTORY has to be set ON for tables included in the publication in
both the master and replica databases. For details, read “ALTER TABLE ... SET SYNCHIS-
TORY” on page B-17 and “DROP PUBLICATION REGISTRATION” on page B-77.
If the optional keywords "OR REPLACE" are used, then if the publication already exists it
will be replaced with the new definition. Since the publication was not dropped and recre-
ated, replicas do not need to re-register, and subsequent REFRESHes from that publication
can be incremental rather than full, depending upon exactly what changes were made to the
publication.
To avoid having a replica refresh from a publication while you are updating that publication,
you may temporarily set the catalog’s sync mode to Maintenance mode. However, using
maintenance mode is not absolutely required when replacing a publication.
If you replace an existing publication, the new definition of the publication will be sent to
each replica the next time that replica requests a refresh. The replica does not need to explic-
itly re-register itself to the publication.
When you replace an existing publication with a new definition, you may change the result-
set definitions. You cannot change the parameters of the publication. The only way to
change the parameters is to drop the publication and create a new one, which also means that
the replicas must re-register and the replicas will get a full refresh rather than an incremen-
tal refresh the next time that they request a refresh.
When you replace an existing publication, the privileges related to that publication are left
unchanged. (You do not need to re-create them.)
The CREATE OR REPLACE PUBLICATION command can be executed in any situation
where it is valid to execute the CREATE PUBLICATION command.
Usage in Master
You define the publication in the master database to enable the replicas to get refreshes
from it.
Usage in Replica
There is no need to define the publications in the replicas. Publication subscription function-
ality depends on the definitions only at the master database. If this command is executed in a
replica, it will store the publication definition to the replica, but the publication definition is
not used for anything. (Note that if a database is both a replica (for a master above it) and a
master (to a replica below it), then of course you may want to create a publication definition
in the database.)
Example
The following sample publication retrieves data from the customer table using the area code
of the customer as search criterion. For each customer, the orders and invoices of the cus-
tomer (1-N relation) as well as the dedicated salesman of the customer (1-1 relation) are also
retrieved.
"CREATE PUBLICATION PUB_CUSTOMERS_BY_AREA
(IN_AREA_CODE VARCHAR)
BEGIN
RESULT SET FOR CUSTOMER
BEGIN
SELECT * FROM CUSTOMER
WHERE AREA_CODE = :IN_AREA_CODE;
RESULT SET FOR CUST_ORDER
BEGIN
SELECT * FROM CUST_ORDER
WHERE CUSTOMER_ID = CUSTOMER.ID;
END
RESULT SET FOR INVOICE
BEGIN
SELECT * FROM INVOICE
WHERE CUSTOMER_ID = CUSTOMER.ID;
END
DISTINCT RESULT SET FOR SALESMAN
BEGIN
SELECT * FROM SALESMAN
WHERE ID = CUSTOMER.SALESMAN_ID;
END
END
END";
Example 2
Developers decided to add a new column C in table T, which is referenced in publication P.
The modification must be made to the master database and all replica databases.
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
CREATE ROLE
CREATE ROLE role_name
Usage
Creates a new user role.
Example
CREATE ROLE GUEST_USERS;
CREATE SCHEMA
CREATE SCHEMA schema_name
Usage
A schema is a collection of database objects, such as tables, views, indexes, events, triggers,
sequences, and stored procedures, for a database user. The default schema name is the user
id; each user has exactly one default schema. Solid’s use of schemas conforms to the SQL
standard.
The schema name is used to qualify a database object name. Database object names are
qualified in all DML statements as:
catalog_name.schema_name.database_object_name
or
user_id.database_object_name
To logically partition a database, users can create a catalog before they create a schema. For
details on creating a catalog, read “CREATE CATALOG” on page B-25. Note that when cre-
ating a new database or converting an old database to a new format, users are prompted for a
default catalog name.
To use schemas, a schema name must be created before creating the database object name
(such as a table name or procedure name). However, a database object name can be created
without a schema name. In such cases, database objects are qualified using user_id only.
You can specify the database object names in a DML statement explicitly by fully qualify-
ing them or implicitly by setting the schema name context using:
SET SCHEMA schema_name
Note that creating a schema does not automatically make that schema the current default
schema. If you have created a new schema and want your subsequent commands to execute
within that schema, then you must also execute the SET SCHEMA statement. For example:
CREATE SCHEMA MySchema;
CREATE TABLE t1; -- not in MySchema
SET SCHEMA MySchema;
CREATE TABLE t2; -- in MySchema
For more information about SET SCHEMA, see the description of the SET SCHEMA com-
mand in the description of the command “SET” on page B-143.
A schema can be dropped from a database using:
DROP SCHEMA schema_name
When dropping a schema name, all objects associated with the schema name must be
dropped prior to dropping the schema.
A schema context can be removed using:
SET SCHEMA USER
Below are the rules for resolving schema names:
■ A fully qualified name (schema_name.database_object_name) does not need any name
resolution, but will be validated.
■ If a schema context is not set using SET SCHEMA, then all database object names are
always resolved using the user id as the schema name.
■ If the database object name cannot be resolved from the schema name, then the data-
base object name is resolved from all existing schema names.
■ If name resolution finds either zero matching or more than one matching database
object name, then a Solid server issues a name resolution conflict error.
Examples
-- Assume the userID is SMITH.
CREATE SCHEMA FINANCE;
CREATE TABLE EMPLOYEE (EMP_ID INTEGER);
SET SCHEMA FINANCE;
CREATE TABLE EMPLOYEE (ID INTEGER);
SELECT ID FROM EMPLOYEE;
-- In this case, the table is qualified to FINANCE.EMPLOYEE
SELECT EMP_ID FROM EMPLOYEE;
-- This will give an error as the context is with FINANCE and
-- table is resolved to FINANCE.EMPLOYEE
--The following are valid schema statements: one with a schema context,
--the other without.
SELECT ID FROM FINANCE.EMPLOYEE;
SELECT EMP_ID FROM SMITH.EMPLOYEE;
--The following statement will resolve to schema SMITH without a schema
--context
SELECT EMP_ID FROM EMPLOYEE;
CREATE SEQUENCE
CREATE [DENSE] SEQUENCE sequence_name
Usage
Sequencer objects are used to generate sequence numbers.
Using a dense sequence guarantees that there are no holes in the sequence numbers. The
sequence number allocation is bound to the current transaction. If the transaction rolls back,
then the sequence number allocations are also rolled back. The drawback of dense sequences
is that the sequence is locked out from other transactions until the current transaction ends.
Using a sparse sequence guarantees uniqueness of the returned values, but they are not
bound to the current transaction. If a transaction allocates a sparse sequence number and
later rolls back, the sequence number is simply lost.
Sequence numbers are 8-byte values. Sequence values can be stored in BIGINT, INT, or
BINARY data types. BIGINT is recommended. Sequence values stored in INT variables lose
information because an 8-byte sequence number will not fit in a 4-byte INT. 8-byte BINARY
values can store a complete sequence number, but BINARY values are not always as conve-
nient to work with as integer data types.
Note
Because a sequence number is an 8-byte number, storing it in a 4-byte integer (in a stored
procedure or in an application program) will discard the highest four bytes. This can lead
to unwanted behavior after the sequence number exceeds 2^31 - 1 (=2147483647).
Below is some sample code and the output that demonstrates this behavior:
COMMIT WORK;
CALL set_seq_to_2G();
x y
2147483647 -2147483648
The value for x is correct, but the value for y is a negative number instead of the correct pos-
itive number.
The advantage of using a sequencer object instead of a separate table is that the sequencer
object is specifically fine-tuned for fast execution and requires less overhead than normal
update statements.
Sequence values can be incremented and used within SQL statements. These constructs can
be used in SQL:
sequence_name.CURRVAL
sequence_name.NEXTVAL
Sequences can also be used inside stored procedures. The current sequence value can be
retrieved using the following stored procedure statement:
EXEC SEQUENCE sequence_name.CURRENT INTO variable
The new sequence value can be retrieved using the following stored procedure statement:
EXEC SEQUENCE sequence_name.NEXT INTO variable
Sequence values can be set with the following stored procedure statement:
EXEC SEQUENCE sequence_name SET VALUE USING variable
Select access rights are required to retrieve the current sequence value. Update access rights
are required to allocate new sequence values. These access rights are granted and revoked in
the same way as table access rights.
Examples
CREATE DENSE SEQUENCE SEQ1;
INSERT INTO ORDER (id) VALUES (order_sequence.NEXTVAL);
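The stored procedure statements described above could be combined as in the following sketch (the procedure name is illustrative; SEQ1 is the sequence created in the first example):

```sql
"CREATE PROCEDURE next_seq_value
RETURNS (new_id BIGINT)
BEGIN
   -- Allocate the next value; with a DENSE sequence the
   -- allocation is bound to the current transaction.
   EXEC SEQUENCE SEQ1.NEXT INTO new_id;
END";
```

The caller needs update access rights on SEQ1 for the allocation to succeed.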
Supported in
This requires Solid SmartFlow.
Usage
This statement creates a bookmark in a master database. Bookmarks represent a user-defined
version of the database. It is a persistent snapshot of a Solid database, which provides a ref-
erence for performing specific synchronization tasks. Bookmarks are used typically to
export data from a master for import into a replica using the EXPORT SUBSCRIPTION
command. Exporting and importing data allows you to create a replica from a master more
efficiently if you have databases larger than 2GB.
To create a bookmark, you must have administrative DBA privileges or
SYS_SYNC_ADMIN_ROLE. There is no limit to the number of bookmarks you can create
in a database. A bookmark is created only in a master database. The system issues an error if
you attempt to create a bookmark in a replica database.
If a table is set up for synchronization history with the ALTER TABLE SET SYNCHIS-
TORY command, a bookmark retains history information for the table. For this reason, use
the DROP SYNC BOOKMARK statement to drop bookmarks when they are no longer
needed. Otherwise, extra history data will increase disk space usage.
When you create a new bookmark, the system associates other attributes with it, such as the
creator of the bookmark, the creation date and time, and a unique bookmark ID. This
metadata is maintained in the system table SYS_SYNC_BOOKMARKS. For a description of
this table, refer to “SYS_SYNC_BOOKMARKS” on page D-19.
Usage in Master
Use the CREATE SYNC BOOKMARK statement to create a bookmark in a master data-
base.
Usage in Replica
The CREATE SYNC BOOKMARK statement cannot be used in a replica database.
Example
CREATE SYNC BOOKMARK BOOKMARK_AFTER_DATALOAD;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
CREATE TABLE
CREATE [ { [GLOBAL] TEMPORARY | TRANSIENT } ] TABLE base_table_name
(column_element [, column_element] ...) [STORE {MEMORY | DISK}]
Usage
Tables are created through the CREATE TABLE statement. The CREATE TABLE state-
ment requires a list of the columns created, the data types, and, if applicable, sizes of values
within each column, in addition to other options, such as creating primary keys.
Constraint definitions are available for both the column and table level. For the column level,
constraints defined with NOT NULL specify that a non-null value is required for a column
insertion. UNIQUE specifies that no two rows are allowed to have the same value. PRIMARY
KEY ensures that no two rows have the same value in the primary key column(s) and that
NULL values are not permitted; PRIMARY KEY is thus equivalent to the combination of
UNIQUE and NOT NULL. The REFERENCES clause
with FOREIGN KEY specifies a table name and a list of columns for a referential integrity
constraint. This means that when data is inserted or updated in this table, the data must
match the values in the referenced tables and columns.
The CHECK keyword restricts the values that can be inserted into a column (for example,
restricting the values with a specific integer range). When defined, the check constraint per-
forms a validation check for any data that is inserted or updated in that column. If the data
violates the constraint, then the modification is prohibited. For example:
CREATE TABLE table1 (salary DECIMAL CHECK (salary >= 0.0));
The check_condition is a boolean expression that specifies the check constraints for the col-
umn. Check constraints are defined with the predicates >, <, =, <>, <=, >= and the key-
words BETWEEN, IN, LIKE (which may contain wildcard characters), and IS [NOT]
NULL. The expression (similar to the syntax of a WHERE clause) can be qualified with
keywords AND and OR. For example:
...CHECK (col1 = 'Y' OR col1 = 'N')...
...CHECK (last_name IS NOT NULL)...
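Combining these predicates, a sketch of a table with several check constraints might look like the following (all table and column names are illustrative):

```sql
CREATE TABLE EMP_CHECKED (
   STATUS CHAR(1) CHECK (STATUS IN ('A', 'I')),
   SALARY DECIMAL CHECK (SALARY BETWEEN 0 AND 1000000),
   LAST_NAME VARCHAR CHECK (LAST_NAME IS NOT NULL)
);
```

An INSERT or UPDATE that violates any of these conditions is prohibited.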
Note that UNIQUE and PRIMARY KEY constraints can be defined at the column level or
the table level. They also automatically create a unique index on the specified columns.
A foreign key is a column or group of columns within a table that refers to, or relates to,
some other table through its values. The FOREIGN KEY is used to specify that the col-
umn(s) listed are foreign keys in this table. The REFERENCES keyword in the statement
specifies the table and those column(s) that are referenced by the foreign key(s). Note that
although column-level constraints can use a REFERENCES clause, only table-level
constraints can use the FOREIGN KEY ... REFERENCES clause.
To use the REFERENCES constraint with FOREIGN keys, a foreign key must always
include enough columns in its definition to uniquely identify a row in the referenced table. A
foreign key must contain the same number and type (data type) of columns as the primary
key in the referenced table as well as be in the same order; however, a foreign key can have
different column names and default values than the primary key.
Note the following rules about constraints:
■ The check_condition cannot contain subqueries, aggregate functions, host variables, or
parameters.
■ Column check constraints can reference only the columns on which they are defined.
■ Table check constraints can reference any columns in the table, provided that those
columns have been defined earlier in the statement.
■ A table may have only one primary key constraint, but may have multiple unique con-
straints.
■ The UNIQUE and PRIMARY KEY constraints in the CREATE TABLE statement can
be used to create indexes. However, if you use the ALTER TABLE statement, keep in
mind that a column cannot be dropped if it is part of a unique or primary key. You may
want to use the CREATE INDEX statement to create an index instead because the index
will have a name and you can drop it. The CREATE INDEX statement also offers some
additional features, such as the ability to create non-unique indexes and to specify if the
indexes are sorted in ascending or descending order.
■ The referential integrity rules for persistent, transient, and temporary table types are dif-
ferent.
■ A Temporary Table may reference another Temporary Table, but may not reference
any other type of table (i.e. Transient or persistent). No other type of table may ref-
erence a Temporary Table.
■ Transient Tables may reference other Transient Tables and may reference persis-
tent tables. They may not reference Temporary Tables. Neither Temporary Tables
nor persistent tables may reference a Transient Table.
In a disk-based table, the maximum size of a row (excluding BLOBs) is approximately 1/3
of the page size. In an in-memory table, the maximum size of a row (including BLOBs) is
approximately the page size. (There is a small amount of overhead used in both disk-based
and in-memory pages, so not quite all of the page is available for user data.) The default
page size is 8kB. For more information about page size, see the description of the solid.ini
configuration parameter "BlockSize" in the Solid Administrator Guide.
The server does not use simple rules to determine BLOB storage, but as a general rule each
BLOB occupies 256 bytes from the page where the row resides, and the rest of the BLOB
goes to separate BLOB pages. If the BLOB is shorter than 256 bytes, then it is stored
entirely in the main disk page, not BLOB pages.
Each row is limited to 1000 columns.
The STORE clause indicates whether the table should be stored in memory or on disk. (This
clause is available only in BoostEngine.) For more information about the STORE clause, see
the In-Memory Database Guide.
In-memory tables may be persistent (normal) tables, Temporary Tables, or Transient Tables.
For a detailed discussion of Temporary Tables and Transient Tables, see the In-Memory
Database Guide.
All Temporary Tables and Transient Tables must be in-memory tables. You do not need to
specify the "STORE MEMORY" clause; Temporary Tables and Transient Tables will auto-
matically be created as in-memory tables if you omit the STORE clause. (For Temporary
Tables and Transient Tables, the solid.ini configuration parameter DefaultStoreIsMemory is
ignored.) You will get an error if you try to explicitly create Temporary Tables or Transient
Tables as disk-based tables, e.g. if you execute a command similar to the following:
CREATE TEMPORARY TABLE t1 (i INT) STORE DISK; -- Wrong!
The keyword "GLOBAL" is included to comply with the SQL:1999 standard for Temporary
Tables. In Solid Database Engine, all Temporary Tables are global, whether or not the GLO-
BAL keyword is used.
Example
CREATE TABLE DEPT (DEPTNO INTEGER NOT NULL, DNAME VARCHAR, PRIMARY
KEY(DEPTNO));
CREATE TABLE DEPT2 (DEPTNO INTEGER NOT NULL PRIMARY KEY, DNAME VARCHAR);
CREATE TABLE DEPT3 (DEPTNO INTEGER NOT NULL UNIQUE, DNAME VARCHAR);
CREATE TABLE DEPT4 (DEPTNO INTEGER NOT NULL, DNAME VARCHAR,
UNIQUE(DEPTNO));
CREATE TABLE EMP (DEPTNO INTEGER, ENAME VARCHAR, FOREIGN KEY (DEPTNO)
REFERENCES DEPT (DEPTNO)) STORE DISK;
CREATE TABLE EMP2 (DEPTNO INTEGER, ENAME VARCHAR, CHECK (ENAME IS NOT
NULL), FOREIGN KEY (DEPTNO) REFERENCES DEPT (DEPTNO)) STORE MEMORY;
CREATE GLOBAL TEMPORARY TABLE T1 (C1 INT);
CREATE TRANSIENT TABLE T2 (C1 INT);
CREATE TRIGGER
CREATE TRIGGER trigger_name ON table_name time_of_operation
triggering_event [REFERENCING column_reference]
BEGIN trigger_body END
where:
trigger_name ::= literal
table_name ::= literal
[, REFERENCING column_reference ]
trigger_body ::=
[declare_statement;...]
[trigger_statement;...]
Note
This appendix is intended to provide a quick reference to using Solid SQL commands. For
details on when and how to use triggers, read “Triggers and Procedures” on page 3-45.
Usage
A trigger provides a mechanism for executing a series of SQL statements when a particular
action (an INSERT, UPDATE, or DELETE) occurs. The "body" of the trigger contains the
SQL statement(s) that the user wants to execute. The body of the trigger is written using
Stored Procedure Language (which is described in more detail in section about the CRE-
ATE PROCEDURE statement).
You may create one or more triggers on a table, with each trigger defined to activate on a
specific INSERT, UPDATE, or DELETE command. When a user modifies data within the
table, the trigger that corresponds to the command is activated.
You can only use inline SQL or stored procedures with triggers. If you use a stored proce-
dure in the trigger, then the procedure must be created with the CREATE PROCEDURE
command. A procedure invoked from a trigger body can invoke other triggers.
To create a trigger, you must be a DBA or owner of the table on which the trigger is being
defined.
Note
Following is a brief summary of the keywords and clauses used in the CREATE TRIGGER
command. For more information on usage, read “Stored Procedures, Events, Triggers, and
Sequences” on page 3-1.
Trigger name
The trigger_name identifies the trigger and can contain up to 254 characters.
Time of operation and triggering event
The time_of_operation (BEFORE or AFTER) and the triggering_event (INSERT, UPDATE,
or DELETE) together specify when the trigger fires:
■ BEFORE INSERT
■ BEFORE UPDATE
■ BEFORE DELETE
■ AFTER INSERT
■ AFTER UPDATE
■ AFTER DELETE
The following example shows trigger trig01 defined BEFORE INSERT ON table1.
"CREATE TRIGGER TRIG01 ON table1
BEFORE INSERT
REFERENCING NEW COL1 AS NEW_COL1
BEGIN
EXEC SQL PREPARE CUR1
INSERT INTO T2 VALUES (?);
EXEC SQL EXECUTE CUR1 USING (NEW_COL1);
EXEC SQL CLOSE CUR1;
EXEC SQL DROP CUR1;
END"
Following are examples (including implications and advantages) of using the BEFORE and
AFTER clauses of the CREATE TRIGGER command for each DML operation:
■ UPDATE operation
The BEFORE clause can verify that modified data follows integrity constraint rules
before processing the UPDATE. If the REFERENCING NEW AS
new_column_identifier clause is used with the BEFORE UPDATE clause, then the
updated values are available to the triggered SQL statements. In the trigger, you can set
the default column values or derived column values before performing an UPDATE.
The AFTER clause can perform operations on newly modified data. For example, after
a branch address update, the sales for the branch can be computed.
If the REFERENCING OLD AS old_column_identifier clause is used with the AFTER
UPDATE clause, then the values that existed prior to the invoking update are accessible
to the triggered SQL statements.
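For example, an AFTER UPDATE trigger along the following lines could record both the
old and the new value of an updated column (the table, column, and cursor names here are
only illustrative):
"CREATE TRIGGER TRIG_AU ON ACCOUNTS
AFTER UPDATE
REFERENCING OLD BALANCE AS OLD_BAL, NEW BALANCE AS NEW_BAL
BEGIN
EXEC SQL PREPARE CUR1 INSERT INTO BALANCE_LOG VALUES (?, ?);
EXEC SQL EXECUTE CUR1 USING (OLD_BAL, NEW_BAL);
EXEC SQL CLOSE CUR1;
EXEC SQL DROP CUR1;
END"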
■ INSERT Operation
The BEFORE clause can verify that new data follows integrity constraint rules before
performing an INSERT. Column values passed as parameters are visible to the triggered
SQL statements but the inserted rows are not. In the trigger, you can set default column
values or derived column values before performing an INSERT.
The AFTER clause can perform operations on newly inserted data. For example, after
insertion of a sales order, the total order can be computed to see if a customer is eligible
for a discount.
Column values are passed as parameters and inserted rows are visible to the triggered
SQL statements.
■ DELETE Operation
The BEFORE clause can perform operations on data about to be deleted. Column values
passed as parameters and the rows that are about to be deleted are visible to the triggered
SQL statements.
The AFTER clause can be used to confirm the deletion of data. Column values passed
as parameters are visible to the triggered SQL statements. Please note that the deleted
rows are visible to the triggering SQL statement.
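For example, a BEFORE DELETE trigger along the following lines could archive a key of
each row about to be deleted (the table and column names here are only illustrative):
"CREATE TRIGGER TRIG_BD ON ORDERS
BEFORE DELETE
REFERENCING OLD ORDER_ID AS OLD_ID
BEGIN
EXEC SQL PREPARE CUR1 INSERT INTO ORDERS_ARCHIVE VALUES (?);
EXEC SQL EXECUTE CUR1 USING (OLD_ID);
EXEC SQL CLOSE CUR1;
EXEC SQL DROP CUR1;
END"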
Note
There may be some performance impact if you try to load data with triggers enabled.
Depending on your business need, you may want to disable the triggers before loading and
enable them after loading. For details, see “ALTER TRIGGER” on page B-18.
Note
The above example can lead to recursive trigger execution, which you should try to avoid.
Table_name
The table_name is the name of the table on which the trigger is created. A Solid server
allows you to drop a table that has dependent triggers defined on it. When you drop a table,
all dependent objects, including triggers, are dropped. Be aware that you may still get
run-time errors. For example, assume you create two tables A and B. If a procedure SP-B
inserts data into table A, and table A is then dropped, a user will receive a run-time error if
table B has a trigger that invokes SP-B.
Trigger_body
The trigger_body contains the statement(s) to be executed when a trigger fires. The
trigger_body definition is identical to the stored procedure definition. Read “CREATE
PROCEDURE” on page B-31 for details on creating a stored procedure body.
Note that it is syntactically valid, although not useful, to create a trigger with an empty body.
A trigger body may also invoke any procedure registered with a Solid server. Solid proce-
dure invocation rules follow standard procedure invocation practices.
You must explicitly check for business logic errors and raise an error.
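For example, a trigger body could call a registered procedure along the following lines (the
procedure name and parameter here are only illustrative):
EXEC SQL PREPARE CUR1 CALL CHECK_LIMITS(?);
EXEC SQL EXECUTE CUR1 USING (NEW_COL1);
EXEC SQL CLOSE CUR1;
EXEC SQL DROP CUR1;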
REFERENCING Clause
This clause is optional when creating a trigger on an INSERT/UPDATE/DELETE
operation. It provides a way to reference the current column identifiers in the case of
INSERT and DELETE operations, and both the old column identifier and the new updated
column identifier by aliasing the column(s) on which an UPDATE operation occurs.
You must specify the OLD or NEW column_identifier to access it. A Solid server does not
provide access to the column_identifier unless you define it using the REFERENCING
subclause.
In the case of a BEFORE trigger, an inserted or updated row is invisible within the trigger
and a deleted row is visible. In the case of an UPDATE, the pre-update values are available
in a BEFORE trigger.
The table below summarizes the statement atomicity in a trigger, indicating whether the row
is visible to the SELECT statement in the trigger body.

Operation   BEFORE trigger                            AFTER trigger
INSERT      Inserted row is not visible               Inserted row is visible
UPDATE      Updated row is not visible                Updated row is visible
            (pre-update values are available)
DELETE      Deleted row is visible                    Deleted row is visible
Note
Triggers are applied row by row. This means that if ten rows are inserted, the trigger is
executed ten times.
■ You cannot define triggers on a view (even if the view is based on a single table).
■ You cannot alter a table that has a trigger defined on it when the dependent columns are
affected.
■ You cannot create a trigger on a system table.
■ You cannot execute triggers that reference dropped or altered objects. To prevent this
error, drop the trigger before dropping or altering the objects that it references.
Errors are raised automatically if the WHENEVER SQLERROR ABORT statement is used
in the trigger body. Otherwise, errors must be checked explicitly within the trigger body
after each procedure call or SQL statement.
For any errors in the user-written business logic as part of the trigger body, users can receive
errors in a procedure variable using the SQL statement:
RETURN SQLERROR error_string
or
RETURN SQLERROR char_variable
The error is returned in the following format:
User error: error_string
If a user does not specify the RETURN SQLERROR statement in the trigger body, then all
trapped SQL errors are raised with a default error_string determined by the system. For
details, see the appendix on Error Codes in the Solid Administrator Guide.
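For example, a trigger body could raise such an error along the following lines (the table,
column, and error message here are only illustrative):
"CREATE TRIGGER TRIG_BI ON EMP
BEFORE INSERT
REFERENCING NEW SALARY AS NEW_SALARY
BEGIN
IF NEW_SALARY < 0 THEN
RETURN SQLERROR 'Salary must not be negative';
END IF
END"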
Note
Triggered SQL statements are a part of the invoking transaction. If the invoking DML
statement fails due to either the trigger or another error that is generated outside the trigger,
all SQL statements within the trigger are rolled back along with the failed invoking DML
command.
Below is an example of using WHENEVER SQLERROR ABORT to make sure that the
trigger catches an error in a stored procedure that it calls. (The table, procedure, and
trigger names here are only illustrative.)
-- If you return an SQLERROR from a stored procedure, the error is
-- displayed. However, if the stored procedure is called from inside
-- a trigger, then the error is not displayed unless you use the
-- SQL statement WHENEVER SQLERROR ABORT.
"CREATE PROCEDURE stproc1
BEGIN
RETURN SQLERROR 'Error in stproc1';
END";
COMMIT WORK;
"CREATE TRIGGER trig1 ON table1 BEFORE INSERT
BEGIN
EXEC SQL WHENEVER SQLERROR ABORT;
EXEC SQL PREPARE cur1 CALL stproc1;
EXEC SQL EXECUTE cur1;
END";
COMMIT WORK;
"CREATE TRIGGER trig2 ON table2 BEFORE INSERT
BEGIN
EXEC SQL PREPARE cur2 CALL stproc1;
EXEC SQL EXECUTE cur2;
END";
COMMIT WORK;
-- This shows that the error is returned if you execute the stored
-- procedure directly.
CALL stproc1();
-- Displays an error because the trigger had WHENEVER SQLERROR ABORT.
INSERT INTO table1 (x) values (1);
-- Does not display an error.
INSERT INTO table2 (x) values (1);
TRIG_SCHEMA(n) returns the nth trigger schema name in the trigger stack. The first
trigger position or offset is zero. The return value is a string.
Example
"CREATE TRIGGER TRIGGER_BI ON TRIGGER_TEST
BEFORE INSERT
REFERENCING NEW BI AS NEW_BI
BEGIN
EXEC SQL PREPARE BI INSERT INTO TRIGGER_OUTPUT VALUES (
'BI', TRIG_NAME(0), TRIG_SCHEMA(0));
EXEC SQL EXECUTE BI;
SET NEW_BI = 'TRIGGER_BI';
END";
CREATE USER
CREATE USER user_name IDENTIFIED BY password
Usage
Creates a new user with a given password.
Example
CREATE USER HOBBES IDENTIFIED BY CALVIN;
CREATE VIEW
CREATE VIEW viewed_table_name [(column_identifier [, column_identifier]... )]
AS query-specification
Usage
A view can be thought of as a virtual table; that is, a table that does not physically exist, but
rather is formed by a query specification against one or more tables.
Example
CREATE VIEW TEST_VIEW
(VIEW_I, VIEW_C, VIEW_ID)
AS SELECT I, C, ID FROM TEST;
DELETE
DELETE FROM table_name [WHERE search_condition]
Usage
The rows that satisfy the search condition are deleted from the specified table. If no
WHERE clause is given, all rows are deleted.
Example
DELETE FROM TEST WHERE ID = 5;
DELETE FROM TEST;
DELETE (positioned)
DELETE FROM table_name WHERE CURRENT OF cursor_name
Usage
The positioned DELETE statement deletes the current row of the cursor.
Example
DELETE FROM TEST WHERE CURRENT OF MY_CURSOR;
DROP CATALOG
DROP CATALOG catalog_name [CASCADE | RESTRICT]
Usage
The DROP CATALOG statement drops the specified catalog from the database.
If you use the RESTRICT keyword, or if you do not specify either RESTRICT or
CASCADE, then you must drop all database objects in the catalog before you drop the
catalog itself.
If you use the CASCADE keyword, then if the catalog contains database objects (such as
tables), those will automatically be dropped. If you use the CASCADE keyword, and if
objects in other catalogs reference an object in the catalog being dropped, then the
references will automatically be resolved by dropping those referencing objects or updating
them to eliminate the reference.
Only the creator of the database or users having SYS_ADMIN_ROLE (i.e. DBA) have
privileges to create or drop a catalog. Even the creator of a catalog cannot drop that catalog
if she loses SYS_ADMIN_ROLE privileges.
Example
DROP CATALOG C1;
DROP CATALOG C2 CASCADE;
DROP CATALOG C3 RESTRICT;
DROP EVENT
DROP EVENT event_name
DROP EVENT [[catalog_name.]schema_name.]event_name
Usage
The DROP EVENT statement removes the specified event from the database.
Example
DROP EVENT EVENT_TEST;
-- Using catalog, schema, and event name
DROP EVENT HR_database.smith_schema.event1;
DROP INDEX
DROP INDEX index_name
DROP INDEX [[catalog_name.]schema_name.]index_name
Usage
The DROP INDEX statement removes the specified index from the database.
Example
DROP INDEX test_index;
-- Using catalog, schema, and index name
DROP INDEX bank_accounts.bankteller.first_name_index;
DROP MASTER
DROP MASTER master_name
Usage
This statement drops the master database definitions from a replica database. After this
operation, the replica cannot synchronize with the master database.
Note
1. Unregistering the replica is the preferred way to quit using a master database. The
DROP MASTER statement is used only when the MESSAGE APPEND UNREGISTER
REPLICA statement is unable to be executed. For details on unregistering a replica, see
“MESSAGE APPEND” on page B-109.
2. Solid Database Engine requires that autocommit be set OFF when using the DROP
MASTER command.
3. If master_name is a reserved word, it must be enclosed in double quotes.
Usage in Master
This statement cannot be used in a master.
Usage in Replica
This statement is used in a replica to drop a master from the replica.
Examples
DROP MASTER "MASTER";
DROP MASTER MY_MASTER;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP PROCEDURE
DROP PROCEDURE procedure_name
DROP PROCEDURE [[catalog_name.]schema_name.]procedure_name
Usage
The DROP PROCEDURE statement removes the specified procedure from the database.
Examples
DROP PROCEDURE PROCTEST;
-- Using catalog, schema, and procedure name
DROP PROCEDURE telecomm_database.technician1.add_new_IP_address;
DROP PUBLICATION
DROP PUBLICATION publication_name
Usage
This statement drops a publication definition in the master database. All subscriptions to the
dropped publication are automatically dropped as well.
Usage in Master
Dropping a publication from the master will remove it and replicas will not be able to
refresh from it.
Usage in Replica
Using this statement in a replica will drop the publication definition from the replica if you
defined a publication on the replica. (However, it is not necessary or useful to define
publications in replica databases, so you should not need to use either CREATE
PUBLICATION or DROP PUBLICATION in a replica.)
Example
DROP PUBLICATION customers_by_area;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP PUBLICATION REGISTRATION
DROP PUBLICATION publication_name REGISTRATION
Supported in
This requires Solid SmartFlow.
Usage
This statement drops a registration for a publication in the replica database. The publication
definition remains on the master database, but a user will be unable to refresh from the
publication. All subscriptions to the registered publication are automatically dropped as
well.
Usage in Master
This statement is not used in a master database.
Usage in Replica
Using this statement in a replica will drop the registration for the publication in the replica.
All subscriptions and their data to this publication are dropped automatically.
Example
DROP PUBLICATION customers_by_area REGISTRATION;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP REPLICA
DROP REPLICA replica_name
Supported in
This requires Solid SmartFlow.
Usage
This statement drops a replica database from the master database. After this operation, the
dropped replica cannot synchronize with the master database.
Note
1. Unregistering the replica is the preferred way to quit using a replica database. The
DROP REPLICA statement is used only when the MESSAGE APPEND UNREGISTER
REPLICA statement is unable to be executed. For details on unregistering a replica, see
“MESSAGE APPEND” on page B-109.
2. Solid Database Engine requires that autocommit be set OFF when using the DROP
REPLICA statement.
3. If replica_name is a reserved word, it should be enclosed in double quotes.
Usage in Master
Use this statement in the master to drop a replica from the master.
Usage in Replica
This statement cannot be used in a replica.
Example
DROP REPLICA salesman_smith;
DROP REPLICA "REPLICA";
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP ROLE
DROP ROLE role_name
Usage
The DROP ROLE statement removes the specified role from the database.
Example
DROP ROLE GUEST_USERS;
DROP SCHEMA
DROP SCHEMA schema_name [CASCADE | RESTRICT]
DROP SCHEMA [catalog_name.]schema_name [CASCADE | RESTRICT]
Usage
The DROP SCHEMA statement drops the specified schema from the database. If you use
the keyword RESTRICT, or if you do not specify either RESTRICT or CASCADE, then all
the objects associated with the specified schema_name must be dropped prior to using this
statement. If you use the keyword CASCADE, then all the database objects (such as tables)
within the specified schema will be dropped automatically.
If you use the CASCADE keyword, and if objects in other schemas reference an object in
the schema being dropped, those references will automatically be resolved by dropping
those referencing objects or updating them to eliminate the reference.
Examples
DROP SCHEMA finance;
DROP SCHEMA finance CASCADE;
DROP SCHEMA finance RESTRICT;
DROP SCHEMA forecasting_db.securities_schema CASCADE;
DROP SEQUENCE
DROP SEQUENCE sequence_name
DROP SEQUENCE [[catalog_name.]schema_name.]sequence_name
Usage
The DROP SEQUENCE statement removes the specified sequence from the database.
Examples
DROP SEQUENCE SEQ1;
-- Using catalog, schema, and sequence name
DROP SEQUENCE bank_db.checking_acct_schema.account_num_seq;
DROP SUBSCRIPTION
In replica:
DROP SUBSCRIPTION publication_name [{(parameter_list) | ALL}]
[COMMITBLOCK number_of_rows] [OPTIMISTIC | PESSIMISTIC]
In master:
DROP SUBSCRIPTION publication_name [{(parameter_list) | ALL}]
REPLICA replica_name
Supported in
This command requires Solid SmartFlow.
Usage
Data that is no longer needed in a replica database can be deleted from the replica database
by dropping the subscription that was used to retrieve the data from the master database.
Note
Solid Database Engine requires that autocommit be set OFF when dropping subscriptions.
By default, the data of a subscription is deleted in one transaction. If the amount of data is
large, for example, tens of thousands of rows, it is recommended that the COMMITBLOCK
be defined. When using the COMMITBLOCK option the data is deleted in more than one
transaction. This ensures good performance for the operation.
In a replica, you can define the DROP SUBSCRIPTION statement to use table-level
pessimistic locking when it is initially executed. If the PESSIMISTIC mode is specified, all
other concurrent access to the table affected is blocked until the drop has completed.
Otherwise, if the optimistic mode is used, the DROP SUBSCRIPTION may fail due to a
concurrency conflict.
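For example, the options could be combined along the following lines (the publication
name, parameter, and commit block size here are only illustrative):
DROP SUBSCRIPTION customers_by_area('south')
COMMITBLOCK 1000 PESSIMISTIC;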
A subscription can also be dropped from the master database. In this case, the replica name
is included in the command. Names of all replicas that have been registered in the master
database can be found in the SYS_SYNC_REPLICAS table. This operation deletes only the
internal information about the subscription for this replica. The actual data in the replica is
kept.
Dropping a subscription from the master is useful when a replica is no longer using the
subscription and the replica has not dropped the subscription itself. Dropping old
subscriptions releases old history data from the database. This history data is automatically
deleted from the master database after dropping the subscription.
If a replica’s subscription has been dropped in the master database, the replica will receive
the full data in the next refresh.
If a subscription is dropped in the replica, DROP SUBSCRIPTION also drops the
publication registration if the last subscription to the publication was dropped. Otherwise,
the registration must be dropped explicitly using the DROP PUBLICATION
REGISTRATION statement or MESSAGE APPEND UNREGISTER PUBLICATION.
When a transaction acquires an exclusive lock to a table, the TableLockWaitTimeout
parameter setting in the [General] section of the solid.ini configuration file
determines the transaction’s wait period until the exclusive or shared lock is released. For
details, see the description of this parameter in the Solid Administrator Guide.
Usage in Master
Use this statement to drop a subscription for a specified replica.
Usage in Replica
Use this statement to drop a subscription from this replica.
Example
Drop subscription from a master database
DROP SUBSCRIPTION customers_by_area('south')
FROM REPLICA salesman_joe
Return Values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP SYNC BOOKMARK
DROP SYNC BOOKMARK bookmark_name
Supported in
This command requires Solid SmartFlow.
Usage
This statement drops a bookmark defined on a master database. To drop a bookmark, you
must have administrative privileges (DBA or SYS_SYNC_ADMIN_ROLE). Bookmarks are
typically used when exporting data to a file. After a file is successfully imported to a replica
from the master database, it is recommended that you drop the bookmark that you used to
export the data to a file.
If a bookmark remains, then all subsequent changes to data on the master, including deletes
and updates, are tracked on the master database for each bookmark to facilitate incremental
refreshes.
If you do not drop bookmarks, the history information takes up disk space and unwanted
disk I/O is incurred, as well, for each bookmark registered in the master database. This may
result in performance degradation.
Note that bookmarks should only be dropped after the exported data has been imported into
all intended replicas and after all those replicas have refreshed at least once from the
publication after the import.
When dropping bookmarks, Solid Database Engine uses the following rules to delete
history records:
■ Finds the oldest REFRESH delivered to any replica on that table
■ Finds the oldest bookmark
■ Determines which is older, the oldest REFRESH or the oldest bookmark
■ Deletes all rows from history up to whichever is older.
Usage in Master
Use the DROP SYNC BOOKMARK statement to drop a bookmark in a master database.
Usage in Replica
The DROP SYNC BOOKMARK statement cannot be used in a replica database.
Example
DROP SYNC BOOKMARK new_database;
DROP SYNC BOOKMARK database_after_dataload;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
DROP TABLE
DROP TABLE base_table_name
DROP TABLE [[catalog_name.]schema_name.]table_name
Note
Objects are usually dropped with drop behavior RESTRICT. There are some exceptions,
however, including:
1) If your table has a synchronization history table, that synchronization history table will be
dropped automatically. (Solid Database Engine 3.7 and later.)
2) If a table has indexes on it, you do not need to drop the indexes first; they will be dropped
automatically when the table is dropped.
Usage
The DROP TABLE statement removes the specified table from the database.
Examples
DROP TABLE table1;
-- Using catalog, schema, and table name
DROP TABLE domains_db.demand_schema.bad_address_table;
DROP TRIGGER
DROP TRIGGER trigger_name
DROP TRIGGER [[catalog_name.]schema_name.]trigger_name
Usage
Drops a trigger defined on a table from the system catalog.
You must be the owner of a table, or a user with DBA authority, to delete a trigger from the
table.
Examples
DROP TRIGGER update_acct_balance;
-- Using schema and trigger name
DROP TRIGGER savings_accounts.update_acct_balance;
-- Using catalog, schema, and trigger name
DROP TRIGGER accounts.savings_accounts.update_acct_balance;
DROP USER
DROP USER user_name
Usage
The DROP USER statement removes the specified user from the database. All the objects
associated with the specified user_name must be dropped prior to using this statement; the
DROP USER statement is not a cascaded operation.
Example
DROP USER HOBBES;
DROP VIEW
DROP VIEW view_name
DROP VIEW [[catalog_name.]schema_name.]view_name
Usage
The DROP VIEW statement removes the specified view from the database.
Examples
DROP VIEW sum_of_acct_balances;
-- Using schema and view name
DROP VIEW savings_accounts.sum_of_acct_balances;
EXPLAIN PLAN FOR
EXPLAIN PLAN FOR sql_statement
Usage
The EXPLAIN PLAN FOR statement shows the selected search plan for the specified SQL
statement.
Example
EXPLAIN PLAN FOR select * from tables;
EXPORT SUBSCRIPTION
EXPORT SUBSCRIPTION publication_name [(publication_parameters)]
TO 'filename'
USING BOOKMARK bookmark_name
[WITH [NO] DATA];
Supported in
This command requires Solid SmartFlow.
Usage
The EXPORT SUBSCRIPTION statement allows you to export a version of the data from a
master database to a file. You can then use the IMPORT statement to import the data in the
file into a replica database.
There are several uses for the EXPORT SUBSCRIPTION statement. Among them are:
■ Creating a large replica database (greater than 2GB) from an existing master.
This procedure requires that you export a subscription with or without data to a file first,
then import the subscription to the replica. For details, read "Creating A Replica By
Exporting A Subscription With Data" or "Creating A Replica By Exporting A Subscription
Without Data" in the Solid SmartFlow Data Synchronization Guide.
■ Exporting specific versions of the data to a replica.
For performance reasons, you may choose to "export" the data rather than use the
MESSAGE APPEND REFRESH to send the data to a replica.
■ Exporting metadata information without the actual row data.
You may want to create a replica that already contains existing data and only needs the
schema and version information associated with a publication.
Unlike the MESSAGE APPEND REFRESH statement where a REFRESH is requested from
a replica, you request an export directly from the master database. The export output is saved
to a user-specified file rather than a Solid Database Engine message.
An export file can contain more than one publication. You can export subscriptions using
either the WITH DATA or WITH NO DATA options:
■ Use the WITH DATA option to create a replica when data is exported to an existing
database that does not contain master data and requires a subset of data. For details, read
"Creating A Replica By Exporting A Subscription With Data" in the Solid SmartFlow
Data Synchronization Guide.
■ Use the WITH NO DATA option to create a replica when a subscription is imported to a
database that already contains the data (for example, using a backup copy of an existing
master). For details, read "Creating A Replica By Exporting A Subscription Without
Data" in the Solid SmartFlow Data Synchronization Guide.
By default, the export file is created using the WITH DATA option and includes all rows. If
there is more than one publication specified, the exported file can have a combination of
"WITH DATA" and "WITH NO DATA."
Usage Rules
Note the following rules when using the EXPORT SUBSCRIPTION statement:
■ Only one file per subscription is allowed for export. You can use the same file name to
include multiple subscriptions in the same file.
■ The file size of an export file is dependent upon the underlying operating system. If the
platform (such as Sun or HP) allows more than 2GB, you can write files greater than
2GB. This means that the replica (recipient) should also have a compatible platform and
file system. Otherwise, the replica is not able to accept the export file. If the operating
systems on both the master and the replica support file sizes greater than 2GB, then
export files greater than 2GB are permitted.
■ An export file can contain more than one subscription. Subscriptions can be exported
using either the WITH DATA or WITH NO DATA options. An exported file with
multiple subscriptions can have a combination of WITH DATA and WITH NO DATA
included.
■ When a subscription is exported to a file using the WITH NO DATA option, only
metadata (that is, schema and version information corresponding to that publication) is
exported to the file.
■ Solid Database Engine requires that autocommit be set OFF when using the EXPORT
SUBSCRIPTION statement.
Usage in Master
Use this statement to request master data for export to a file.
Usage in Replica
This statement is not available in a replica database.
Example
EXPORT SUBSCRIPTION FINANCE_PUBLICATION TO 'FINANCE.EXP' USING BOOKMARK
BOOKMARK_FOR_FINANCE_DB WITH NO DATA;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
GET_PARAM()
get_param('param_name')
Supported in
This command requires Solid SmartFlow.
Usage
The get_param() function retrieves a parameter that was placed on the transaction bulletin
board using the PUT_PARAM() function or with the commands SAVE PROPERTY, SAVE
DEFAULT PROPERTY, and SET SYNC PARAMETER. The parameter that is retrieved is
specific to a catalog and each catalog has a different set of parameters. This function returns
the VARCHAR value of the parameter or NULL, if the parameter does not exist in the
bulletin board.
Because get_param() is a SQL function, it can be used only in a procedure or as part of a
SELECT statement.
The parameter name must be enclosed in single quotes.
Usage in Master
Use the get_param() function in the master for retrieving parameter values.
Usage in Replica
Use the get_param() function in replicas for retrieving parameter values.
Example
SELECT put_param('myparam', '123abc') ;
SELECT get_param('myparam');
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
When executed successfully, get_param() returns the value of the assigned parameter.
See Also
PUT_PARAM
SAVE PROPERTY
SET SYNC PARAMETER
GRANT
GRANT {ALL | grant_privilege [, grant_privilege]...}
ON table_name
TO {PUBLIC | user_name [, user_name]... |
role_name [, role_name]... }
[WITH GRANT OPTION]
Usage
The GRANT statement is used to
1. grant privileges to the specified user or role.
2. grant privileges to the specified user by giving the user the privileges of the specified
role.
When you grant a role to a user, the role may be one that you have created, or it may be a
system-defined role, such as SYS_SYNC_ADMIN_ROLE or SYS_ADMIN_ROLE.
The role SYS_SYNC_ADMIN_ROLE gives the specified user the privileges to execute data
synchronization administration operations, including:
■ dropping or re-executing stopped synchronization messages,
■ dropping a replica database from master database,
■ creating a bookmark.
The role SYS_ADMIN_ROLE is the role given to the creator of the database. This role has
privileges to all tables, indexes, and users, as well as the right to use Solid FlowControl and
Solid Remote Control (teletype).
If you use the optional WITH GRANT OPTION, then the user who receives the privilege
may grant the privilege to other users.
Example
GRANT GUEST_USERS TO CALVIN;
GRANT INSERT, DELETE ON TEST TO GUEST_USERS;
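For example, a privilege could be granted with the grant option along the following lines
(the table and user names here are only illustrative):
GRANT SELECT ON TEST TO CALVIN WITH GRANT OPTION;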
See Also
For more information about user privileges, see also:
■ “REVOKE (Privilege from Role or User)” on page B-136, and
■ “Managing User Privileges and Roles” on page 4-2.
GRANT REFRESH
GRANT {REFRESH | SUBSCRIBE} ON publication_name TO {PUBLIC |
user_name [, user_name]... | role_name [, role_name]...}
Supported in
This command requires Solid SmartFlow.
Usage
This statement grants access rights on a publication to a user or role defined in the master
database.
NOTE: The keywords "SUBSCRIBE" and "REFRESH" are equivalent. However, the
keyword "SUBSCRIBE" is deprecated in the GRANT statement.
Usage in Master
Use this statement to grant user or role access rights to a publication.
Usage in Replica
This statement is not available in a replica database.
Example
GRANT REFRESH ON customers_by_area TO salesman_jones;
GRANT REFRESH ON customers_by_area TO all_salesmen;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid
Administrator Guide.
HINT
--(* vendor (SOLID), product (Engine), option(hint)
--hint *)--
hint::=
[MERGE JOIN |
LOOP JOIN |
JOIN ORDER FIXED |
INTERNAL SORT |
EXTERNAL SORT |
INDEX [REVERSE] table_name.index_name |
PRIMARY KEY [REVERSE] table_name |
FULL SCAN table_name |
[NO] SORT BEFORE GROUP BY]
Following is a description of the keywords and clauses used in the syntax:
Note
In the pseudo comment prefix --(* and suffix *)--, there must be no space between the
parentheses and the asterisks.
Hint
Hints always follow the SELECT, UPDATE, or DELETE keyword to which they apply.
Caution
If you are using hints and you compose a query as a string and then submit that string using
ODBC or JDBC, you MUST ensure that appropriate newline characters are embedded
within that string to mark the end of the comments. Otherwise, you will get a syntax error. If
you don’t include any newlines, then all of the statement after the start of the first comment
will look like a comment. For example, suppose that your code looks like the following:
strcpy(s, "SELECT --(* hint... *)-- col_name FROM table;");
Everything after the first "--" looks like a comment, and therefore your statement looks
incomplete. You must do something like the following:
strcpy(s, "SELECT --(* hint... *)-- \n col_name FROM table;");
Note the embedded newline "\n" character to terminate the comment.
A useful technique for debugging is to print out the strings to make sure that they look cor-
rect. They should look like:
SELECT --(* hint ... *)--
column_name FROM table_name...;
or
SELECT
--(* hint ... *)--
column_name FROM table_name...;
Each subselect requires its own hint; for example, the following are valid uses of hint syntax:
INSERT INTO ... SELECT hint FROM ...
UPDATE hint TABLE ... WHERE column = (SELECT hint ... FROM ...)
DELETE hint TABLE ... WHERE column = (SELECT hint ... FROM ...)
Specify multiple hints in one pseudo comment, separated by commas, as shown in the following examples:
Example 1
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
--MERGE JOIN
--JOIN ORDER FIXED *)--
*
FROM TAB1 A, TAB2 B
WHERE A.INTF = B.INTF;
Example 2
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
--INDEX TAB1.INDEX1, FULL SCAN TAB2 *)--
*
FROM TAB1, TAB2
WHERE TAB1.INTF = TAB2.INTF;
Hint Definition
MERGE JOIN Directs the Optimizer to choose the merge join access plan in a select
query for all tables listed in the FROM clause. The MERGE JOIN
option is used when two tables are approximately equal in size and the
data is distributed evenly. It is faster than a LOOP JOIN when equal
numbers of rows are joined. For joining data, MERGE JOIN supports a
maximum of three tables. The inputs are ordered by the join columns,
and the sorted results are then merged.
You can use this hint when the data is sorted by a join key and the
nested loop join performance is not adequate. The Optimizer selects the
merge join only where there is an equal predicate between tables. Oth-
erwise, the Optimizer selects LOOP JOIN even if the MERGE JOIN
hint is specified.
Note that when data is not sorted before performing the merge opera-
tion, the Solid query executor sorts the data.
When considering the usage of this hint, keep in mind that the merge
join with a sort is more resource intensive than the merge join without
the sort.
LOOP JOIN Directs the Optimizer to pick the nested loop join in a select query for
all tables listed in the FROM clause. By default, the Optimizer does not
pick the nested loop join. Using the loop join when tables are small and
fit in memory may offer greater efficiency than using other join algo-
rithms.
The LOOP JOIN loops through both the inner and outer tables, finding
matches between columns in the two tables. It is typically used when
a small row set is joined with a large one. For better performance,
the joining columns should be indexed.
JOIN ORDER FIXED Specifies that the Optimizer use tables in a join in the order listed in the
FROM clause of the query. This means that the Optimizer does not
attempt to rearrange any join order and does not try to find alternate
access paths to complete the join.
Before using this hint, be sure to run EXPLAIN PLAN to view the
associated plan. This gives you an idea of the access plan used for
executing the query with this join order.
INTERNAL SORT Specifies that the query executor use the internal sorter. Use this hint if
the expected resultset is small (100s of rows as opposed to 1000s of
rows), for example, if you are performing some aggregates, ORDER
BY with small resultsets, or GROUP BY with small resultsets, etc.
This hint avoids the use of the more expensive external sorter.
EXTERNAL SORT Specifies that the query executor use the external sorter. Use this hint
when the expected resultset is large and does not fit in memory, for
example, if the expected resultset has 1000s of rows.
In addition, specify the SORT working directory in the solid.ini
before using the external sort hint. If a working directory is not speci-
fied, you will receive a run-time error. The working directory is speci-
fied in the [sorter] section of the solid.ini configuration file.
For example:
[sorter]
TmpDir_1=c:\soldb\temp1
INDEX [REVERSE] table_name.index_name Forces an index scan for a given
table. In this case, the Optimizer does not proceed to evaluate
whether any other indexes can be used to build the access plan or
whether a table scan is better for the given query.
Before using this hint, it is recommended that you "test" the hint by
examining the EXPLAIN PLAN output to ensure that the plan generated
is optimal for the given query.
The optional keyword REVERSE returns the rows in the reverse order.
In this case, the query executor begins with the last page of the index
and starts returning the rows in the descending (reverse) key order of
the index.
Note that in table_name.index_name, the table_name is a fully qualified
table name that includes the catalog name and schema name.
PRIMARY KEY [REVERSE] table_name Forces a primary key scan for a given
table.
The optional keyword REVERSE returns the rows in the reverse order.
If the primary key is not available for the given table, then you will
receive a run-time error.
FULL SCAN table_name Forces a table scan for a given table. In this
case, the Optimizer does not proceed to evaluate whether any indexes
could be used to build a better access plan for the given query.
Before using this hint, it is recommended that you "test" the hint by
running the EXPLAIN PLAN output to ensure that the plan generated
is optimal for the given query.
[NO] SORT BEFORE GROUP BY Indicates whether the SORT operation occurs
before the resultset is grouped by the GROUP BY columns.
If the grouped items are few (100s of rows), use NO SORT BEFORE. On
the other hand, if the grouped items are many (1000s of rows), use
SORT BEFORE.
Usage
Due to various conditions with the data, user query, and database, the SQL Optimizer is not
always able to choose the best possible execution plan. For more efficiency, you may want to
force a merge join because you, unlike the Optimizer, know that your data is already sorted.
Or sometimes specific predicates in queries cause performance problems that the Optimizer
cannot eliminate. The Optimizer may be using an index that you know is not optimal. In this
case, you may want to force the Optimizer to use one that produces faster results.
Optimizer hints give you better control over response times to meet your perfor-
mance needs. Within a query, you can specify directives or hints to the Optimizer, which it
then uses to determine its query execution plan. Hints are detected through a pseudo com-
ment syntax from SQL2.
You can place one or more hints in a SQL statement as a static string, just after a SELECT,
INSERT, UPDATE, or DELETE keyword. A hint always follows the keyword of the statement
to which it applies.
Table name resolution in optimizer hints is the same as in any table name in a SQL state-
ment. When there is an error in a hint specification, then the whole SQL statement fails with
an error message.
Hints are enabled and disabled using the following configuration parameter in the
solid.ini:
[Hints]
EnableHints = YES | NO
The default is YES.
Example
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- INDEX TAB1.IDX1 *)--
* FROM TAB1 WHERE I > 100
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- INDEX MyCatalog.mySchema.TAB1.IDX1 *)--
* FROM TAB1 WHERE I > 100
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- JOIN ORDER FIXED *)--
* FROM TAB1, TAB2 WHERE TAB1.I >= TAB2.I
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- LOOP JOIN *)--
* FROM TAB1, TAB2 WHERE TAB1.I >= TAB2.I
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- INDEX REVERSE MyCatalog.mySchema.TAB1.IDX1 *)--
* FROM TAB1 WHERE I > 100
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- SORT BEFORE GROUP BY *)--
AVG(I) FROM TAB1 WHERE I > 10 GROUP BY I2
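The grammar above also allows a NO SORT BEFORE GROUP BY hint; a sketch in the same style as the examples above (the table and column names are illustrative):

```sql
SELECT
--(* vendor(SOLID), product(Engine), option(hint)
-- NO SORT BEFORE GROUP BY *)--
AVG(I) FROM TAB1 WHERE I > 10 GROUP BY I2
```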
IMPORT
IMPORT ’file_name’ [COMMITBLOCK number_of_rows]
[{OPTIMISTIC | PESSIMISTIC}]
Usage
This IMPORT command allows you to import data to a replica from a data file created by
the EXPORT SUBSCRIPTION command.
The file_name represents a literal value enclosed in single quotes. The import command can
accept a single filename only. Therefore, all the data to be imported to a replica must be con-
tained in one file.
The COMMITBLOCK option indicates the number of rows processed before the data is
committed. The number_of_rows is an integer value used with the optional COMMITBLOCK
clause to indicate the commit block size. Using the COMMITBLOCK option improves the
performance of the import and releases internal transaction resources at regular intervals.
The optimal value for the COMMITBLOCK size varies depending on various resources at
the server. A good example is a COMMITBLOCK size of 1000 for 10,000 rows. If you do
not specify the COMMITBLOCK option, the IMPORT command uses all rows in the publi-
cation as one transaction. This may work well for a small number of rows, but is problematic
for thousands or millions of rows.
You can define the import to use table-level pessimistic locking when it is initially executed.
If the PESSIMISTIC mode is specified, all other concurrent access to the table affected is
blocked until the import has completed. Otherwise if the optimistic mode is used, the
IMPORT may fail due to a concurrency conflict.
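Combining these options, a hedged sketch (the file name and COMMITBLOCK size are illustrative; run with autocommit switched off, as required for IMPORT):

```sql
-- Commit after every 1,000 imported rows, and block concurrent access
-- to the affected tables until the import completes.
IMPORT 'FINANCE.EXP' COMMITBLOCK 1000 PESSIMISTIC;
COMMIT WORK;
```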
When a transaction acquires an exclusive lock to a table, the TableLockWaitTimeout
parameter setting in the [General] section of the solid.ini configuration file deter-
mines the transaction’s wait period until the exclusive or shared lock is released. For details,
see the description of this parameter in the Solid Administrator Guide.
Imported data is not valid in a replica until it is refreshed once after the import. At the time a
replica makes its first REFRESH, the bookmark used to export the file must exist in the mas-
ter database. If it does not exist, then the REFRESH fails. This means that you are required
to create a new bookmark on the master database, re-export the data, and re-import the data
on the replica.
Usage Rules
Note the following rules when using the IMPORT command:
■ Only one file per subscription is allowed for import.
■ The file size of an export file depends on the underlying operating system. If both the
operating system on the master and on the replica (such as SUN or HP platforms) support
file sizes greater than 2GB, then export files greater than 2GB are permitted. The replica
(recipient) must have a compatible platform and file system; otherwise, it is not able to
accept the export file.
■ Back up replica databases before using the IMPORT command. If a COMMITBLOCK
option is used and fails, then the imported data is only partially committed; you will
need to restore the replica with a backup file.
■ Solid Database Engine requires that autocommit be set OFF when using the IMPORT
command.
Usage in Master
This statement is not available in a master database.
Usage in Replica
Use this statement in a replica to import data from a data file created by the EXPORT SUB-
SCRIPTION statement in a master database.
Example
IMPORT 'FINANCE.EXP';
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
INSERT
INSERT INTO table_name [(column_identifier [, column_identifier]...)]
VALUES (insert_value[, insert_value]... )
Usage
There are several variations of the INSERT statement. In the simplest instance, a value is
provided for each column of the new row in the order specified at the time the table was
defined (or altered). In the preferable form of the INSERT statement, the columns are speci-
fied as part of the statement; they do not need to be in any specific order as long as the
order of the column list matches the order of the value list.
Example
INSERT INTO TEST (C, ID) VALUES (0.22, 5);
INSERT INTO TEST VALUES (0.35, 9);
Usage
The query specification creates a temporary virtual table. Using the INSERT statement, the
rows of this virtual table are inserted into the specified table (the number and data types of
the columns must match).
Example
INSERT INTO TEST (C, ID) SELECT A, B FROM INPUT_TO_TEST;
LOCK TABLE
LOCK lock-definition [lock-definition] [wait-option]
lock-definition ::= TABLE tablename [,tablename]
IN { SHARED | [LONG] EXCLUSIVE } MODE
wait-option ::= NOWAIT | WAIT <#seconds>
Tablename: The name of the table to lock. You can also specify the catalog and schema of
the table. You may only lock tables, not views.
SHARED: Shared mode allows others to perform read and write operations on the table,
but DDL operations are not allowed. Shared mode also prohibits others from issuing an
EXCLUSIVE lock on the same table.
EXCLUSIVE: If a table uses pessimistic locking, then an exclusive lock prevents any other
user from accessing the table in any way (e.g. reading data, acquiring a lock, etc.). If the
table uses optimistic locking, then an exclusive lock allows other users to perform SELECTs
on the locked table but prohibits any other activity (like acquiring shared locks) on that table.
LONG: By default, locks are released at the end of a transaction. If the LONG option is
specified, then the lock is not released when the locking transaction commits. (Note: If the
locking transaction aborts or is rolled back, then all locks, including LONG locks, are
released.) The user must explicitly unlock long locks using the UNLOCK command
described later in this document. LONG duration locks are allowed only in EXCLUSIVE
mode. LONG shared locks are not supported.
NOWAIT: Specifies that control is returned to you immediately even if a specified table is
locked by another user. If the requested lock is not granted, an error is returned.
WAIT: Specifies a timeout, in seconds, for how long the system should wait to get the
requested locks. If the requested locks are not granted within that time, an error is returned.
Usage
The LOCK and UNLOCK commands allow you to manually lock and unlock tables. Put-
ting a lock on a table (or any other object) limits access to that object. The LOCK TABLE
command has an option that allows you to extend the duration of a manual exclusive lock
past the end of the current transaction; in other words, you can keep the table exclusively
locked through a series of multiple transactions.
Manual locking is not needed very often. The server’s automatic locking is usually suffi-
cient. For a detailed discussion of locking in general, and the server’s automatic locking in
particular, see “Concurrency Control and Locking” on page 4-16.
Explicit locking of tables is primarily intended to help database administrators execute main-
tenance operations in a database without being disturbed by other users. (For more informa-
tion about Maintenance Mode, see the chapter titled "Updating And Maintaining The
Schema Of A Distributed System" in the Solid SmartFlow Data Synchronization Guide.)
However, tables can be locked manually even if you are not using “Maintenance Mode”.
Table locks can be either SHARED or EXCLUSIVE.
An EXCLUSIVE lock on a table prohibits any other user or connection from changing the
table or any records within the table. If you have an exclusive lock on a table, then other
users/connections cannot do any of the following on that table until your exclusive lock is
released:
■ INSERT, UPDATE, DELETE
■ ALTER TABLE
■ DROP TABLE
■ LOCK TABLE (in shared or exclusive mode)
Furthermore, if the table uses pessimistic locking, then an exclusive lock also prevents other
users/connections from doing the following:
■ SELECT
If the table uses pessimistic locking, no other user can SELECT from the table when it has
an exclusive lock. Note, however, that if the table uses optimistic locking, an exclusive lock
does NOT prevent other users from SELECTing records from that table. (Most database
servers on the market behave differently -- i.e. they do not allow SELECTs on a table that is
locked exclusively -- because most other database servers use only pessimistic locking.)
A SHARED lock is less restrictive than an exclusive lock. If you have a shared lock on a
table, then other users/connections cannot do any of the following on that table until your
shared lock is released:
■ ALTER TABLE
■ DROP TABLE
■ LOCK TABLE (in exclusive mode)
If you use a shared lock on a table, other users/connections may insert, update, delete, and of
course select from the table.
Note that shared locks on a table are somewhat different from shared locks on a record. If
you have a shared lock on a record, then no other user may change data in the record. How-
ever, if you have a shared lock on a table, then other users may still change data in that table.
More than one user at a time may have shared locks on a table. Therefore, if you have a
shared lock on the table, other users may also get shared locks on the table. However, no
user may get an exclusive lock on a table when another user has a shared lock (or exclusive
lock) on that table.
The LOCK command takes effect at the time it is executed. If you do not use the LONG
option, then the lock will be released at the end of the transaction. If you use the LONG
option, the table lock lasts until you explicitly unlock the table. (The table lock will also be
released if you roll back the transaction in which the lock was placed. In other words,
LONG locks only persist across transactions if you commit the transaction in which you
placed the LONG lock.)
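As a sketch of the LONG option described above (the table name is illustrative, and the unlock statement is shown in a hypothetical form; see the UNLOCK TABLE section for the exact syntax):

```sql
LOCK TABLE emp IN LONG EXCLUSIVE MODE;
COMMIT WORK;   -- the LONG lock survives this commit
-- ... perform maintenance across several transactions ...
UNLOCK TABLE emp;   -- hypothetical form; see UNLOCK TABLE
COMMIT WORK;
```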
The LOCK/UNLOCK TABLE commands apply only to tables. There is no command to
manually lock or unlock individual records within a table.
Privileges required: To use the LOCK TABLE command to issue a lock on a table, you must
have insert, delete or update privileges on that table. Note that there is no GRANT com-
mand to give other users LOCK and UNLOCK privileges on a table.
Note that in one LOCK command you can lock more than one table and specify different
modes. If the lock command fails, then none of the tables are locked. If the lock command
was successful, then all requested locks are granted.
If the user does not specify a wait option (NOWAIT or WAIT seconds), then the default wait
time is used. That is the same as the deadlock detection timeout.
Examples
LOCK TABLE emp IN SHARED MODE;
LOCK TABLE emp IN SHARED MODE TABLE dept IN EXCLUSIVE MODE;
LOCK TABLE emp,dept IN SHARED MODE NOWAIT;
LOCK TABLE emp IN LONG EXCLUSIVE MODE;
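The WAIT option, not shown above, can be sketched as follows (the timeout value is illustrative):

```sql
-- Wait up to 30 seconds for the locks; if they are not granted
-- within that time, an error is returned.
LOCK TABLE emp, dept IN EXCLUSIVE MODE WAIT 30;
```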
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
See Also
UNLOCK TABLE
SET SYNC MODE { MAINTENANCE | NORMAL }
MESSAGE APPEND
MESSAGE unique_message_name APPEND
[
PROPAGATE TRANSACTIONS
[ { IGNORE_ERRORS | LOG_ERRORS | FAIL_ERRORS } ]
[WHERE { property_name {=|<|<=|>|>=|<>} 'value_string' | ALL } ]
]
[ { REFRESH | SUBSCRIBE } publication_name[(publication_parameters)] [FULL]]
[REGISTER PUBLICATION publication_name]
[UNREGISTER PUBLICATION publication_name]
[REGISTER REPLICA]
[UNREGISTER REPLICA]
[SYNC_CONFIG ('sync_config_arg')]
Supported in
This command requires Solid SmartFlow.
Usage
Once a message has been created in the replica database with the MESSAGE BEGIN com-
mand, you can append the following tasks to that message:
■ Propagate transactions to the master database
■ Refresh a publication from the master database
■ Register or unregister a publication for replica subscription
■ Register or unregister a replica database to the master
■ Download master user information (list of user names and passwords) from the master
database
The PROPAGATE TRANSACTIONS task may contain a WHERE clause that is used to
propagate only those transactions where a transaction property defined with the SAVE
PROPERTY statement meets specific criteria. Using the keyword ALL overrides any default
propagation condition set earlier with the statement SAVE DEFAULT PROPAGATE PROPERTY.
Note
A single-master environment does not require the use of catalogs. By default, when catalogs
are not used, registration of the replica occurs automatically with a base catalog that maps to
a master base catalog, whose name is given when the database is created. Therefore, no
backward compatibility issues exist for versions prior to SynchroNet 2.0, which supported
only the single-master architecture.
Note
A single replica node may have multiple masters, but only if the node has a separate replica
catalog for each master catalog. A single replica catalog may not have multiple masters.
The UNREGISTER REPLICA option removes an existing replica database from the list of
replicas in the master database.
The REFRESH task may contain arguments to the publication (if used in the publication).
The parameters must be literals; you cannot use stored procedure variables, for example.
Using keyword FULL with REFRESH forces fetching full data to the replica. The publica-
tion requested must be registered. Note that the keywords REFRESH and SUBSCRIBE are
synonymous; however, SUBSCRIBE is deprecated in the MESSAGE APPEND statement.
The REGISTER PUBLICATION task registers a publication in the replica so that the replica
can refresh from it. Users can only refresh from publications that are registered. In this way,
publication parameters are validated, preventing users from accidentally subscribing to
unwanted subscriptions or requesting ad hoc subscriptions. All tables that the registered pub-
lication refers to must exist in the replica.
The UNREGISTER PUBLICATION option removes an existing registered publication from
the list of registered publications in the master database.
The input argument of the SYNC_CONFIG task defines the search pattern of the user names
that are returned from the master database to the replica. SQL wildcards (such as the sym-
bol %) that follow the conventions of the LIKE keyword are used in this argument with a
match_string, which is a character string. For details on using the LIKE keyword, see “Wild-
card characters” on page B-184.
Usage in Master
The MESSAGE APPEND statement cannot be used in a master database.
Usage in Replica
Use MESSAGE APPEND in replicas to append tasks to a message that has been created
with MESSAGE BEGIN.
Example
MESSAGE MyMsg0001 APPEND PROPAGATE TRANSACTIONS;
MESSAGE MyMsg0001 APPEND REFRESH PUB_CUSTOMERS_BY_AREA('SOUTH');
MESSAGE MyMsg0001 APPEND REGISTER REPLICA;
MESSAGE MyMsg0001 APPEND SYNC_CONFIG ('S%');
MESSAGE MyMsg0001 APPEND REGISTER PUBLICATION publ_customer;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
MESSAGE BEGIN
MESSAGE unique_message_name BEGIN [TO master_node_name]
Supported in
This command requires Solid SmartFlow.
Usage
Each message that is sent from a replica to the master database must explicitly begin with
the MESSAGE BEGIN statement.
Each message must have a name that is unique within a replica. To construct unique mes-
sage names, you may use the GET_UNIQUE_STRING() function, which is documented on
page 177 in Appendix B. After a message has been processed, that message name may be
reused. However, if the message fails for any reason, the master will keep a copy of the
failed message, and if you try to reuse the message name before you delete the failed mes-
sage, then of course the name will not be unique. You may want to use a new message name
even in situations where you might be able to re-use an existing name. Note that it is possi-
ble for two replicas of the same master to have the same message name.
When registering a replica to a master catalog, other than the master system catalog, you
must provide the master node name in the MESSAGE BEGIN command. The master node
name is used to resolve the correct catalog at the master database. Note that specifying a
master node name only applies when using the REGISTER REPLICA statement. Later mes-
sages are automatically sent to the correct master node.
If you use the optional "TO master_node_name" clause, then you must put double quotes
around the master_node_name.
Note
When working with messages, be sure the autocommit mode is always switched off.
Usage in Master
The MESSAGE BEGIN statement cannot be used in a master database.
Usage in Replica
Use MESSAGE BEGIN to start building a new message in a replica.
Example
MESSAGE MyMsg0001 BEGIN ;
MESSAGE MyMsg0002 BEGIN TO "BerkeleyMaster";
MESSAGE DELETE
MESSAGE message_name [FROM REPLICA replica_name] DELETE
Supported in
This command requires Solid SmartFlow.
Usage
If the execution of a message is terminated because of an error, this command lets you
explicitly delete the message from the database to recover from the error. Note that the cur-
rent transaction and all subsequent transactions that were propagated to the master in this
message are permanently lost when the message is deleted. To use this statement, you must
have SYS_SYNC_ADMIN_ROLE access.
Note
If the message needs to be deleted from the master database, then the node name of the rep-
lica database that forwarded the message needs to also be provided.
When deleting messages, be sure the autocommit mode is always switched off.
Usage in Master
Use this statement in the master to delete a failed message. Be sure to specify the replica in
the syntax: 'FROM REPLICA replica_name'.
Usage in Replica
This statement is used in the replica to delete a message.
Example
MESSAGE MyMsg0000 DELETE ;
MESSAGE MyMsg0001 FROM REPLICA bills_laptop DELETE ;
MESSAGE FROM REPLICA DELETE CURRENT TRANSACTION
MESSAGE message_name FROM REPLICA replica_name DELETE CURRENT TRANSACTION
Supported in
This command requires Solid SmartFlow.
Usage
This statement deletes the current transaction from a given message in the master database.
To use this statement requires SYS_SYNC_ADMIN_ROLE privilege.
The execution of a message stops if a DBMS-level error, such as a duplicate insert, occurs
during the execution. This kind of error can be resolved by deleting the offending transac-
tion from the message. Once the transaction is deleted with MESSAGE FROM REPLICA
DELETE CURRENT TRANSACTION, an administrator can proceed with the synchronization process.
When deleting the current transaction, be sure the autocommit mode is always switched off.
This statement is used only when the message is in an error state; if used otherwise, an error
message is returned. This statement is a transactional operation and must be committed
before message execution may continue. To restart the message after the deletion is commit-
ted, use the following statement:
MESSAGE msgname FROM REPLICA replicaname EXECUTE
Note that the deletion is completed first before the MESSAGE FROM REPLICA EXE-
CUTE statement is executed; that is, the statement starts the message from replica, but waits
until the active statement is completed before actually executing the message. Thus the state-
ment performs asynchronous message execution.
Caution
Delete a transaction only as a last resort; normally transactions should be written to prevent
unresolved conflicts in a master database. MESSAGE FROM REPLICA DELETE CUR-
RENT TRANSACTION is intended for use in the development phase, when unresolved con-
flicts occur more frequently.
Use caution when deleting a transaction. Because subsequent transactions may be depen-
dent on the results of a deleted transaction, deleting one risks causing further transaction errors.
Usage in Master
Use this statement in the master to delete a failed transaction.
Usage in Replica
This statement is not available in the replica.
Example
MESSAGE somefailures FROM REPLICA laptop1 DELETE
CURRENT TRANSACTION;
COMMIT WORK;
MESSAGE somefailures FROM REPLICA laptop1 EXECUTE;
COMMIT WORK;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
MESSAGE END
MESSAGE unique_message_name END
Supported in
This command requires Solid SmartFlow.
Usage
A message must be "wrapped up" and made persistent before it can be sent to the master
database. Ending the message with the MESSAGE END command closes the message, i.e.
you can no longer append anything to it. Committing the transaction makes the message per-
sistent.
Note
When working with messages, be sure the autocommit mode is switched off.
Usage in Master
The MESSAGE END statement cannot be used in a master database.
Usage in Replica
Use the MESSAGE END statement in replicas to end a message.
Example
MESSAGE MyMsg001 END ;
COMMIT WORK ;
The following example shows a complete message that propagates transactions and refreshes
from publication PUB_CUSTOMERS_BY_AREA.
MESSAGE MyMsg001 BEGIN ;
MESSAGE MyMsg001 APPEND PROPAGATE TRANSACTIONS;
MESSAGE MyMsg001 APPEND REFRESH PUB_CUSTOMERS_BY_AREA('SOUTH');
MESSAGE MyMsg001 END ;
COMMIT WORK ;
MESSAGE EXECUTE
MESSAGE message_name EXECUTE [{OPTIMISTIC | PESSIMISTIC}]
Supported in
This command requires Solid SmartFlow.
Usage
This statement allows a message to be re-executed if the execution of a reply message fails
in a replica. This can occur, for example, if the database server detects a concurrency con-
flict between a REFRESH and an ongoing user transaction.
If you anticipate frequent concurrency conflicts, or if re-execution of the message fails
because of a concurrency conflict, you can execute the message using the PESSIMISTIC
option for table-level locking; this ensures that the message execution succeeds.
In this mode, all other concurrent access to the affected table is blocked until the synchroni-
zation message has completed. Otherwise, if the optimistic mode is used, the MESSAGE
EXECUTE statement may fail due to a concurrency conflict.
When a transaction acquires an exclusive lock to a table, the TableLockWaitTimeout param-
eter setting in the General section of the solid.ini configuration file determines the transac-
tion’s wait period until the exclusive or shared lock is released. For details, see the
description of this parameter in the Solid Administrator Guide.
Note
When working with messages, be sure the autocommit mode is always switched off.
Usage in Master
This statement is not available in the master. See “MESSAGE FROM REPLICA EXE-
CUTE” on page B-127.
Usage in Replica
Use this statement in the replica to re-execute a failed message execution in the replica.
Result set
MESSAGE EXECUTE returns a result set. The returned result set is the same as with com-
mand MESSAGE GET REPLY.
Example
MESSAGE MyMsg0002 EXECUTE;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
MESSAGE FORWARD
MESSAGE unique_message_name FORWARD
[TO {'connect_string' | node_name | "node_name"} ]
[TIMEOUT {number_of_seconds | FOREVER} ]
[COMMITBLOCK block_size_in_rows]
[{OPTIMISTIC | PESSIMISTIC}]
Supported in
This command requires Solid SmartFlow.
Usage
After a message has been completed and made persistent with the MESSAGE END state-
ment, it can be sent to the master database using the MESSAGE FORWARD statement.
It is only necessary to specify the recipient of the message with keyword TO when a new
replica is being registered with the master database; that is, when the first message from a
replica to the master server is sent.
The connect_string is a valid connect string, such as:
tcp [host_computer_name] server_port_number
For more information about connect strings, read the section of the Solid Administrator
Guide titled "Communication Protocols".
In the context of a MESSAGE FORWARD command, a connect string must be delimited in
single quotes.
The node_name (without quotes) is a valid alphanumeric sequence that is not a reserved
word. The "node_name" (in double quote marks) is used if the node name is a reserved
word; in this case, the double quotes ensure that the node name is treated as delimited identi-
fier. For example, since the word "master" is a reserved word, the word is placed in double
quotes when it is used as a node name:
-- On master
SET SYNC NODE "master";
--On replica
MESSAGE refresh_severe_bugs2 FORWARD TO "master" TIMEOUT FOREVER;
Each sent message has a reply message. The TIMEOUT property defines how long the rep-
lica server will wait for the reply message.
If a TIMEOUT is not defined, the message is forwarded to the master and the replica does
not fetch the reply. In this case the reply can be retrieved with a separate MESSAGE GET
REPLY call.
If the reply of the sent message contains REFRESHes of large publications, the size of the
REFRESH’s commit block, that is, the number of rows that are committed in one transac-
tion, can be defined using the COMMITBLOCK property. This has a positive impact on the
performance of the replica database. It is recommended that there are no on-line users
accessing the database when the COMMITBLOCK property is being used.
As part of the MESSAGE FORWARD operation, you can specify table-level pessimistic
locking when the reply message is initially executed in the replica. If the PESSIMISTIC
mode is specified, all other concurrent access to the table affected is blocked until the syn-
chronization message has completed. Otherwise if the optimistic mode is used, the MES-
SAGE FORWARD operation may fail due to a concurrency conflict.
When a transaction acquires an exclusive lock to a table, the TableLockWaitTimeout
parameter setting in the General section of the solid.ini configuration file determines
the transaction’s wait period until the exclusive or shared lock is released. For details, see the
description of this parameter in the Solid Administrator Guide.
If a forwarded message fails in delivery due to a communication error, you must explicitly
use the MESSAGE FORWARD to resend the message. Once re-sent, MESSAGE FOR-
WARD re-executes the message.
Note
When working with messages, be sure that autocommit mode is switched off.
Example
Forward message, wait for the reply for 60 seconds
MESSAGE MyMsg001 FORWARD TIMEOUT 60 ;
Forward message to a master server that runs on the "mastermachine.acme.com" machine.
Do not wait for the reply message.
MESSAGE MyRegistrationMsg FORWARD TO
’tcp mastermachine.acme.com 1313’;
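Forward a message without a TIMEOUT and fetch the reply later with a separate statement
(the message name below is illustrative):
MESSAGE MyMsg003 FORWARD;
MESSAGE MyMsg003 GET REPLY TIMEOUT 60;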
Forward message, wait for the reply for 5 minutes (300 seconds), and commit the data of the
refreshed publications to the replica database in transactions of at most 1000 rows.
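A statement matching this description (the message name is illustrative) would be:
MESSAGE MyMsg002 FORWARD TIMEOUT 300 COMMITBLOCK 1000;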
Result set
If the MESSAGE FORWARD also retrieves the reply, the statement returns a result set. The
result set returned is the same as the one returned with the statement MESSAGE GET
REPLY. See “MESSAGE GET REPLY” on page B-129.
MESSAGE FROM REPLICA EXECUTE
MESSAGE unique_message_name FROM REPLICA replica_name EXECUTE
Supported in
This command requires Solid SmartFlow.
Usage
The execution of a message stops if a DBMS-level error, such as a duplicate insert, occurs
during the execution, or if an error is raised from a procedure by putting the
SYS_ROLLBACK parameter on the transaction's bulletin board. This kind of error is
recoverable by fixing the cause of the error, for example, by removing the duplicate row
from the database, and then executing the message again.
When the transaction in error is deleted with MESSAGE DELETE CURRENT TRANSAC-
TION, the deletion is completed first before the MESSAGE FROM REPLICA EXECUTE
command is executed; that is, the statement starts the message from replica, but waits until
the active statement is completed before actually executing the message. Thus the command
performs asynchronous message execution.
Note
When working with messages, be sure that autocommit mode is switched off.
Usage in Master
Use this command in the master to execute a failed message.
Usage in Replica
This command is not available in the replica. See “MESSAGE EXECUTE” on page B-120
for an alternative.
Example
MESSAGE MyMsg0002 FROM REPLICA bills_laptop EXECUTE;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
MESSAGE GET REPLY
MESSAGE unique_message_name GET REPLY
[NO EXECUTE]
[TIMEOUT {number_of_seconds | FOREVER} ]
[COMMITBLOCK block_size_in_rows]
[{OPTIMISTIC | PESSIMISTIC}]
Supported in
This command requires Solid SmartFlow.
Usage
If the reply to a sent message has not been received by the MESSAGE FORWARD state-
ment, it can be requested separately from the master database by using the MESSAGE GET
REPLY statement in the replica database.
If the reply message contains REFRESHes of large publications, the size of the REFRESH's
commit block, that is, the number of rows that are committed in one transaction, can be
limited using the COMMITBLOCK property. This has a positive impact on the performance of
the replica database. It is recommended that there are no on-line users in the database when
the COMMITBLOCK property is in use.
If the execution of a reply message with the COMMITBLOCK property fails in the replica
database, it cannot be re-executed. The failed message must be deleted from the replica data-
base and refreshed from the master database.
If NO EXECUTE is specified, the reply message is only read from the master when it is
available and stored for later execution. Otherwise, the reply message is downloaded from
the master and executed in the same statement. Using NO EXECUTE reduces bottlenecks on
communication lines by allowing reply messages to be executed later in separate
transactions.
You can define the reply message to use table-level pessimistic locking when it is initially executed. If
the PESSIMISTIC mode is specified, all other concurrent access to the table affected is
blocked until the synchronization message has completed. Otherwise if the optimistic mode
is used, the MESSAGE GET REPLY operation may fail due to a concurrency conflict.
When a transaction acquires an exclusive lock to a table, the TableLockWaitTimeout
parameter setting in the General section of the solid.ini configuration file determines
the transaction’s wait period until the exclusive or shared lock is released. For details, see the
description of this parameter in the Solid Administrator Guide.
Note
When working with messages, be sure that autocommit mode is switched off.
Usage in Master
MESSAGE GET REPLY cannot be used in the master.
Usage in Replica
Use MESSAGE GET REPLY in the replica to fetch a reply of a message from the master.
Example
MESSAGE MyMessage001 GET REPLY TIMEOUT 120
MESSAGE MyMessage001 GET REPLY TIMEOUT 300 COMMITBLOCK 1000
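The table-level locking mode described above can be added as an option; assuming the
locking keyword follows the other options, as in the MESSAGE FORWARD syntax:
MESSAGE MyMessage001 GET REPLY TIMEOUT 300 COMMITBLOCK 1000 PESSIMISTIC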
Result set
MESSAGE GET REPLY returns a result set table. The columns of the result set are as fol-
lows:
POST EVENT
The POST EVENT command is allowed only inside stored procedures. See the CREATE
PROCEDURE statement for more details.
PUT_PARAM()
put_param(param_name,param_value)
Supported in
This command requires Solid SmartFlow.
Usage
With Solid Intelligent Transaction, SQL statements or procedures of a transaction can com-
municate with each other by passing parameters to each other using a parameter bulletin
board. The bulletin board is a storage of parameters that is visible to all statements of a
transaction.
Parameters are specific to a catalog. Different replica and master catalogs have their own set
of bulletin board parameters that are not visible to each other.
Use the put_param() function to place a parameter on the bulletin board. If the parameter
already exists, the new value overwrites the previous one.
These parameters are not propagated to the master. You can use the SAVE PROPERTY
statement to propagate properties from the replica to the master. For details, read “SAVE
PROPERTY” on page B-140.
Because put_param() is a SQL function, it can be used only within a procedure or in a SQL
statement.
Both the parameter name and value are of type VARCHAR.
Usage in Master
The put_param() function can be used in the master for setting parameters on the parameter
bulletin board of the current transaction.
Usage in Replica
The put_param() function can be used in replicas for setting parameters on the parameter
bulletin board of the current transaction.
Example
Select put_param('myparam', '123abc') ;
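Because a new value overwrites the previous one, assigning the same parameter twice
leaves only the latest value (the parameter name and values are illustrative):
SELECT put_param('myparam', '123abc');
SELECT put_param('myparam', '456def');
SELECT get_param('myparam'); -- returns '456def'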
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
When executed successfully, put_param() returns the new value of the assigned parameter.
See Also
GET_PARAM
SAVE PROPERTY
SET SYNC PARAMETER
REGISTER EVENT
Registering an event tells the server that you would like to be notified of all future occur-
rences of the event, even if you are not yet waiting for it. By separating the "register" and
"wait" commands, you can start queuing events immediately, while waiting until later to
actually start processing them.
Note that you do not need to register for every event before waiting for it. When you wait on
an event, you will be registered implicitly for that event if you did not already explicitly reg-
ister for it. Thus you only need to explicitly register events if you want them to start being
queued now but you don’t want to start WAITing for them until later.
The REGISTER EVENT command is allowed only inside stored procedures. See the CRE-
ATE PROCEDURE statement and the CREATE EVENT statement for more details.
Usage
The REVOKE statement is used to take a role away from users.
Example
REVOKE GUEST_USERS FROM HOBBES;
Note
Solid Database Engine does not support the keywords CASCADE and RESTRICT in
REVOKE statements.
Usage
The REVOKE statement is used to take privileges away from users and roles.
Example
REVOKE INSERT ON TEST FROM GUEST_USERS;
See Also
For more information about user privileges, see also:
■ “GRANT” on page B-92, and
■ “Managing User Privileges and Roles” on page 4-2.
REVOKE REFRESH
REVOKE {REFRESH | SUBSCRIBE} ON publication_name
FROM {PUBLIC | user_name [, user_name] ... | role_name [, role_name] ...}
Supported in
This command requires Solid SmartFlow.
Usage
This statement revokes access rights to a publication from a user or role defined in the mas-
ter database.
NOTE: The keywords "REFRESH" and "SUBSCRIBE" are synonymous. However, "SUB-
SCRIBE" is deprecated in the REVOKE statement.
Usage in Master
Use this statement to revoke access rights to a publication from a user or role.
Usage in Replica
This statement is not available in a replica database.
Example
REVOKE REFRESH ON customers_by_area FROM joe_smith;
REVOKE REFRESH ON customers_by_area FROM all_salesmen;
Return values
Error Code Description
13137 Illegal grant/revoke mode
13048 No grant option privilege
25010 Publication name not found
ROLLBACK WORK
ROLLBACK WORK
Usage
The changes made in the database by the current transaction are discarded by the ROLL-
BACK WORK statement. It terminates the transaction.
Example
ROLLBACK WORK;
SAVE
SAVE [NO CHECK] [ { IGNORE_ERRORS | LOG_ERRORS | FAIL_ERRORS } ]
[ { AUTOSAVE | AUTOSAVEONLY } ] sql_statement
Supported in
This command requires Solid SmartFlow.
Usage
The statements of a transaction that need to be propagated to the master database must be
explicitly saved to the transaction queue of the replica database. Adding a SAVE statement
before the transaction statements does this.
Only master users are allowed to save statements. This is because when the saved state-
ments are executed on the master, they must be executed using the appropriate access rights
of a user on the master. The saved statements are executed in the master database using the
access rights of the master user that was active in the replica when the statement was saved.
If a user in the replica was mapped to a user in the master, the SAVE statement uses the
access rights of the user in the master.
The default behavior for error handling with transaction propagation is that a failed transac-
tion halts execution of the message; this aborts the currently-executing transaction and pre-
vents execution of any subsequent transactions that are in that same message. However, you
may choose a different error-handling behavior.
The options for the SAVE command are explained below:
NO CHECK: This option means that the statement is not prepared in the replica. This option
is useful if the command would not make sense on the replica. For example, if the SQL com-
mand calls a stored procedure that exists on the master but not on the replica, then you don’t
want the replica to try to prepare the statement. If you use this option, then the statement
cannot have parameter markers.
IGNORE_ERRORS: This option means that if a statement fails while executing on the mas-
ter, then the failed statement is ignored and the transaction is aborted. However, only the
transaction, not the entire message, is aborted. The master continues executing the message,
resuming with the first transaction after the failed one.
LOG_ERRORS: This option means that if a statement fails while executing on the master,
then the failed statement is ignored and the current transaction is aborted. The failed
transaction's statements are saved in the SYS_SYNC_RECEIVED_STMTS system table for
later execution or investigation. The failed transactions can be examined using the
SYNC_FAILED_MESSAGES system view, and they can be re-executed from there using the
MESSAGE <msg_id> FROM REPLICA <replica_name> RESTART statement.
Note that, as with the IGNORE_ERRORS option, aborting the transaction does not abort the
entire message. The master continues executing the message, resuming with the first
transaction after the failed one.
FAIL_ERRORS: This option means that if a statement fails, then the master stops executing
the message. This is the default behavior.
AUTOSAVE: This option means that the statement is executed in the master and
automatically saved for further propagation if the master is also a replica to some other
master (i.e., is a middle-tier node).
AUTOSAVEONLY: This option means that the statement is NOT executed in the master but
instead is automatically saved for further propagation if the master is also a replica to some
other master (i.e., is a middle-tier node).
Usage in Master
This statement cannot be used in the master.
Usage in Replica
Use this statement in the replica to save statements for propagation to the master.
Example
SAVE INSERT INTO mytbl (col1, col2) VALUES ('calvin', 'hobbes')
SAVE CALL SP_UPDATE_MYTBL('calvin_1', 'hobbes')
SAVE CALL SP_DELETE_MYTBL('calvin')
SAVE NO CHECK IGNORE_ERRORS insert into mytab values(1,2)
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
SAVE PROPERTY
SAVE PROPERTY property_name VALUE 'value_string'
SAVE PROPERTY property_name VALUE NONE
SAVE DEFAULT PROPERTY property_name VALUE 'value_string'
SAVE DEFAULT PROPERTY property_name VALUE NONE
SAVE DEFAULT PROPAGATE PROPERTY WHERE name {=|<|<=|>|>=|<>}'value'
SAVE DEFAULT PROPAGATE PROPERTY NONE
Supported in
This command requires Solid SmartFlow.
Usage
It is possible to assign properties to the current active transaction with the following com-
mand:
SAVE PROPERTY property_name VALUE 'value_string'
The statements of the transaction in the master database can access these properties by
calling the GET_PARAM() function. In the replica database, the properties are available
only to the command
MESSAGE APPEND unique_message_name PROPAGATE TRANSACTIONS
WHERE property > 'value_string'
When the transaction is executed in the master database, the saved properties are placed on
the parameter bulletin board of the transaction. If the saved property already exists, the new
value overwrites the previous one.
It is also possible to define default properties that are saved to all transactions of the current
connection. The statement for this is:
SAVE DEFAULT PROPERTY property_name VALUE 'value_string'
A SAVE DEFAULT PROPAGATE PROPERTY WHERE statement can be used to save
default transaction propagation criteria. This can be used for example to set the propagation
priority of transactions created in the current connection.
SAVE DEFAULT PROPAGATE PROPERTY WHERE property > 'value' can be used at the
connection level to apply a default WHERE clause to all MESSAGE
unique_message_name APPEND PROPAGATE TRANSACTIONS statements. If a WHERE
clause is also entered in the PROPAGATE statement itself, it overrides the clause set with
SAVE DEFAULT PROPAGATE PROPERTY.
A property or a default property can be removed by re-saving the property with value string
NONE.
Usage in Master
This statement cannot be used in the master database.
Usage in Replica
You can use these statements in the replica to set properties for a transaction that is saved for
propagation to the master. The property’s value can be read in the master database.
Example
SAVE PROPERTY conflict_rule VALUE 'override'
SAVE DEFAULT PROPERTY userid VALUE 'scott'
SAVE DEFAULT PROPERTY userid VALUE NONE
SAVE DEFAULT PROPAGATE PROPERTY WHERE priority > '2'
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
Result set
SAVE PROPERTY does not return a result set.
SELECT
SELECT [ALL | DISTINCT] select-list
FROM table_reference_list
[WHERE search_condition]
[GROUP BY column_name [, column_name]... ]
[HAVING search_condition]
[[UNION | INTERSECT | EXCEPT] [ALL] select_statement]...
[ORDER BY {unsigned integer | column_name}
[ASC | DESC]]
Usage
The SELECT statement allows you to select zero or more rows from one or more tables.
Example
SELECT ID FROM TEST;
SELECT DISTINCT ID, C FROM TEST WHERE ID = 5;
SELECT DISTINCT ID FROM TEST ORDER BY ID ASC;
SELECT NAME, ADDRESS FROM CUSTOMERS
UNION
SELECT NAME, DEP FROM PERSONNEL;
SET
SET CATALOG catalog_name
SET DURABILITY { RELAXED | STRICT }
SET IDLE TIMEOUT { seconds | DEFAULT }
SET LOCK TIMEOUT {timeout_in_seconds | timeout_in_milliseconds MS}
SET OPTIMISTIC LOCK TIMEOUT seconds
SET SAFENESS {1SAFE | 2SAFE | DEFAULT}
SET SCHEMA {'schema_name' | USER | 'user_name'}
SET STATEMENT MAXTIME minutes
SET ISOLATION LEVEL {
READ COMMITTED |
REPEATABLE READ |
SERIALIZABLE }
SET {READ ONLY | READ WRITE}
Usage
These commands apply to the user session (connection) in which they are executed. They do
not affect other user sessions.
These SET statements may be issued at any time; however, they do not all take effect imme-
diately. The following statements take effect immediately:
■ SET CATALOG
■ SET IDLE TIMEOUT
■ SET SCHEMA
Even tables that use optimistic concurrency control use locks for a few operations, such as
SELECT FOR UPDATE. You can use SET OPTIMISTIC LOCK TIMEOUT to control the
lock timeout for those operations. This command applies only to the current connection. For
more information, see “Setting Lock Timeout for Optimistic Tables” on page 4-31.
SET SCHEMA sets the schema name context when implicitly qualifying a database object
name in a session. To remove the schema context, use the SET SCHEMA USER com-
mand. For details, read “SET SCHEMA” on page B-146.
SET STATEMENT MAXTIME sets connection-specific maximum execution time in min-
utes. The setting is effective until a new maximum time is set. Zero time means no maxi-
mum time, which is also the default.
SET {READ ONLY | READ WRITE} allows you to specify whether the connection is
allowed only to read, or is allowed to both read and write.
SET ISOLATION LEVEL allows you to specify the isolation level. For more information
about isolation levels, see “TRANSACTION ISOLATION Levels” on page 4-27.
Examples
SET CATALOG myCatalog;
SET DURABILITY STRICT;
SET IDLE TIMEOUT 30;
SET ISOLATION LEVEL REPEATABLE READ;
SET OPTIMISTIC LOCK TIMEOUT 30;
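The remaining settings from the syntax list above follow the same pattern; for example
(values are illustrative):
SET STATEMENT MAXTIME 10;
SET READ ONLY;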
See Also
There are many commands that begin with the keyword SET. We’ve categorized many of
those separately from the SET commands. Those other commands include:
SET SCHEMA
SET SQL
SET TRANSACTION
and several SET SYNC commands.
There are other "SET xxx" commands, also.
SET SCHEMA
SET SCHEMA {’schema_name’ | USER | 'user_name'}
Usage
Solid Database Engine supports SQL89 style schemas. Schemas are used to help uniquely
identify entities (tables, views, etc.) within a database. By using schemas, each user may cre-
ate entities without worrying about whether her names overlap the names chosen by other
users/schemas.
To uniquely identify an entity (such as a table), you "qualify" it by specifying the catalog
name and schema name. Below is an example of a fully-qualified table name:
FinanceCatalog.AccountsReceivableSchema.CustomersTable
In keeping with the ANSI SQL2 standard, the user_name or schema_name may be enclosed
in single quotes.
The default schema can be changed with the SET SCHEMA statement. The schema can be
changed to the current user name by using the SET SCHEMA USER statement. Alterna-
tively, the schema can be set to ‘user_name’ which must be a valid user name in the data-
base.
Example
SET SCHEMA 'CUSTOMERS';
See Also
Catalogs are also used to qualify (uniquely identify) the names of tables and other database
entities, so you may also wish to read about the SET CATALOG command.
SET SQL
SET SQL INFO {ON | OFF} [FILE {file_name | "file_name" | 'file_name'}]
[LEVEL info_level]
SET SQL SORTARRAYSIZE {array-size | DEFAULT}
SET SQL JOINPATHSPAN {path-span | DEFAULT}
SET SQL CONVERTORSTOUNIONS
{YES [COUNT value] | NO | DEFAULT}
Usage
All the settings are read per user session (unlike the settings in the solid.ini file, which
are automatically read each time Solid Database Engine is started).
SET SQL INFO The SET SQL INFO command allows you to turn on trace information
that may allow you to debug problems or tune queries. For SQL INFO, the default file is a
global soltrace.out shared by all users. If the file name is given, all future INFO ON
settings will use that file unless a new file is set. It is recommended that the file name is
given in single quotes, because otherwise the file name is converted to uppercase. The info
output is appended to the file, and the file is never truncated; once the info file is no longer
needed, the user must delete it manually. If the file open fails, the info output is silently
discarded.
The default SQL INFO LEVEL is 4. A good way to generate useful info output is to set info
on with a new file name and then execute the SQL statement using EXPLAIN PLAN FOR
syntax. This method gives all necessary estimator information but does not generate output
from the fetches (which may generate a huge output file).
SET SQL SORTARRAYSIZE This command sets the size of the array that SQL uses when order-
ing the result set of a query. The units are "rows" -- e.g. if you specify a value of 1000, then the server
will create an array big enough to sort 1000 rows.
SET SQL JOINPATHSPAN This command is obsolete. The syntax is accepted, but the
command has no effect.
SET SQL CONVERTORSTOUNIONS allows you to convert a query that contains "OR"
operations into an equivalent query that uses "UNION" operations. The following opera-
tions are logically equivalent:
select ... where x = 1 OR y = 1;
select ... where x = 1 UNION select... where y = 1;
By setting CONVERTORSTOUNIONS, you tell the optimizer that it may use equivalent
UNION operations instead of OR operations if the UNIONs seem more efficient based on
the volume and distribution of data. The COUNT parameter in SQL CONVERTOR-
STOUNIONS ("Convert ORs to UNIONs") specifies the maximum number of OR opera-
tions that may be converted to UNION operations. Note that you can also specify
CONVERTORSTOUNIONS by using the solid.ini configuration parameter named Conver-
tORsToUNIONs (for details, see the description of this parameter in the Solid Administra-
tor Guide). The default value is 100, which should be enough in almost all cases.
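A minimal example of this setting, limiting the conversion to at most 10 OR operations (the
COUNT value is illustrative):
SET SQL CONVERTORSTOUNIONS YES COUNT 10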
Example
SET SQL INFO ON FILE 'sqlinfo.txt' LEVEL 5
SET SYNC MASTER / SET SYNC REPLICA
SET SYNC MASTER {YES | NO}
SET SYNC REPLICA {YES | NO}
Supported in
This command requires Solid SmartFlow.
Usage
When a database catalog is created and configured for synchronization use, you must use
this command to specify whether the database is a master, replica, or both. Only a DBA or a
user with SYS_SYNC_ADMIN_ROLE can set the database role.
The database catalog is a master database if there are replicas in the domain that refresh
from publications from this database and/or propagate transactions to it. The database cata-
log is a replica catalog if it can refresh from publications that are in a master database. In
multi-tier synchronization, intermediate level databases serve a dual role, as both master and
replica databases.
Note that to use this command requires that you have already set the node name for the mas-
ter or replica using the SET SYNC NODE command. For details, read “SET SYNC NODE”
on page B-155.
When you set the database for a dual role, you can use the statement once or twice. For
example:
SET SYNC MASTER YES;
SET SYNC REPLICA YES;
Note that when you set the database for dual roles, SET SYNC REPLICA YES does not
override SET SYNC MASTER YES. Only the following explicit statement can override the
status of the master database:
SET SYNC MASTER NO;
Once overridden, the current database is set as replica only.
Examples
-- configure as replica
SET SYNC REPLICA YES;
-- configure as master
SET SYNC MASTER YES;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
SET SYNC CONNECT
SET SYNC CONNECT 'connect_string' TO {MASTER master_name | REPLICA replica_name}
Supported in
This command requires Solid SmartFlow.
Usage
This statement changes the network name associated with the database name. Use this state-
ment in a replica (or master) whenever you have changed network names in databases that a
replica (or master) connects to. Network names are defined in the Listen parameter of
the solid.ini configuration file.
The second connect string in SET SYNC CONNECT ... TO MASTER facilitates transpar-
ent failover of a Replica server to a standby Master server, should the Primary Master server
fail. The order of the connect strings is not significant. The connection is automatically
maintained to the currently active Primary server.
Usage in Master
Use this statement in a master to change the replica’s network name.
Usage in Replica
Use this statement in a replica to change the master’s network name.
Example
SET SYNC CONNECT 'tcp server.company.com 1313' TO MASTER hq_master;
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
SET SYNC MODE
SET SYNC MODE {MAINTENANCE | NORMAL}
Supported in
This command requires Solid SmartFlow.
Usage
This command sets the current catalog’s sync mode to either Maintenance mode or Normal
mode.
This command applies only to catalogs that are involved in synchronization (i.e. are "mas-
ter" catalogs or "replica" catalogs, or are both master and replica in a hierarchy with 3 or
more levels).
This command applies only to the current catalog. If you want to set more than one cata-
log’s sync mode to Maintenance, then you will have to switch to each catalog (by using the
SET CATALOG command) and then issue the SET SYNC MODE MAINTENANCE com-
mand for that catalog.
While a catalog’s sync mode is Maintenance, the following rules apply:
■ The catalog will not send or receive synchronization messages and therefore will not
engage in synchronization activities (e.g. refresh or respond to a refresh request).
■ DDL commands (e.g. ALTER TABLE) will be allowed on tables that are referenced by
publications.
■ When the sync mode changes, the server will send the system event
SYNC_MAINTENANCEMODE_BEGIN or SYNC_MAINTENANCEMODE_END.
■ If the master catalog’s publications are altered (dropped and recreated) by using the
REPLACE option, then the publication’s metadata (internal publication definition data)
is refreshed automatically to each replica the next time that replica refreshes from the
changed publication. (This is true whether or not the database was in Maintenance sync
mode when the publication was REPLACEd.)
■ Each catalog has a read-only parameter named SYNC_MODE in the parameter bulletin
board so that applications can check the catalog's mode. The value of that parameter is
'MAINTENANCE' if the catalog is in maintenance sync mode, or 'NORMAL' if the
catalog is not in maintenance sync mode. The value is NULL if the catalog is not a
master or a replica.
■ The user must have DBA or synchronization administration privileges to set sync mode
to Maintenance or Normal.
■ A user may have more than one catalog in Maintenance sync mode at a time.
■ If the session that set the mode on disconnects, the mode is set off.
■ The normal synchronization history operations are disabled. For example, when a delete
or update operation is done on a table that has synchronization history on, the synchro-
nization history tables will not store the "original" rows (i.e. the rows before they were
deleted or updated). Note, however, that deletes and updates apply to the synchroniza-
tion history table; e.g.
DELETE FROM T WHERE c = 5
will delete rows from the history table as well as from the base table. The table below
shows how various operations (INSERT, DELETE, etc.) apply to the synchronization
history tables in master and replica when sync mode is set to Maintenance.
Table 6–1
Operation: INSERT
  Master: Rows are inserted to the base table.
  Replica: Rows are inserted to the base table and marked as official.
Operation: UPDATE
  Master: Both base table and history are updated.
  Replica: Both base table and history are updated. Tentative/official status is not
  updated, so tentative rows remain tentative and official rows remain official.
Operation: DELETE
  Master: Rows are deleted from the base table and from history.
  Replica: Rows are deleted from the base table and from history.
Operation: Add, alter, drop column
  Master: Same operation is done to history also.
  Replica: Same operation is done to history also.
Operation: Altering table mode
  Master: History mode is not altered.
  Replica: History mode is not altered.
Operation: Create index
  Master: Same index is created to history also.
  Replica: Same index is created to history also.
Operation: Create triggers
  Master: Triggers are not created on history.
  Replica: Triggers are not created on history.
Example
SET SYNC MODE MAINTENANCE
SET SYNC MODE NORMAL
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
SET SYNC NODE
SET SYNC NODE {unique_node_name | "node_name" | NONE}
Supported in
This command requires Solid SmartFlow.
Usage
Assigning the node name is part of the registration process of a replica database. Each cata-
log of a Solid Database Engine environment must have a node name that is unique within the
domain. One catalog can have only one node name. Two catalogs cannot have the same node
name.
You can use the SET SYNC NODE unique_node_name option to rename a node if:
■ the node is a replica database and it is not registered to a master,
and/or
■ the node is a master database and there are no replicas registered in the master
database.
Following are examples for renaming a node name:
SET SYNC NODE A; -- Now the node name is A.
SET SYNC NODE B; -- Now the node name is B.
COMMIT WORK;
SET SYNC NODE C; -- Now the node name is C.
ROLLBACK WORK; -- Now the node name is rolled back to B.
SET SYNC NODE NONE; -- Now the node has no name.
COMMIT WORK;
The unique_node_name must conform to the rules that are used for naming other objects
(such as tables) in the database. Do not put single quotes around the node name.
If you specify NONE, then this command will remove the current node name.
If you want to use a reserved word, such as "NONE", as a node name, then you must put the
keyword in double quote marks to ensure that it is treated as a delimited identifier. For exam-
ple:
SET SYNC NODE "NONE"; -- Now the node name is "NONE".
You can verify the node name assignment with the following statement:
Note
When using the SET SYNC NODE NONE option, be sure the catalog associated with the
node name is not defined as a master, replica, or both. To remove the node name, the catalog
must be defined as SET SYNC MASTER NO and/or SET SYNC REPLICA NO. If you do
try to set the node name to NONE on a master and/or replica catalog, Solid Database Engine
returns error message 25082.
Usage in Master
Use this statement in the master to set or remove the node name from the current catalog.
Usage in Replica
Use this statement in the replica to set or remove the node name from the current catalog.
Example
SET SYNC NODE SalesmanJones;
Return Values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
SET SYNC PARAMETER
SET SYNC PARAMETER parameter_name {'value_string' | NONE}
Supported in
This command requires Solid SmartFlow.
Usage
This statement defines persistent catalog-level parameters that are visible via the parameter
bulletin board to all transactions that are executed in that catalog. Each catalog has a differ-
ent set of parameters.
If the parameter already exists, the new value overwrites the previous one. An existing
parameter can be deleted by setting its value to NONE. All parameters are stored in the
SYS_BULLETIN_BOARD system table.
These parameters are not propagated to the master.
In addition to application-specific parameters, you can also store in the system table a
number of system parameters that configure the synchronization functionality. The available
system parameters are listed at the end of the SQL reference.
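Because the parameters reside in an ordinary system table, you can inspect the current catalog-level settings with a plain query, for example:

```sql
-- List all parameters currently stored on the bulletin board
-- of the current catalog.
SELECT * FROM SYS_BULLETIN_BOARD;
```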
Usage in Master
Use the SET SYNC PARAMETER in the master for setting database parameters.
Usage in Replica
Use the SET SYNC PARAMETER in replicas for setting database parameters.
Example
SET SYNC PARAMETER db_type 'REPLICA'
SET SYNC PARAMETER db_type NONE
Return Values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
See Also
GET_PARAM
PUT_PARAM
Syntax in replica:
SAVE SET SYNC PROPERTY <propertyname> = { 'value' | NONE }
Supported in
This command requires Solid SmartFlow.
Usage
This command allows you to specify a property name and value for a replica. Replicas that
have properties may be grouped, and a group may be specified when using the START
AFTER COMMIT statement. For example, you might have some replicas that are related to
the bicycle industry and others that are related to the surfboard industry, and you may want
to update each of those groups of replicas separately. You can use Property Names to group
these replicas. All members of a group have the same property and have the same value for
that property.
For more information, see the section titled "Replica Property Names" in the Solid Smart-
Flow Data Synchronization Guide.
Examples
Master:
SET SYNC PROPERTY color = 'red' FOR REPLICA replica1;
SET SYNC PROPERTY color = NONE FOR REPLICA replica1;
Replica:
SAVE SET SYNC PROPERTY color = 'red';
SAVE SET SYNC PROPERTY color = NONE;
Supported in
This command requires Solid SmartFlow.
Usage
This statement is used to define the username and password for the registration process
when the replica database is being registered in the master database. To use this command,
you are required to have SYS_SYNC_ADMIN_ROLE access.
Note
The SET SYNC USER statement is used for replica registration only. Aside from registra-
tion, all other synchronization operations require a valid master user ID in a replica data-
base. If you want to designate a different master user for a replica, you must map the replica
ID on the replica database with the master ID on the master database. For details, read the
section titled "Mapping Replica User ID With Master User ID" in the Solid SmartFlow Data
Synchronization Guide.
You define the registration username in the master database. The name you specify must
have sufficient rights to execute the replica registration tasks. You can provide registration
rights for a master user in the master database by designating the user with the
SYS_SYNC_REGISTER_ROLE or the SYS_SYNC_ADMIN_ROLE using the GRANT
rolename TO user statement.
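For example, the registration user could be prepared in the master database as follows (the user name homer and password marge are hypothetical, echoing the example below):

```sql
-- In the master database: create the registration user and grant
-- it registration rights. User name and password are hypothetical.
CREATE USER homer IDENTIFIED BY marge;
GRANT SYS_SYNC_REGISTER_ROLE TO homer;
```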
After the registration has been successfully completed, you must reset the sync user to
NONE; otherwise, if a master user saves statements, propagates messages, or refreshes from
or registers to publications, the following error message is returned:
User definition not allowed for this operation.
Usage in Master
This statement is not available in the master database.
Usage in Replica
Use this statement in the replica to set the user name.
Example
SET SYNC USER homer IDENTIFIED BY marge;
SET SYNC USER NONE;
SET TRANSACTION
SET TRANSACTION DURABILITY { RELAXED | STRICT }
SET TRANSACTION ISOLATION LEVEL {
READ COMMITTED |
REPEATABLE READ |
SERIALIZABLE }
SET TRANSACTION SAFENESS {1SAFE | 2SAFE | DEFAULT}
SET TRANSACTION { READ ONLY | READ WRITE }
Usage
The settings apply only to the current transaction.
The command SET TRANSACTION ISOLATION is based on ANSI SQL. It sets the trans-
action isolation level (READ COMMITTED, REPEATABLE READ, or SERIALIZABLE)
and the read level (READ ONLY or READ WRITE). For more information about isolation
levels, see “TRANSACTION ISOLATION Levels” on page 4-27.
The command SET TRANSACTION { READ ONLY | READ WRITE } is based on ANSI
SQL. It allows the user to specify whether the transaction is allowed to make any changes to
data.
The command SET TRANSACTION DURABILITY { RELAXED | STRICT } controls
whether the server uses "strict" or "relaxed" durability for transaction logging. This com-
mand is a Solid extension to SQL; it is not part of the ANSI standard.
Your choice will not affect any other user, any other open session that you yourself currently
have, or any future session that you may have. Each user session may set its own durability
level, based on how important it is for the session not to lose any data.
Note that if the new transaction durability setting is STRICT, then any previous transactions
that have not yet been written to disk will be written at the time that the current transaction is
committed. (Note that those transactions are not written to disk as soon as the transaction
durability level is changed to STRICT; the writes wait until the current transaction is com-
mitted.)
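As a sketch of this behavior (mytable is a hypothetical table):

```sql
SET TRANSACTION DURABILITY RELAXED;
INSERT INTO mytable VALUES (1);
COMMIT WORK; -- may not be written to the transaction log immediately

SET TRANSACTION DURABILITY STRICT;
INSERT INTO mytable VALUES (2);
COMMIT WORK; -- this commit is written to the log, and any earlier
             -- unwritten transactions are written at the same time
```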
The server uses transaction logging to ensure that it can recover data in the event of an
abnormal shutdown. "Strict" durability means that as soon as a transaction is committed, the
server writes the information to the transaction log file. "Relaxed" durability means that the
server may not write the information as soon as the transaction is committed; instead, the
server may wait, for example, until it is less busy, or until it can write multiple transactions
in a single write operation. If you use relaxed durability, then if the server shuts down abnor-
mally, you may lose a few of the most recent transactions. For more information about dura-
bility, see the In-Memory Database Guide.
If the SET TRANSACTION DURABILITY statement matches the level of durability
already set for the session, the statement has no effect, and status "SUCCESS" is returned.
■ The transaction-level commands must be executed at the start of a transaction, before
any other statements. (This rule does not apply to the session-level statements, however.)
If this rule is violated, an error is returned. The session-level commands may be
executed at any point in a transaction.
■ The transaction-level commands take precedence over the session-level commands.
However, the transaction-level commands apply only to the current transaction. After
the current transaction is finished, the settings will return to the value set by the most
recent previous SET command (if any). For example:
COMMIT WORK; -- Finish previous transaction;
SET ISOLATION LEVEL SERIALIZABLE;
COMMIT WORK;
-- Isolation level is now SERIALIZABLE
...
COMMIT WORK;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Isolation level is now REPEATABLE READ because transaction-level settings
-- take precedence over session-level settings.
COMMIT WORK;
-- Isolation level is now back to SERIALIZABLE, since the transaction-level
-- settings applied only to that transaction.
The complete precedence hierarchy for isolation level and read level settings is below. Items
closer to the top of the list have higher precedence.
1. SET TRANSACTION... (i.e. transaction-level settings)
2. SET ... (session-level settings)
3. The server-level settings specified by solid.ini configuration parameters such as
IsolationLevel or DurabilityLevel. (There is no solid.ini parameter for the READ ONLY
/ READ WRITE setting.) You may change these settings by editing the solid.ini file, or
by issuing a command like the following:
ADMIN COMMAND 'parameter Logging.DurabilityLevel = 2';
Note that if you change the solid.ini parameter, the new setting will not take effect until the
next time that the server starts.
4. The server’s default (REPEATABLE READ, STRICT, or READ WRITE).
■ There is no "DEFAULT" option to set the value to whatever value the DurabilityLevel
parameter has specified. Also, there is no way to read the durability level that applies to
the current session. Therefore, once you have explicitly set the durability by executing
the SET DURABILITY statement, you cannot restore the "default" durability level
specified by the DurabilityLevel parameter. You can, of course, switch from RELAXED
to STRICT durability and back whenever you wish, but you cannot "undo" your change
and restore the default level without actually knowing what that default level was.
Caution
The behavior of the SET TRANSACTION command changed in Solid Database Engine
Version 4.0. In previous Solid product versions, the SET TRANSACTION command applied
to all subsequent transactions, rather than to the current transaction. If you want to keep the
old behavior, you may use the SET command (see “SET” on page B-143) or you may set the
solid.ini configuration parameter SetTransCompatibility3 (see the Solid Administrator Guide
for a description of this parameter).
The SET TRANSACTION command is based on ANSI SQL. However, the Solid imple-
mentation has some differences from the ANSI definition. The ANSI definition allows the
two ANSI-defined "clauses" (isolation level and read level) to be combined, e.g.:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ WRITE;
Solid Database Engine does not support this syntax. Solid does, however, support multiple
SET statements in a single transaction, e.g.:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION READ WRITE;
Examples
SET TRANSACTION DURABILITY RELAXED;
SET TRANSACTION ISOLATION REPEATABLE READ;
SET TRANSACTION READ WRITE;
See Also
“SET” on page B-143
“TRANSACTION ISOLATION Levels” on page 4-27
“Logging and Transaction Durability” on page 5-1 of the Solid Administrator Guide
Usage
The START AFTER COMMIT statement specifies an SQL statement (such as a call to a
stored procedure) that will be executed when the current transaction commits. (If the trans-
action is rolled back, then the specified SQL statement will not be executed.)
The START AFTER COMMIT statement returns a result set with one INTEGER column.
This integer is a unique "job" id and can be used to query the status of a statement that failed
to start due to an invalid SQL statement, insufficient access rights, the replica not being available, etc.
If you use the UNIQUE keyword before the <stmt>, then the statement will be executed
only if there isn't already an identical statement executing or "pending". Statements are
compared using a simple string comparison. For example, 'call foo(1)' is different from 'call
foo(2)'. The server also takes into account whether the statement already being executed (or
pending for execution) is on the same replica or a different replica; only identical statements
on the same replica are discarded.
Important
Remember that when duplicate statements are discarded by using the UNIQUE keyword, the
most recent statements are the ones thrown out, and the oldest one is the one that keeps run-
ning. It is quite possible to create a situation where you do multiple updates, for example,
and you trigger multiple START AFTER COMMIT operations, but only the oldest one exe-
cutes and thus the newest updated data may not get sent to the replicas immediately.
NONUNIQUE means that duplicate statements can be executed simultaneously in the back-
ground.
FOR EACH REPLICA specifies that the statement is executed for each replica that fulfills
the property conditions given in the search_condition part of the WHERE clause. Before
executing the statement, a connection to the replica is established. If a procedure call is
started, then the procedure can get the "current" replica name using the keyword
"DEFAULT".
If RETRY is specified, then the operation is re-executed after N seconds (defined by sec-
onds in the retry_spec) if the replica is not reached on the first attempt. The count specifies
how many times a retry is attempted.
See Chapter 3, "Stored Procedures, Events, Triggers, and Sequences," for a more detailed
description of the START AFTER COMMIT command.
Transactions
A statement started in the background using START AFTER COMMIT is executed in a
separate transaction. That transaction is executed in autocommit mode, i.e. it cannot be
rolled back once it has started.
Durability
Background statements are not durable, i.e. the execution of statements started with
START AFTER COMMIT is not guaranteed.
Rollback
Background statements cannot be rolled back after they have been started. So after a state-
ment that has been started with START AFTER COMMIT has executed successfully, there
is no way to roll it back.
The START AFTER COMMIT statement itself can of course be rolled back, and this will
prevent the specified statement from executing. For example:
START AFTER COMMIT UNIQUE INSERT INTO MyTable VALUES (1);
ROLLBACK;
In the example above, the transaction rolls back and thus "INSERT INTO MyTable VAL-
UES (1)" will not be executed.
Order Of Execution
Background statements are executed asynchronously and they don't have any guaranteed
order even inside a transaction.
Examples
-- Start local procedure in the background.
-- Start the call if "CALL myproc" is not running in the background already.
START AFTER COMMIT UNIQUE call myproc;
-- Start procedure in the background using replicas which have property "color" = 'blue'.
START AFTER COMMIT FOR EACH REPLICA WHERE color='blue' UNIQUE CALL myproc;
The following statements are all considered different and therefore each is executed,
despite the presence of the keyword UNIQUE. (Note that "name" is a unique property of
each replica.)
START AFTER COMMIT UNIQUE call myproc;
START AFTER COMMIT FOR EACH REPLICA WHERE name='R1' UNIQUE call myproc;
START AFTER COMMIT FOR EACH REPLICA WHERE name='R2' UNIQUE call myproc;
START AFTER COMMIT FOR EACH REPLICA WHERE name='R3' UNIQUE call myproc;
But if the following statement is executed in the same transaction as the previous ones and
the condition "color='blue'" matches some of the replicas R1, R2 or R3, then the call is not
executed for those replicas again.
START AFTER COMMIT FOR EACH REPLICA WHERE color='blue' UNIQUE call myproc;
For additional examples, see Chapter 3, "Stored Procedures, Events, Triggers, and
Sequences."
TRUNCATE TABLE
TRUNCATE TABLE tablename
Usage
This statement is, from the caller's point of view, semantically equivalent to "DELETE
FROM tablename". However, it is much more efficient thanks to relaxed isolation. During
the execution of this statement, the defined isolation level is not maintained in concurrent
transactions. The effect of removing the rows will be immediately seen in all concurrent
transactions. Therefore this statement is recommended for maintenance purposes only.
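For example (mytable is a hypothetical table):

```sql
-- Semantically like DELETE FROM mytable, but faster because the
-- defined isolation level is not maintained during execution.
TRUNCATE TABLE mytable;
```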
UNLOCK TABLE
UNLOCK TABLE { ALL | tablename [,tablename]}
The keyword ALL releases all table-level locks on all tables.
Tablename: The name of the table to unlock. You can also specify the catalog and schema of
the table.
Usage
This command allows you to unlock tables that you manually locked (using the LOCK
TABLE command) with the LONG option. The LONG option allows you to hold a lock past
the end of the transaction in which the lock was placed. Since there is no natural endpoint
for the lock (other than the end of the transaction), you must explicitly release a LONG lock
by using the UNLOCK command.
The UNLOCK TABLE command does not apply to the server’s automatic locks, or to man-
ual locks that were not locked with the LONG option. If a lock is automatic, or if it is man-
ual and not LONG, then the server will automatically release the lock at the end of the
transaction in which the lock was placed. Thus there is no need to manually unlock those
locks.
When the UNLOCK TABLE command is used, it does not take effect immediately; instead,
the locks are released when the current transaction is committed.
Caution
If the current transaction (the one in which the UNLOCK TABLE command was executed)
is not committed (e.g. if it is rolled back), then the tables are not unlocked; they will remain
locked until another UNLOCK TABLE command is successfully executed and committed.
-- Get an exclusive lock that will persist past the end of the current
-- transaction. If you can’t get an exclusive lock immediately, then
-- wait up to 60 seconds to get it.
LOCK TABLE emp, dept IN LONG EXCLUSIVE MODE WAIT 60;
-- Make the schema changes (or do whatever you needed the exclusive
-- lock for).
CALL DO_SCHEMA_CHANGES_1;
COMMIT WORK;
CALL DO_SCHEMA_CHANGES_2;
UNLOCK TABLE ALL; -- at the end of this transaction, release locks.
...
COMMIT WORK;
...
UNLOCK TABLE "ALL"; -- Unlock the table named "ALL".
Return values
For details on each error code, see the appendix titled Error Codes in the Solid Administra-
tor Guide.
See Also
LOCK TABLE
SET SYNC MODE { MAINTENANCE | NORMAL }
UNREGISTER EVENT
The UNREGISTER EVENT command is allowed only inside stored procedures. See the
CREATE PROCEDURE statement and the CREATE EVENT statement for more details.
UPDATE (Positioned)
UPDATE table_name
SET [table_name.]column_identifier = {expression | NULL}
[, [table_name.]column_identifier = {expression | NULL}]...
WHERE CURRENT OF cursor_name
Usage
The positioned UPDATE statement updates the current row of the cursor. The name of the
cursor is defined using the ODBC API function SQLSetCursorName.
Example
UPDATE TEST SET C = 0.33
WHERE CURRENT OF MYCURSOR
UPDATE (Searched)
UPDATE table-name
SET [table_name.]column_identifier = {expression | NULL}
[, [table_name.]column_identifier = {expression | NULL}]...
[WHERE search_condition]
Usage
The UPDATE statement is used to modify the values of one or more columns in one or more
rows, according to the search condition.
Example
UPDATE TEST SET C = 0.44 WHERE ID = 5
WAIT EVENT
The WAIT EVENT command is allowed only inside stored procedures. See the CREATE
PROCEDURE statement for more details.
Table_reference
table_reference_list ::= table_reference [ , table_reference … ]
table_reference ::= table_name [[AS] correlation_name] |
derived_table [[AS] correlation_name
[( derived_column_list )]] | joined_table
table_name ::= table_identifier | schema_name.table_identifier
derived_table ::= subquery
derived_column_list ::= column_name_list
joined_table ::= cross_join | qualified_join | ( joined_table)
cross_join ::= table_reference CROSS JOIN table_reference
qualified_join ::= table_reference [NATURAL] [join_type] JOIN
table_reference [join_specification]
join_type ::= INNER | outer_join_type [OUTER] | UNION
outer_join_type ::= LEFT | RIGHT | FULL
join_specification ::= join_condition | named_columns_join
join_condition ::= ON search_condition
named_columns_join ::= USING (column_name_list)
column_name_list ::= column_identifier [ { , column_identifier} …]
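As an illustration of the grammar, the following query (reusing the emp and dept tables from earlier examples; the column names are assumptions for illustration only) combines correlation names with a qualified outer join:

```sql
SELECT e.name, d.name
FROM emp AS e LEFT OUTER JOIN dept AS d ON e.dept_id = d.id;
```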
Query_specification
query_specification ::= SELECT [DISTINCT | ALL] select_list
table_expression
select_list ::= * | select_sublist
[ {, select_sublist} ... ]
select_sublist ::= derived_column |
[table_name | table_identifier].*
derived_column ::= expression [ [AS] column_alias ]
Search_condition
search_condition ::= search_item | search_item { AND | OR }
search_item
search_item ::= [NOT] { search_test |
(search_condition) }
search_test ::= comparison_test | between_test |
like_test | null_test | set_test |
quantified_test | existence_test
comparison_test ::= expression { = | <> | < | <= | > | >= }
{ expression | subquery }
Note: Spaces on each side of the operator are optional.
between_test ::= column_identifier [NOT] BETWEEN
expression AND expression
like_test ::= column_identifier [NOT] LIKE value [ESCAPE value]
null_test ::= column_identifier IS [NOT] NULL
set_test ::= expression [NOT] IN ( { value
[,value]... | subquery } )
quantified_test ::= expression { = | <> | < | <= | > | >= }
[ALL | ANY | SOME] subquery
existence_test ::= EXISTS subquery
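For example, a single WHERE clause (using a hypothetical products table) can combine several of the search tests defined above with AND:

```sql
SELECT * FROM products
WHERE price BETWEEN 10 AND 100        -- between_test
  AND name LIKE 'S%'                  -- like_test
  AND category IN ('bicycle', 'surf') -- set_test
  AND discontinued IS NULL;           -- null_test
```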
Check_condition
Expression
expression ::= expression_item | expression_item
{ + | - | * | / } expression_item
Note: Spaces on each side of the operator are optional.
expression_item ::= [ + | - ] { value | column_identifier | function |
case_expression | cast_expression | ( expression ) }
value ::= literal | USER | variable
function ::= set_function | null_function | string_function |
numeric_function |
datetime_function | system_function |
datatypeconversion_function
NOTE: The string, numeric, datetime, and datatypeconversion functions are scalar
functions, in which an operation denoted by a function name is followed by a pair of
parentheses enclosing zero or more specified arguments. Each scalar function returns a
single value.
set_function ::= COUNT (*) |
{ AVG | MAX | MIN | SUM | COUNT }
( { ALL | DISTINCT } expression )
null_function ::= { NULLVAL_CHAR( ) | NULLVAL_INT( ) }
datatypeconversion_function ::= CONVERT_CHAR(value_exp) |
CONVERT_DATE(value_exp) |
CONVERT_DECIMAL(value_exp) |
CONVERT_DOUBLE(value_exp) |
CONVERT_FLOAT(value_exp) |
CONVERT_INTEGER(value_exp) |
CONVERT_LONGVARCHAR(value_exp) |
CONVERT_NUMERIC(value_exp) |
CONVERT_REAL(value_exp) |
CONVERT_SMALLINT(value_exp) |
CONVERT_TIME(value_exp) |
CONVERT_TIMESTAMP(value_exp) |
CONVERT_TINYINT(value_exp) |
CONVERT_VARCHAR(value_exp)
Note: These functions are used to implement the
{fn CONVERT(value, odbc_typename)}
escape clauses defined by ODBC. The preferred way, how-
ever, is to use
CAST(value AS sql_typename)
which is defined in SQL-92 and fully supported by Solid.
For details, see Appendix F of the Solid Programmer
Guide.
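For example, the two statements below perform the same conversion; the second, SQL-92 form is the preferred one (mytable is a hypothetical table):

```sql
-- ODBC-style conversion function:
SELECT CONVERT_INTEGER('42') FROM mytable;

-- Preferred SQL-92 form:
SELECT CAST('42' AS INTEGER) FROM mytable;
```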
case_expression ::= case_abbreviation | case_specification
case_abbreviation ::= NULLIF(value_exp, value_exp) |
COALESCE(value_exp {, value_exp}…)
The NULLIF function returns NULL if the first parameter is
equal to the second parameter; otherwise, it returns the first
parameter. It is equivalent to
IF (p1 = p2) THEN RETURN NULL ELSE RETURN p1;
The NULLIF function is useful if you have a special value
that serves as a flag to indicate NULL. You can use NUL-
LIF to convert that special value to NULL. In other words, it
behaves like
IF (p1 = NullFlag) THEN RETURN NULL ELSE RETURN
p1;
COALESCE returns the first non-NULL argument. The list
of arguments may be of almost any length. All arguments
should be of the same (or compatible) data types.
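For example, if a hypothetical results table uses -1 as a flag for a missing score, the two abbreviations can be combined:

```sql
-- Turn the flag value -1 into NULL, then substitute a default of 0.
SELECT COALESCE(NULLIF(score, -1), 0) FROM results;
```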
case_specification ::= CASE [value_exp]
WHEN value_exp
THEN {value_exp}
[WHEN value_exp
THEN {value_exp} …]
[ELSE {value_exp}]
END
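A sketch of the specification, using a hypothetical orders table and status codes:

```sql
SELECT CASE status
         WHEN 1 THEN 'open'
         WHEN 2 THEN 'shipped'
         ELSE 'unknown'
       END
FROM orders;
```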
cast_expression ::= CAST (value_exp AS data_type)
row value constructor A row value constructor (RVC) is an ordered sequence of
expression values delimited by parentheses, for example:
(1, 4, 9)
('Smith', 'Lisa')
You can think of this as constructing a row based on a series
of elements/values, just like a row of a table is composed of
a series of fields.
For more information about row value constructors, see
“Row value constructors” on page 4-38.
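For example, assuming the server supports row value comparisons as described in the referenced section, a constructor can be compared against a pair of columns (customer is a hypothetical table):

```sql
SELECT * FROM customer
WHERE (last_name, first_name) = ('Smith', 'Lisa');
```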
String Functions
Function Purpose
SUBSTRING(str, start, length) Derives a substring length bytes long from str, beginning at
start. For example, if str="First Second Third", then
SUBSTRING(str, 7, 6) would return "Second".
Note that string positions are numbered starting from 1 (not
0).
TRIM(str) Removes leading and trailing spaces in str
UCASE(str) Converts str to uppercase
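The three functions can be combined in one expression; for example (the customers table and its name column are hypothetical):

```sql
-- Trim surrounding spaces, take the first 5 bytes, and uppercase them.
SELECT UCASE(SUBSTRING(TRIM(name), 1, 5)) FROM customers;
```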
If you are using wildcard characters in your string operations, then see also “Wildcard char-
acters” on page B-184
Numeric Functions
Function Purpose
ABS(numeric) Absolute value of numeric
ACOS(float) Arccosine of float, where float is expressed in radians
ASIN(float) Arcsine of float, where float is expressed in radians
ATAN(float) Arctangent of float, where float is expressed in radians
ATAN2(float1, float2) Arctangent of the x and y coordinates, specified by float1
and float2, respectively, as an angle, expressed in radians
CEILING(numeric) Smallest integer greater than or equal to numeric
COS(float) Cosine of float, where float is expressed in radians
COT(float) Cotangent of float, where float is expressed in radians
DEGREES(numeric) Converts numeric radians to degrees
EXP(float) Exponential value of float
FLOOR(numeric) Largest integer less than or equal to numeric
LOG(float) Natural logarithm of float
LOG10(float) Base 10 log of float
MOD(integer1, integer2) Modulus of integer1 divided by integer2
PI() Pi as a floating point number
POWER(numeric, integer) Value of numeric raised to the power of integer
RADIANS(numeric) Converts from numeric degrees to radians
ROUND(numeric, integer) Numeric rounded to integer
SIGN(numeric) Sign of numeric
SIN(float) Sine of float, where float is expressed in radians
SQRT(float) Square root of float
TAN(float) Tangent of float, where float is expressed in radians
TRUNCATE(numeric, integer) Numeric truncated to integer
Date/Time Functions
Function Purpose
CURDATE() Returns the current date
CURTIME() Returns the current time
DAYNAME(date) Returns a string with the day of the week
DAYOFMONTH(date) Returns the day of the month as an integer between 1 and 31
DAYOFWEEK(date) Returns the day of the week as an integer between 1 and 7,
where 1 represents Sunday
DAYOFYEAR(date) Returns the day of the year as an integer between 1 and 366
EXTRACT(date field FROM date_exp) Isolates a single field of a datetime or an interval and
converts it to a number.
HOUR(time_exp) Returns the hour as an integer between 0 and 23
MINUTE(time_exp) Returns the minute as an integer between 0 and 59
MONTH(date) Returns the month as an integer between 1 and 12
MONTHNAME(date) Returns the month name as a string
NOW() Returns the current date and time as a timestamp
QUARTER(date) Returns the quarter as an integer between 1 and 4
SECOND(time_exp) Returns the second as an integer between 0 and 59
TIMESTAMPADD(interval, integer_exp, timestamp_exp) Calculates a timestamp by adding
integer_exp intervals of type interval to timestamp_exp
Keywords used to express valid TIMESTAMPADD interval
values are:
SQL_TSI_FRAC_SECOND
SQL_TSI_SECOND
SQL_TSI_MINUTE
SQL_TSI_HOUR
SQL_TSI_DAY
SQL_TSI_WEEK
SQL_TSI_MONTH
SQL_TSI_QUARTER
SQL_TSI_YEAR
TIMESTAMPDIFF(interval, timestamp_exp1, timestamp_exp2) Returns the integer number of
intervals by which timestamp_exp2 is greater than timestamp_exp1
Keywords used to express valid TIMESTAMPDIFF interval
values are:
SQL_TSI_FRAC_SECOND
SQL_TSI_SECOND
SQL_TSI_MINUTE
SQL_TSI_HOUR
SQL_TSI_DAY
SQL_TSI_WEEK
SQL_TSI_MONTH
SQL_TSI_QUARTER
SQL_TSI_YEAR
WEEK(date) Returns the week of the year as an integer between 1 and 52
YEAR(date) Returns the year as an integer
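For example, several of the functions above can be combined in one query (the orders table and its order_date column are hypothetical):

```sql
-- Year and quarter of each order, plus a due date 30 days later.
SELECT YEAR(order_date),
       QUARTER(order_date),
       TIMESTAMPADD(SQL_TSI_DAY, 30, order_date)
FROM orders;
```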
System Functions
The system functions return special information about the Solid database.
Function Purpose
USER() Returns the user authorization name
UIC() Returns the connection id associated with the connection
CURRENT_USERID () Returns the current user id
LOGIN_USERID () Returns the login userid
CURRENT_CATALOG () Returns the current catalog
LOGIN_CATALOG () Returns the login catalog
CURRENT_SCHEMA () Returns the current schema
LOGIN_SCHEMA () Returns the login schema
Miscellaneous Functions
Function Purpose
BIT_AND(integer1, integer2) Returns the result of the bit-wise AND operation.
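For example, BIT_AND(12, 10) evaluates to 8, because 1100 AND 1010 is 1000 in binary (mytable is a hypothetical table):

```sql
SELECT BIT_AND(12, 10) FROM mytable; -- 8 for each row
```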
Data_type
data_type ::= {BINARY |
CHAR [ length ] | DATE |
DECIMAL [ ( precision [ , scale ] ) ] |
DOUBLE PRECISION |
FLOAT [ ( precision ) ] |
INTEGER |
LONG VARBINARY |
LONG VARCHAR |
LONG WVARCHAR |
NUMERIC [ ( precision [ , scale ] ) ] |
REAL |
SMALLINT |
TIME |
TIMESTAMP [ ( timestamp precision ) ] |
TINYINT | VARBINARY |
VARCHAR [ ( length ) ] } |
WCHAR |
WVARCHAR [ length ]
Date/time literal
date_literal 'YYYY-MM-DD'
time_literal 'HH:MM:SS'
timestamp_literal 'YYYY-MM-DD HH:MM:SS'
Pseudo Columns
The following pseudo columns may also be used in the select-list of a SELECT statement:
Note
Since ROWID and ROWVER refer to a single row, they may only be used with queries that
return rows from a single table.
Wildcard characters
The following may be used as wildcard characters in certain expressions, such as
LIKE '<string>'.
Character Explanation
_ (underscore) The underscore character matches any single character. For
example, 'J_NE' matches 'JANE' and 'JUNE'.
% (percent sign) The percent sign character matches any group of 0 or more
characters. For example 'ED%' matches 'EDWARD' and 'EDITOR'. As
another example, '%ED%' matches 'EDWARD', 'TEDDY', and
'FRED'.
For example, a condition such as LIKE '%JO%' returns all people who have JO somewhere
in their name, including but not limited to: JOANNE, BILLY JO, and LONG JOHN SILVER.
Multiple wildcards are allowed in a single string. For example, the string J_V_ matches
JAVA and JIVE and any other four-character words or names that start with J and have V as
the third character. Note that because the underscore (_) only matches exactly one character,
the string J_V_ does not match the string JOVIAL, which has more than four characters.
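The examples above translate directly into LIKE predicates (the names table and its name column are hypothetical):

```sql
SELECT * FROM names WHERE name LIKE 'J_NE'; -- JANE, JUNE
SELECT * FROM names WHERE name LIKE '%ED%'; -- EDWARD, TEDDY, FRED
SELECT * FROM names WHERE name LIKE 'J_V_'; -- JAVA, JIVE, but not JOVIAL
```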
Note
Solid SQL allows some reserved words to be used as identifiers even if those words are not
in double quotes. However, we strongly recommend that you use double quotes around any
reserved word that you want to use as an identifier; this will increase portability.
BOTH • •
BREADTH (•)
BY • • • •
CALL (•) •
CASCADE • • • •
CASCADED • • •
CASE • • •
CAST • • •
CATALOG • • •
CHAR • • • •
CHAR_LENGTH • •
CHARACTER • • •
CHARACTER_LENGTH • •
CHECK • • • •
CLOSE • • • •
COALESCE • • •
COLLATE • •
COLLATION • •
COLUMN • • •
COMMIT • • • •
COMMITBLOCK •
COMMITTED •
COMPLETION (•)
CONNECT • • •
DATA (•) •
DATE • • •
DAY • •
DEALLOCATE • • •
DEC • • • •
DECIMAL • • • •
DECLARE • • • •
DEFAULT • • • •
DEFERRABLE • •
DEFERRED • •
DELETE • • • •
DESC • • • •
DESCRIBE • • •
DESCRIPTOR • • •
DIAGNOSTICS • • •
DICTIONARY (•)
DISCONNECT • • •
DISTINCT • • • •
DOMAIN • • •
DOUBLE • • • •
DROP • • • •
EACH (•)
ELSE • • •
ELSEIF (•) •
ENABLE •
END • • • •
END-EXEC • •
EQUALS (•)
ESCAPE • • •
EVENT •
EXCEPT • • •
EXCEPTION • • •
EXEC • • • •
EXECUTE • • • •
EXISTS • • • •
EXPLAIN •
EXPORT •
GET • • • •
GLOBAL • •
GO • •
GOTO • • •
GRANT • • • •
GROUP • • • •
HAVING • • • •
HINT •
HOUR • •
IDENTIFIED •
IDENTITY • •
NOT • • • •
NULL • • • •
NULLIF • • •
NUMERIC • • • •
OBJECT (•)
OCTET_LENGTH • •
OF • • • •
OFF • (•)
OID (•)
OLD (•) •
ON • • • •
ONLY • • •
OPEN • • •
OPERATION (•)
OPERATORS (•)
OPTIMISTIC •
OPTION • • •
OR • • • •
ORDER • • • •
OTHERS (•)
OUTER • • •
OUTPUT • •
OVERLAPS • •
PARAMETERS (•)
PARTIAL • •
PASCAL •
PESSIMISTIC •
PLAN •
PLI •
POSITION • •
POST •
PRECISION • • • •
PREORDER (•)
PREPARE • • • •
PRESERVE • •
PRIMARY • • • •
PRIOR • •
PRIVATE (•)
PRIVILEGES • • •
PROCEDURE • • •
PROPAGATE •
PROTECTED (•)
PUBLIC • • • •
PUBLICATION •
READ • •
REAL • • •
RECURSIVE (•)
REF (•)
REFERENCES • • • •
REFERENCING (•) •
REFRESH •
REGISTER •
RELATIVE • •
REPLICA •
REPLY •
RESIGNAL (•)
RESTART •
RESTRICT • • • •
RESULT •
RETURN (•) •
RETURNS (•) •
REVERSE •
REVOKE • • • •
RIGHT • • •
ROLE (•) •
ROLLBACK • • • •
ROUTINE (•)
ROW (•)
ROWID •
ROWNUM •
ROWSPERMESSAGE •
ROWVER •
ROWS • •
SAVEPOINT (•) •
SCAN •
SCHEMA • • •
SCROLL • •
SEARCH (•)
SEQUENCE (•) •
SERIALIZABLE •
SESSION • •
SESSION_USER • •
SET • • • •
SIGNAL (•)
SIMILAR (•)
SIZE • •
SMALLINT • • • •
SOME • • •
SORT •
SPACE •
SQL • • • •
SQLCA • •
SQLCODE • •
SQLERROR • • • •
SQLEXCEPTION (•)
SQLSTATE • •
SQLWARNING • (•)
START •
STRUCTURE (•)
SUBSCRIBE •
SUBSCRIPTION •
SUBSTRING • •
THEN • • •
THERE (•)
TIME • • •
TIMEOUT •
TIMESTAMP • • •
TIMEZONE_HOUR • •
TIMEZONE_MINUTE • •
TINYINT •
TO • • • •
TRAILING •
TRANSACTION • • •
TRANSACTIONS •
TRANSLATE • •
TRANSLATION • •
TRIGGER (•) •
TRIM • •
TRUE • •
TRUNCATE •
TYPE (•)
UNDER (•)
UNION • • • •
VARWCHAR •
VARYING • • •
VIEW • • • •
VIRTUAL (•)
VISIBLE (•)
WAIT (•) •
WCHAR •
WHEN • • •
WHENEVER • • •
WHERE • • • •
WHILE (•) •
WITH • • • •
WITHOUT (•)
WORK • • • •
WRITE • •
NOTES:
CASCADED: The word CASCADED is reserved in Solid database servers; however, the
word is not currently used in any Solid SQL statements.
SYSTEM TABLES
SQL_LANGUAGES
The SQL_LANGUAGES system table lists the SQL standards and SQL dialects that are
supported.
SYS_ATTAUTH
SYS_BACKGROUNDJOB_INFO
If the body of a START AFTER COMMIT statement cannot be started, the reason is logged
in the system table SYS_BACKGROUNDJOB_INFO. Only failed START AFTER COM-
MIT statements are logged in this table. If the statement (e.g. a procedure call) starts suc-
cessfully, no information is stored in this system table. Also, statements that start
successfully but do not finish executing are not stored in this system table.
The user can retrieve information from the table SYS_BACKGROUNDJOB_INFO either
by using an SQL SELECT statement or by calling the system procedure
SYS_GETBACKGROUNDJOB_INFO. See “SYS_BACKGROUNDJOB_INFO” on page
D-2 for more details.
Also, a system-defined event SYS_EVENT_SACFAILED is posted when a START AFTER
COMMIT statement fails to start. See “SYS_EVENT_SACFAILED” on page F-6 for more
details. The application can wait for this event and use the jobid to retrieve the error
message from the system table SYS_BACKGROUNDJOB_INFO.
The system table SYS_BACKGROUNDJOB_INFO can be emptied with the admin
command:
ADMIN COMMAND 'cleanbgjobinfo';
Only a DBA can execute the 'cleanbgjobinfo' command.
Column Name   Data Type   Description
ID            INTEGER
STMT          WVARCHAR    The statement that could not be executed.
USER_ID       INTEGER     The user or role ID.
ERROR_CODE    INTEGER     The error that occurred when the server tried to execute the statement.
ERROR_TEXT    WVARCHAR    A description of the error.
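The columns above can be inspected directly; a minimal query along these lines lists the failed statements with their error information:

```sql
-- List failed START AFTER COMMIT statements with their error information.
SELECT ID, STMT, ERROR_CODE, ERROR_TEXT
FROM SYS_BACKGROUNDJOB_INFO;
```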
SYS_CARDINAL
SYS_CATALOGS
The SYS_CATALOGS table lists the available catalogs.
SYS_COLUMNS
This table lists all system table columns.
There are no owner or user restrictions on viewing system columns: owners can view
columns other than those they have created, and users with no access rights, or with only
specific access rights, can still view any system column in this table.
SYS_EVENTS
SYS_FORKEYPARTS
SYS_FORKEYS
SYS_INFO
SYS_KEYPARTS
SYS_KEYS
SYS_PROCEDURES
This system table lists procedures.
Specific users are restricted from viewing procedures. Owners can view only the
procedures they have created, and other users can view only the procedures to which they
have execute access. Note that execute access does not allow users to see procedure
definitions. Users with no access rights cannot view any procedures. No restrictions apply
to DBAs.
SYS_PROCEDURE_COLUMNS
The SYS_PROCEDURE_COLUMNS table defines the input parameters and result set columns of procedures.
SYS_RELAUTH
SYS_SCHEMAS
The SYS_SCHEMAS table lists the available schemas.
SYS_SEQUENCES
SYS_SYNC_REPLICA_PROPERTIES
SYS_SYNONYM
SYS_TABLEMODES
SYS_TABLEMODES shows the mode only of tables for which the mode was explicitly set.
SYS_TABLEMODES doesn't show the mode of tables that were left at the default mode.
(The default mode is "optimistic" unless you set the solid.ini configuration parameter
Pessimistic=Yes.)
To list the names and modes of tables that were explicitly set to optimistic or pessimistic,
execute the command:
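A query along these lines returns the table names and their explicitly set modes; the join of SYS_TABLEMODES to SYS_TABLES on the table ID, and the column names used here, are assumptions based on the system table layouts described in this appendix:

```sql
-- Join SYS_TABLEMODES (assumed columns ID, MODE) to SYS_TABLES
-- to list tables whose concurrency mode was explicitly set.
SELECT SYS_TABLES.TABLE_NAME, SYS_TABLEMODES.MODE
FROM SYS_TABLEMODES, SYS_TABLES
WHERE SYS_TABLEMODES.ID = SYS_TABLES.ID;
```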
SYS_TABLES
This table lists all the system tables.
There are no restrictions for viewing the system tables, which means even users with no
access rights can view them. However, specific users are restricted from viewing the user
table information. Owners are restricted to viewing user tables they have created and users
can only view tables to which they have INSERT, UPDATE, DELETE, or SELECT access.
Users are restricted from viewing any user tables if they have no access rights. No
restrictions apply to DBAs.
SYS_TRIGGERS
This system table lists triggers.
Specific users are restricted from viewing triggers. Owners are restricted to viewing only
those triggers that they have created. Normal users are restricted from viewing triggers. No
restrictions apply to DBAs.
SYS_TYPES
SYS_UROLE
The SYS_UROLE table contains the mapping of users to roles.
SYS_USERS
The SYS_USERS table lists information about users and roles.
SYS_VIEWS
SYS_BULLETIN_BOARD
This table contains persistent parameters that are always available in the parameter bulletin
board when transactions are executed in this database catalog.
SYS_PUBLICATION_ARGS
This table contains the publication input arguments in this master database.
SYS_PUBLICATION_REPLICA_ARGS
This table contains the definition of the publication arguments in a replica database.
SYS_PUBLICATION_REPLICA_STMTARGS
This table contains the mapping between the publication arguments and the statements in the
replica.
SYS_PUBLICATION_REPLICA_STMTS
This table contains the definition of the publication statements in a replica database.
SYS_PUBLICATION_STMTARGS
This table contains mapping between the publication arguments and the statements in the
master database.
SYS_PUBLICATION_STMTS
This table contains the publication statements in the master database.
SYS_PUBLICATIONS
This table contains the publications that have been defined in this master database.
SYS_PUBLICATIONS_REPLICA
This table contains publications that are being used in this replica database.
SYS_SYNC_BOOKMARKS
This table contains bookmarks that are being used in a master database.
SYS_SYNC_HISTORY_COLUMNS
If you turn on synchronization history for a table, you may turn it on for all columns, or only
for a subset of columns. If you turn it on for a subset of columns, then the
SYS_SYNC_HISTORY_COLUMNS table records which columns you are keeping
synchronization history information for. There is one row in
SYS_SYNC_HISTORY_COLUMNS for each column that you keep synchronization
history for.
SYS_SYNC_INFO
This table contains synchronization information, one row for each node.
SYS_SYNC_MASTER_MSGINFO
This table contains information about the currently active message in the master database.
Data in this table is used to control the synchronization process between the replica and
master database. This table also contains information that is useful for troubleshooting
purposes. If the execution of a message halts in the master database due to an error, you can
query this table to obtain the reason for the problem, as well as the transaction and
statement that caused the error.
SYS_SYNC_MASTER_RECEIVED_MSGPARTS
This table contains parts of the messages that were received in the master database from a
replica database, but not yet processed in the master database.
SYS_SYNC_MASTER_RECEIVED_MSGS
This table contains messages that were received in the master database from a replica
database, but are not yet processed in the master database.
SYS_SYNC_MASTER_STORED_MSGPARTS
This table contains parts of the message result sets that were created in the master database,
but not yet sent to the replica database.
SYS_SYNC_MASTER_STORED_MSGS
This table contains messages that were created in the master database, but not yet sent to the
replica database.
SYS_SYNC_MASTER_SUBSC_REQ
This table contains the list of requested subscriptions waiting to be executed in the master.
SYS_SYNC_MASTER_VERSIONS
This table contains the list of subscriptions (that have been subscribed) to replica databases
from the master database.
SYS_SYNC_MASTERS
This table contains the list of master databases accessed by the replica.
SYS_SYNC_RECEIVED_STMTS
This table contains the propagated statements that have been received in the master database.
SYS_SYNC_REPLICA_MSGINFO
This table contains information about currently active messages in the replica database.
Data in this table is used to control the synchronization process between the replica and
master database. This table also contains information that is useful for troubleshooting
purposes. If the execution of a message halts in the replica database due to an error, you can
query this table to obtain the reason for the problem, as well as the transaction and
statement that caused the error.
SYS_SYNC_REPLICA_RECEIVED_MSGPARTS
This table contains parts of the reply messages that have been received into the replica
database from the master database, but are not yet processed in the replica database.
SYS_SYNC_REPLICA_RECEIVED_MSGS
This table contains reply messages that were received in the replica database from the
master database, but not yet processed in the replica database.
SYS_SYNC_REPLICA_STORED_MSGS
This table contains messages that were created in the replica database, but not yet sent to the
master database.
SYS_SYNC_REPLICA_STORED_MSGPARTS
This table contains parts of the messages that were created in the replica database, but not
yet sent to the master database.
SYS_SYNC_REPLICA_VERSIONS
This table contains the list of subscriptions (that have been subscribed) to this replica
database from the master database.
SYS_SYNC_REPLICAS
This table contains the list of replica databases registered with the master.
SYS_SYNC_SAVED_STMTS
This table contains statements that have been saved in replica database for later propagation.
SYS_SYNC_USERMAPS
This table maps replica user ids to master users in the SYS_SYNC_USERS table.
SYS_SYNC_USERS
This table contains a list of users that have access to the synchronization functions of the
replica database. These functions include saving transactions and creating synchronization
messages.
In a replica, the data of this table is downloaded from the master in a message with the
command:
MESSAGE unique-message-name APPEND SYNC_CONFIG
['sync-config-arg']
SYSTEM VIEWS
Solid Database Engine supports views as specified in the X/Open SQL Standard.
COLUMNS
The COLUMNS system view identifies the columns which are accessible to the current user.
SERVER_INFO
The SERVER_INFO system view provides attributes of the current database system or
server.
TABLES
The TABLES system view identifies the tables accessible to the current user.
USERS
The USERS system view identifies users and roles.
SYNCHRONIZATION-RELATED VIEWS
Solid provides four views that show information about synchronization messages between
masters and replicas. One pair of views (SYNC_FAILED_MESSAGES and
SYNC_FAILED_MASTER_MESSAGES) shows failed messages. The other pair
(SYNC_ACTIVE_MESSAGES and SYNC_ACTIVE_MASTER_MESSAGES) shows
active messages.
SYNC_FAILED_MESSAGES
This view is on the master and holds information about messages received from the replica.
It is possible to view all necessary information about failed messages with one simple query:
SELECT * FROM SYNC_FAILED_MESSAGES.
This returns the following columns:
All users have access to this view; no particular privileges are required.
SYNC_FAILED_MASTER_MESSAGES
This view is on the replica and holds information about messages sent to the master. It is
possible to view all necessary information about failed messages with one simple query:
SELECT * FROM SYNC_FAILED_MASTER_MESSAGES.
All users have access to this view; no particular privileges are required.
SYNC_ACTIVE_MESSAGES
This view is on the master and holds information about messages received from the replica.
This returns the following columns:
All users have access to this view; no particular privileges are required.
SYNC_ACTIVE_MASTER_MESSAGES
This view is on the replica and holds information about messages sent to the master. It is
possible to view all necessary information about active messages with one simple query:
SELECT * FROM SYNC_ACTIVE_MASTER_MESSAGES.
This returns the following columns:
All users have access to this view; no particular privileges are required.
SYNC_SETUP_CATALOG
CALL SYNC_SETUP_CATALOG (
catalog_name, -- WVARCHAR
node_name, -- WVARCHAR
is_master, -- INTEGER
is_replica -- INTEGER
)
EXECUTES ON: master or replica.
The SYNC_SETUP_CATALOG() procedure creates a catalog, assigns it a node name, and
sets the role of the catalog to be master, replica, or both.
If the catalog_name parameter is NULL, then the current catalog is assigned the specified
node name and role(s).
For is_master and is_replica, a value of 0 means "no"; any other value means "yes". At least
one of these should be non-zero, of course. Note that because a single catalog can be both a
replica and a master, it is legal to set both is_master and is_replica to non-zero values.
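For example, a call along these lines makes the current catalog act as both master and replica; the node name 'node1' is illustrative:

```sql
-- Assign the node name 'node1' to the current catalog (catalog_name = NULL)
-- and give the catalog both the master role and the replica role.
CALL SYNC_SETUP_CATALOG (NULL, 'node1', 1, 1);
COMMIT WORK;
```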
Error Codes
Table 6–2
RC     Text                                            Description
13047  No privilege for operation
13110  NULL not allowed                                Only the catalog name can be NULL; all other parameters must be non-NULL.
13133  Not a valid license for this product.
25031  Transaction is active, operation failed.        The user has made some changes that have not yet been committed.
25052  Failed to set node name to node_name.           The node_name may be invalid.
25059  After registration nodename cannot be changed.  Catalog has a name already and has one or more replicas.
SYNC_REGISTER_REPLICA
CALL SYNC_REGISTER_REPLICA (
replica_node_name, -- WVARCHAR
replica_catalog_name, -- WVARCHAR
master_network_name, -- VARCHAR
master_node_name, -- WVARCHAR
user_id, -- WVARCHAR
password -- WVARCHAR
)
EXECUTES ON: replica.
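A sketch of a registration call follows; the replica and catalog names, the connect string, and the credentials are illustrative assumptions. Per error 25056 below, the call must be run with autocommit off, and per error 13110 the master node name may be NULL:

```sql
-- Register the replica catalog 'replica_cat' (node name 'replica1') with
-- the master reachable at 'tcp masterhost 1313', logging in as 'dba'.
CALL SYNC_REGISTER_REPLICA ('replica1', 'replica_cat',
                            'tcp masterhost 1313', NULL,
                            'dba', 'dba_pwd');
COMMIT WORK;
```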
Table 6–3
RC     Text                                            Description
13047  No privilege for operation
13110  NULL not allowed                                Only the catalog name and master node name can be NULL; all other parameters must be non-NULL.
13133  Not a valid license for this product.
21xxx  Communication error                             Was not able to connect to master. For more details about 21xxx errors, see the appendix of the Solid Administrator Guide titled "Error Codes".
25005  Message is already active.
25031  Transaction is active, operation failed.        The user has made some changes that have not yet been committed.
25035  Message is in use.
25051  Unfinished messages found.
25052  Failed to set node name to node_name.           The node_name may be invalid.
25056  Autocommit not allowed.                         You must run this stored procedure with autocommit off.
25057  The replica database has already been registered to a master database.
25059  After registration nodename cannot be changed.
SYNC_UNREGISTER_REPLICA
CALL SYNC_UNREGISTER_REPLICA (
replica_catalog_name, -- WVARCHAR
drop_catalog, -- integer
force -- integer
)
EXECUTES ON: replica.
The SYNC_UNREGISTER_REPLICA() system procedure unregisters the specified replica
catalog from the master and optionally drops the replica catalog if the drop_catalog
parameter has a nonzero value. Any possibly hanging messages for this replica are deleted
at both ends of the system. The user must have Administrator or Synchronization
Administrator access rights.
If the replica catalog name is NULL, then the current catalog is used. If force is non-zero,
then the master accepts unregistration even if messages for this replica exist in the master. In
that case, those messages are deleted.
If the user has any uncommitted changes (i.e. open transactions), then the call will fail with
an error.
This system procedure does not return a resultset.
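A sketch of an unregistration call; the catalog name is illustrative. The catalog is named explicitly because, per error 13110 below, the catalog name cannot be NULL when drop_catalog is non-zero:

```sql
-- Unregister the replica catalog 'replica_cat', drop the catalog
-- (drop_catalog = 1), and force the operation even if unprocessed
-- messages for this replica remain in the master (force = 1).
CALL SYNC_UNREGISTER_REPLICA ('replica_cat', 1, 1);
COMMIT WORK;
```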
Error Codes
Table 6–4
RC     Text                                            Description
13047  No privilege for operation
13110  NULL not allowed                                Catalog name cannot be NULL if drop_catalog is non-zero.
13133  Not a valid license for this product.
21xxx  Communication error                             Was not able to connect to master. For more details about 21xxx errors, see the appendix of the Solid Administrator Guide titled "Error Codes".
25005  Message is already active.
25019  Database is not a replica database.
25020  Database is not a master database.
25023  Replica not registered.
25031  Transaction is active, operation failed.        The user has made some changes that have not yet been committed.
25035  Message is in use.
25051  Unfinished messages found.
25056  Autocommit not allowed.                         You must run this stored procedure with autocommit off.
25079
25093
SYNC_REGISTER_PUBLICATION
CALL SYNC_REGISTER_PUBLICATION (
replica_catalog_name, -- WVARCHAR
publication_name -- WVARCHAR
)
EXECUTES ON: replica.
The SYNC_REGISTER_PUBLICATION() system procedure registers a publication from
the master database.
If the replica catalog name is NULL, then the current catalog is used.
If the user has uncommitted changes, then the call will fail with an error.
This system procedure does not return a resultset.
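For example, a call along these lines registers the current catalog to a publication; the publication name 'customers_by_area' is illustrative. Run with autocommit off (see error 25056 below):

```sql
-- Register the current replica catalog (NULL) to the publication
-- 'customers_by_area' defined in the master database.
CALL SYNC_REGISTER_PUBLICATION (NULL, 'customers_by_area');
COMMIT WORK;
```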
Error Codes
Table 6–5
RC     Text                                            Description
13047  No privilege for operation
13110  NULL not allowed                                Only the catalog name can be NULL; all other parameters must be non-NULL.
13133  Not a valid license for this product.
21xxx  Communication error                             Was not able to connect to master. For more details about 21xxx errors, see the appendix of the Solid Administrator Guide titled "Error Codes".
25005  Message is already active.
25010  Publication not found.
25019  Database is not a replica database.
25020  Database is not a master database.
25023  Replica not registered.
25035  Message is in use.
25056  Autocommit not allowed.                         You must run this stored procedure with autocommit off.
25072  Already registered to publication.
25072 Already registered to publication.
SYNC_UNREGISTER_PUBLICATION
CALL SYNC_UNREGISTER_PUBLICATION (
replica_catalog_name, -- WVARCHAR
publication_name, -- WVARCHAR
drop_data -- INTEGER
)
EXECUTES ON: replica.
The SYNC_UNREGISTER_PUBLICATION() system procedure unregisters a publication.
If the drop_data flag is set to a non-zero value, then all subscriptions to the publication are
automatically dropped.
If the replica catalog name is NULL, then the current catalog is used.
If the user has uncommitted changes, then the call will fail with an error.
This system procedure does not return a resultset.
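For example, a call along these lines unregisters the current catalog from a publication and drops the subscribed data; the publication name is illustrative:

```sql
-- Unregister the current replica catalog (NULL) from 'customers_by_area'
-- and drop all subscriptions to that publication (drop_data = 1).
CALL SYNC_UNREGISTER_PUBLICATION (NULL, 'customers_by_area', 1);
COMMIT WORK;
```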
Error Codes
Table 6–6
RC     Text                                            Description
13047  No privilege for operation
13110  NULL not allowed                                Only the catalog name can be NULL; all other parameters must be non-NULL.
13133  Not a valid license for this product.
21xxx  Communication error                             Was not able to connect to master. For more details about 21xxx errors, see the appendix of the Solid Administrator Guide titled "Error Codes".
25005  Message is already active.
25010  Publication not found.
25019  Database is not a replica database.
25020  Database is not a master database.
25023  Replica not registered.
25031  Transaction is active, operation failed.        The user has made some changes that have not yet been committed.
25035  Message is in use.
25056  Autocommit not allowed.                         You must run this stored procedure with autocommit off.
25071  Not registered to publication.
SYNC_SHOW_SUBSCRIPTIONS
CREATE PROCEDURE SYNC_SHOW_SUBSCRIPTIONS (
publication_name -- WVARCHAR
)
EXECUTES ON: replica.
Often it is useful for the application to know which subscriptions (i.e. publication name and
parameters as a string representation) of a publication are active in replica or master
database(s). This functionality is available in both master and replica catalogs. Use this
procedure (SYNC_SHOW_SUBSCRIPTIONS) in the replica catalog. Use the procedure
SYNC_SHOW_REPLICA_SUBSCRIPTIONS in the master catalog.
If the publication is not found, then error 25010 'Publication not found' is returned.
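For example, in the replica catalog, a call along these lines lists the active subscriptions to a publication; the publication name is illustrative:

```sql
-- In the replica catalog: list active subscriptions (name plus parameter
-- string) to the publication 'customers_by_area'.
CALL SYNC_SHOW_SUBSCRIPTIONS ('customers_by_area');
```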
The resultset of this procedure call is:
Table 6–7
Column Name         Data Type   Description
SUBSCRIPTION        WVARCHAR    Publication name and parameters as a string.
SUBSCRIPTION_TIME   TIMESTAMP   Time of last subscription.
Error Codes
Table 6–8
RC     Text                                    Description
13047  No privilege for operation
13133  Not a valid license for this product.
25009  Replica not found.
25010  Publication not found.
25019  Database is not a replica database.
25020  Database is not a master database.
25023  Replica not registered.
25071  Not registered to publication.
See Also
SYNC_SHOW_REPLICA_SUBSCRIPTIONS
SYNC_SHOW_REPLICA_SUBSCRIPTIONS
Syntax in master:
CREATE PROCEDURE SYNC_SHOW_REPLICA_SUBSCRIPTIONS (
replica_name, -- WVARCHAR
publication_name -- WVARCHAR
)
EXECUTES ON: master.
Often it is useful for the application to know which subscriptions (i.e. publication name and
parameters as a string representation) of a publication are active in a specified replica
database(s). This functionality is available in both master and replica catalogs.
If the publication name is NULL, then subscriptions to all publications are listed.
If the replica name is NULL, then all subscriptions from all replicas are listed.
If the publication is not found, then error 25010 'Publication not found' is returned.
If the replica is not found, then error 25009 'Replica not found' is returned.
If the replica is not registered to the specified publication, then error 25071 'Not registered
to publication' is returned.
The resultset of this procedure call is:
Table 6–9
Column Name         Data Type   Description
REPLICA_NAME        WVARCHAR    Replica name.
SUBSCRIPTION        WVARCHAR    Publication name and parameters as a string.
SUBSCRIPTION_TIME   TIMESTAMP   Time of last subscription.
Error Codes
Table 6–10
RC     Text                                    Description
13047  No privilege for operation
13133  Not a valid license for this product.
25009  Replica not found.
25010  Publication not found.
25019  Database is not a replica database.
25020  Database is not a master database.
25023  Replica not registered.
25071  Not registered to publication.
See Also
SYNC_SHOW_SUBSCRIPTIONS
SYNC_DELETE_MESSAGES
CALL SYNC_DELETE_MESSAGES()
EXECUTES ON: replica.
If a replica application creates many messages and does not properly check and handle
errors, then many messages may be left hanging. Often the right way to recover is to delete
all of them, regardless of the state of the messages, at both the master and replica ends. If the
other database is not available, this operation proceeds in the originating database but
returns a warning, e.g. "operation in master failed".
This procedure does not return a resultset.
Error Codes
Table 6–11
RC Text Description
13047 No privilege for operation
13133 Not a valid license for this product.
25005 Message is already active.
25009 Replica not found.
25019 Database is not a replica database
25020 Database is not a master database.
25035 Message is in use.
See Also
SYNC_DELETE_REPLICA_MESSAGES
SYNC_DELETE_REPLICA_MESSAGES
CALL SYNC_DELETE_REPLICA_MESSAGES(
replica_name -- WVARCHAR
)
EXECUTES ON: master.
If a replica application creates many messages and does not properly check and handle
errors, then many messages may be left hanging. Often the right way to recover is to delete
all of them, regardless of the state of the messages, at both the master and replica ends. If the
other database is not available, this operation proceeds in the originating database but
returns a warning, e.g. "operation in replica failed".
This procedure does not return a resultset.
Error Codes
Table 6–12
RC Text Description
13047 No privilege for operation
13133 Not a valid license for this product.
25005 Message is already active.
25009 Replica not found.
25019 Database is not a replica database
25020 Database is not a master database.
25035 Message is in use.
See Also
SYNC_DELETE_MESSAGES
The user can retrieve information from the table SYS_BACKGROUNDJOB_INFO either
by using an SQL SELECT statement or by calling the system stored procedure
SYS_GETBACKGROUNDJOB_INFO. The procedure
SYS_GETBACKGROUNDJOB_INFO returns the row that matches the given jobid. The
jobid is the job ID of the START AFTER COMMIT statement that was executed. (The job
ID is returned by the server when the START AFTER COMMIT statement is executed.)
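For example, if the server returned job ID 123 when the START AFTER COMMIT statement was executed (the value is illustrative), the matching row can be retrieved with:

```sql
-- Retrieve the failure information for background job 123.
CALL SYS_GETBACKGROUNDJOB_INFO (123);
```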
Miscellaneous Events
The following events are mostly related to the server's internal scheduling and
"housekeeping". For example, there are events related to backups, checkpoints, and merges.
Although users do not post these events, in many cases users may indirectly cause events,
for example when requesting a backup, or when turning on "Maintenance Mode". You may
monitor these events if you wish.
Table 1:
Unless noted otherwise, each event carries the parameters ENAME WVARCHAR,
POSTSRVTIME TIMESTAMP, UID INTEGER, NUMDATAINFO INTEGER, and
TEXTDATA WVARCHAR.
SYS_EVENT_BACKUP
The system has started or completed a backup operation. The "state" parameter
(NUMDATAINFO) indicates:
0: backup completed.
1: backup started.
Note that the server also posts a second event (SYS_EVENT_MESSAGES) when it starts or
completes a backup.
SYS_EVENT_BACKUPREQ
A backup operation has been requested (but has not yet started). If the user application's
callback function returns non-zero, then the backup is not performed. This event can be
caught by the user only if the user is using the AcceleratorLib. None of the parameters are
used.
SYS_EVENT_CHECKPOINT
The system has started or completed a checkpoint operation. If the system started a
checkpoint, then the "state" parameter (NUMDATAINFO) is 1, and the message
(TEXTDATA) parameter is "started". If the system completed a checkpoint, then the "state"
parameter (NUMDATAINFO) is 0, and the message (TEXTDATA) parameter is
"completed".
SYS_EVENT_CHECKPOINTREQ
A checkpoint operation has been requested (but has not yet started). Checkpoints are
typically executed each time a certain number of log writes has completed. If the user
application's callback function returns non-zero, then the checkpoint is not performed. This
event can be caught by the user only if the user is using the AcceleratorLib. None of the
parameters are used.
SYS_EVENT_ERROR
Some type of server error has occurred. The message parameter (TEXTDATA) contains the
error text. See “Errors That Cause SYS_EVENT_ERROR” on page F-8 for a list of server
errors that can cause this event to be posted.
SYS_EVENT_IDLE
The system is idle. (Note that some tasks have a priority of "idle" and are only run when the
system is not running any other tasks. Because very low priority tasks may be running in an
"idle" system, the system is not necessarily truly idle in the sense of not doing anything.)
This event can be caught by the user only if the user is using the AcceleratorLib. None of
the parameters are used.
SYS_EVENT_ILL_LOGIN
There has been an illegal login attempt. The username (TEXTDATA) and user ID
(NUMDATAINFO) indicate the user who tried to log in.
SYNC_MAINTENANCEMODE_BEGIN
When the sync mode changes from NORMAL to MAINTENANCE, the server will send
this system event. The node_name is the name of the node in which maintenance mode
started. (Remember that a single Solid server may have multiple "nodes" (catalogs).) For
more details about sync mode, see “SET SYNC MODE” on page B-151. Parameter:
node_name WVARCHAR.
SYNC_MAINTENANCEMODE_END
When the sync mode changes from MAINTENANCE to NORMAL, the server will send
this system event. The node_name is the name of the node in which maintenance mode
ended. (Remember that a single Solid server may have multiple "nodes" (catalogs).) For
more details about sync mode, see “SET SYNC MODE” on page B-151. Parameter:
node_name WVARCHAR.
SYS_EVENT_MERGE
An event associated with the "merge" operation (merging data from the Bonsai Tree to the
main storage tree) has occurred. The parameter STATE (NUMDATAINFO) gives more
details:
0: stop the merge
1: start the merge
2: merge progressing
3: merge accelerated.
SYS_EVENT_MERGEREQ
A merge operation has been requested (but has not yet started). If the user application's
callback function returns non-zero, then the merge is not performed. This event can be
caught by the user only if the user is using the AcceleratorLib. None of the parameters are
used.
SYS_EVENT_MESSAGES
This event is posted when the server has a message (error message or warning message) to
log to solerror.out or solmsg.out. In this case, TEXTDATA contains the message text and
NUMDATAINFO the code. If the message to be written is an error, then both
SYS_EVENT_ERROR and SYS_EVENT_MESSAGES will be posted. If the message is
only a warning, then only SYS_EVENT_MESSAGES is posted. For a list of the warnings
that can cause SYS_EVENT_MESSAGES, see “Conditions or Warnings That Cause
SYS_EVENT_MESSAGES” on page F-10. Parameters: ENAME WVARCHAR,
POSTSRVTIME TIMESTAMP, UID INTEGER, NUMDATAINFO INTEGER,
MESSAGE WVARCHAR.
SYS_EVENT_NOTIFY
Event sent with the admin command 'notify'.
SYS_EVENT_PARAMETER
This event is posted if a configuration parameter is changed with the command
ADMIN COMMAND 'parameter...';
The parameter MESSAGE (TEXTDATA) contains the section name (e.g. "SRV") and the
parameter name.
SYS_EVENT_ROWS2MERGE
This event indicates that there are rows that need to be merged from the Bonsai Tree to the
main storage tree. The rows parameter (NUMDATAINFO) indicates the number of
non-merged rows in the Bonsai Tree.
SYS_EVENT_SACFAILED
This event is posted when a START AFTER COMMIT (SAC) fails. The application can
wait for this event and use the job ID (which is in the NUMDATAINFO field) to retrieve
the error message from the system table SYS_BACKGROUNDJOB_INFO. (The job ID in
NUMDATAINFO matches the job ID that is returned when the START AFTER COMMIT
statement is executed.)
SYS_EVENT_SHUTDOWNREQ
A shutdown request has been received. If the user application's callback function returns
non-zero, then shutdown is not performed. This event can be caught by the user only if the
user is using the AcceleratorLib. None of the parameters are used.
SYS_EVENT_STATE_MONITOR
This event is posted when monitoring settings are changed. State (NUMDATAINFO) is one
of the following:
0: monitoring off.
1: monitoring on.
UID is the user ID of the user for whom monitoring was turned on or off.
SYS_EVENT_STATE_OPEN
This event is posted when the "state" of the database is changed. The parameter STATE
(NUMDATAINFO) indicates the new state:
0: Closed. No new connections allowed.
1: Opened. New connections allowed.
SYS_EVENT_STATE_SHUTDOWN
This event is posted when a server shutdown is started. Note that the NUMDATAINFO and
TEXTDATA parameters have no useful information.
SYS_EVENT_STATE_TRACE
Server trace is turned on or off with
ADMIN COMMAND 'trace';
The parameter STATE (NUMDATAINFO) indicates the new trace state:
0: tracing off.
1: tracing on.
SYS_EVENT_TMCMD
This event is posted when an "AT" command (i.e. a timed command) is executed. The
message parameter (TEXTDATA) contains the command.
SYS_EVENT_TRX_TIMEOUT
This event is currently not used.
SYS_EVENT_USERS
The parameter REASON (NUMDATAINFO) contains the reason for the event:
0: User connected.
1: User disconnected.
2: User disconnected abnormally.
4: User disconnected because of timeout.
SYS_EVENT_ERROR
The numbers in the "Error Code" column match the error code numbers in the appendix
"Error Codes" in the Solid Administrator Guide. These values get passed in the
NUMDATAINFO event parameter.
Table 2:
Error code Error description
30104 Shutdown aborted; denied by user callback
30208 Merge not started; denied by user callback
30284 Checkpoint not started; denied by user callback
30302 Backup start failed. Shutdown is in progress
30302 Backup start failed. Backup is already active
30303 Backup aborted
30304 Backup failed. <error description>
30305 Backup not started; denied by user callback
30306 Backup not started; Backup is not supported on diskless server.
30307 Backup not started, index check failed. Errors written to file ssdebug.log.
30360 AT command failed. <reason>
30403 Log file write failure.
30454 Failed to save configuration file <file name>
30573 Network backup failed. <reason>
30640 <Server RPC error message>
SYS_EVENT_MESSAGES
The table below shows the warning messages that can cause the server to post the event
SYS_EVENT_MESSAGES.
Table 3:
Error code Error description
30010 User '<username>' failed to connect, version mismatch. Client version <version>, server version <version>.
30011 User '<username>' failed to connect, collation version mismatch.
30012 User '<username>' failed to connect, there are too many connected clients.
30020 Server is in fatal state, no new connections are allowed.
30282 Checkpoint creation not started because shutdown is in progress.
30283 Checkpoint creation not started because it is disabled.
30300 Backup completed successfully.
Note that the server also posts a second event (SYS_EVENT_BACKUP) when it starts or completes a backup.
30301 Backup started to <directory path>.
Note that the server also posts a second event (SYS_EVENT_BACKUP) when it starts or completes a backup.
30359 Server noticed time inconsistency during at-command execution. If the system time has been changed, please restart the server.
30361 Illegal at command <command> ignored.
30362 Illegal immediate at command <command> ignored.
30405 Unable to open message log file '<file name>'.
30800 Unable to reserve requested <number> memory blocks for external sorter. Only <number> memory blocks were available. SQL: <sql statement>
30801 Unable to reserve requested <number> memory blocks for external sorter. Only <number> memory blocks were available.
HotStandby Events
For a description of events related to HotStandby, see the Solid High Availability User
Guide.
Glossary
This glossary describes the terminology used in this guide.
Client/server computing
Client/server computing divides a large piece of software into modules that need not all be
executed within the same memory space nor on the same processor. The calling module
becomes the ‘client’ that requests services, and the called module becomes the ‘server’ that
provides services. Client and server processes exchange information by sending messages
through a computer network. They may run on different hardware and software platforms as
appropriate for their special functions.
Two basic client/server architecture types are called two-tier and three-tier application architectures.
Communication protocol
A communication protocol is a set of rules and conventions used in the communication
between servers and clients. The server and client have to use the same communication protocol in order to establish a connection. TCP/IP is an example of a common communication
protocol.
Database administrator
The database administrator is a person responsible for tasks such as:
■ managing users, tables, and indexes
■ backing up data
■ allocating disk space for the database files
Database procedures
See stored procedures.
Index
An index has an entry for each key value (for example, an employee name or identification number) together with the location of the corresponding record. Indexes are used to speed up access to tables: the database engine uses indexes to access the rows in a table directly. Without indexes, the engine would have to scan the whole contents of a table to find the desired row. A single table can have more than one index; however, adding indexes slows down write operations, such as inserts, deletes, and updates, on that table. There are two kinds of indexes: non-unique indexes and unique indexes. A unique index is an index in which all key values are unique.
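For example, assuming a hypothetical EMPLOYEE table, a non-unique and a unique index could be created with the CREATE INDEX statement described in this guide:

```sql
-- Non-unique index: several employees may share a last name.
CREATE INDEX emp_name_ix ON employee (last_name);

-- Unique index: the engine rejects duplicate identification numbers.
CREATE UNIQUE INDEX emp_id_ix ON employee (id_number);
```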
Optimizer Hints
Optimizer hints (an extension of SQL) are directives specified through pseudo comments embedded within query statements. The Optimizer detects these directives, or hints, and builds its query execution plan accordingly. Optimizer hints allow applications to be tuned to the data, the query type, and the database under varying conditions. They not only provide solutions to performance problems occasionally encountered with queries, but also shift control of response times from the system to the user.
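A hint is typically attached to a SELECT statement as a pseudo comment. In the sketch below the table and index names are hypothetical, and the pseudo-comment form follows the hint syntax used by Solid SQL:

```sql
SELECT --(* vendor(SOLID), product(Engine), option(hint)
       -- INDEX emp_name_ix *)
       last_name, first_name
FROM   employee
WHERE  last_name = 'Smith';
```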
Stored procedures
Stored procedures allow programmers to split the application logic between the client and
the server. These procedures are stored in the database, and they accept parameters in the
activation call from the client application. This arrangement is used by intelligent transactions that are implemented with calls to stored procedures.
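As a minimal sketch (hypothetical procedure, table, and column names), the server-side logic is created once with CREATE PROCEDURE and then activated from the client with CALL:

```sql
CREATE PROCEDURE update_status (dept INTEGER)
BEGIN
    -- The application logic executes inside the server;
    -- the client only sends the activation call below.
    EXEC SQL PREPARE upd
        UPDATE employee SET status = 'REVIEWED' WHERE dept_no = ?;
    EXEC SQL EXECUTE upd USING (dept);
    EXEC SQL CLOSE upd;
    EXEC SQL DROP upd;
END;

-- Client side: activate the procedure with a parameter.
CALL update_status (10);
```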
Triggers
Triggers are pieces of logic that a Solid server automatically executes when a user attempts
to change the data in a table. When a user modifies data within the table, the trigger that corresponds to the command (such as insert, delete, or update) is activated.
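For instance, an update trigger can be attached to a table so that the server runs it automatically on each UPDATE. The table and column names below are hypothetical; the trigger body uses the same language as stored procedures:

```sql
CREATE TRIGGER emp_audit ON employee
AFTER UPDATE
REFERENCING NEW salary AS new_salary
BEGIN
    -- Record every salary change in an audit table.
    EXEC SQL PREPARE ins
        INSERT INTO salary_audit (new_value) VALUES (?);
    EXEC SQL EXECUTE ins USING (new_salary);
    EXEC SQL CLOSE ins;
    EXEC SQL DROP ins;
END;
```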