SQL Anywhere Server Programming
This book describes how to build and deploy database applications using the C, C++, Java, Perl, PHP, Python,
Ruby, and Microsoft .NET programming languages (such as Microsoft Visual Basic and Microsoft Visual C#). A
variety of programming interfaces such as Microsoft ADO.NET, OLE DB, and ODBC are described.
In this section:
Several data access programming interfaces provide flexibility in the kinds of applications and application
development environments you can use.
The following diagram displays the supported interfaces, and the interface libraries used. The name of the
interface library and the interface are usually the same.
The applications supplied with SQL Anywhere use several of these interfaces:
Each interface library communicates with the database server using a communication protocol.
SQL Anywhere supports two communication protocols, Command Sequence (CmdSeq) and Tabular Data
Stream (TDS). These protocols are internal, and for most purposes it does not matter which one you are using.
Your choice of development environment is governed by your available tools.
The major differences are visible when connecting to the database. Command Sequence applications and TDS
applications use different methods to identify a database and database server, and so connection parameters
are different.
Command Sequence
This protocol is used by SQL Anywhere, the SQL Anywhere JDBC driver, and the Embedded SQL, ODBC,
OLE DB, and ADO.NET APIs. Client-side data transfers are supported by the CmdSeq protocol.
TDS
This protocol is used by SAP Adaptive Server Enterprise, the jConnect JDBC driver, and SAP Open Client
applications. Client-side data transfers are not supported by the TDS protocol.
SQL can be executed from applications by using a variety of application programming interfaces such as
ADO.NET, JDBC, ODBC, Embedded SQL, OLE DB, and Open Client.
In this section:
The way you include SQL statements in your application depends on the application development tool and
programming interface you use.
ADO.NET
You can execute SQL statements using various ADO.NET objects. The SACommand object is one example:
ODBC
If you are writing directly to the ODBC programming interface, your SQL statements appear in function
calls. For example, the following C function call executes a DELETE statement:
SQLExecDirect( stmt,
"DELETE FROM Employees
WHERE EmployeeID = 105",
SQL_NTS );
JDBC
If you are using the JDBC programming interface, you can execute SQL statements by invoking methods of
the statement object. For example:
stmt.executeUpdate(
"DELETE FROM Employees
WHERE EmployeeID = 105" );
Embedded SQL
If you are using Embedded SQL, you prefix your C language SQL statements with the keyword EXEC SQL.
The code is then run through a preprocessor before compiling. For example:
EXEC SQL EXECUTE IMMEDIATE
   'DELETE FROM Employees
    WHERE EmployeeID = 105';
Open Client
If you use the SAP Open Client interface, your SQL statements appear in function calls. For example, the
following pair of calls executes a DELETE statement:
ct_command( cmd, CS_LANG_CMD,
   "DELETE FROM Employees WHERE EmployeeID = 105",
   CS_NULLTERM, CS_UNUSED );
ct_send( cmd );
For more details about including SQL in your application, see your development tool documentation. If you are
using ODBC or JDBC, consult the software development kit for those interfaces.
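Python, which is also among the supported languages, follows the same pattern through DB-API style drivers such as sqlanydb. The sketch below uses the standard library's sqlite3 module as a stand-in so it runs without a database server; the in-memory Employees table is created inline purely for illustration.

```python
import sqlite3

# Stand-in for a SQL Anywhere connection; against a real server a DB-API
# driver such as sqlanydb would be used, with the same calls afterward.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees ( EmployeeID INTEGER PRIMARY KEY, Surname TEXT )")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [(102, "Whitney"), (105, "Cobb"), (129, "Chin")])

# As in the other interfaces, the SQL statement appears as a string argument.
cur = conn.cursor()
cur.execute("DELETE FROM Employees WHERE EmployeeID = 105")
conn.commit()

print(cur.rowcount)  # number of rows deleted
```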
In many ways, stored procedures and triggers act as applications or parts of applications running inside the
database server. You can also use many of the techniques here in stored procedures.
Java classes in the database can use the JDBC interface in the same way as Java applications outside the
server.
Related Information
Each time a statement is sent to a database, the database server must perform a series of steps.
● It must parse the statement and transform it into an internal form. This process is sometimes called
preparing the statement.
● It must verify the correctness of all references to database objects by checking, for example, that columns
named in a query actually exist.
● If the statement involves joins or subqueries, then the query optimizer generates an access plan.
● It executes the statement after all these steps have been carried out.
If you use the same statement repeatedly, for example inserting many rows into a table, repeatedly preparing
the statement causes a significant and unnecessary overhead. To remove this overhead, some database
programming interfaces provide ways of using prepared statements. A prepared statement is a statement
containing a series of placeholders. When you want to execute the statement, assign values to the
placeholders, rather than prepare the entire statement over again.
Using prepared statements is useful when carrying out many similar actions, such as inserting many rows.
In general, using a prepared statement involves the following steps:
1. Prepare the statement
In this step, you generally provide the statement with some placeholder character instead of the values.
2. Repeatedly execute the prepared statement
In this step, you assign values to the placeholders each time, and then execute the statement.
3. Drop the statement
In this step, you free the resources associated with the prepared statement. Some programming interfaces
handle this step automatically.
In general, do not prepare statements if they are only executed once. There is a slight performance penalty for
separate preparation and execution, and it introduces unnecessary complexity into your application.
In some interfaces, however, you must prepare a statement to associate it with a cursor.
The calls for preparing and executing statements are not a part of SQL, and they differ from interface to
interface. Each of the application programming interfaces provides a method for using prepared statements.
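The prepare/execute/drop pattern looks similar in most interfaces. As an illustration only, here it is in Python DB-API form, using the standard library's sqlite3 module as a stand-in for a SQL Anywhere driver; the ? placeholder style and the Employees table are assumptions of this sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees ( EmployeeID INTEGER PRIMARY KEY, Surname TEXT )")

# Step 1: write the statement once, with ? placeholders instead of values.
insert_stmt = "INSERT INTO Employees ( EmployeeID, Surname ) VALUES ( ?, ? )"

# Step 2: repeatedly execute, assigning values to the placeholders each time.
cur = conn.cursor()
for row in [(102, "Whitney"), (105, "Cobb"), (129, "Chin")]:
    cur.execute(insert_stmt, row)
conn.commit()

# Step 3: free the resources associated with the statement.
cur.close()

print(conn.execute("SELECT COUNT(*) FROM Employees").fetchone()[0])  # 3
```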
In this section:
Related Information
The general procedure for using prepared statements is consistent, but the details vary from interface to
interface.
Comparing how to use prepared statements in different interfaces illustrates this point.
You typically perform the following tasks to use a prepared statement in ADO.NET:
For an example of preparing statements using ADO.NET, see the source code in
%SQLANYSAMP17%\SQLAnywhere\ADO.NET\SimpleWin32.
You typically perform the following tasks to use a prepared statement in ODBC:
For an example of preparing statements using ODBC, see the source code in
%SQLANYSAMP17%\SQLAnywhere\ODBCPrepare.
For more information about ODBC prepared statements, see the ODBC SDK documentation.
You typically perform the following tasks to use a prepared statement in JDBC:
1. Prepare the statement using the prepareStatement method of the connection object. This returns a
prepared statement object.
2. Set the statement parameters using the appropriate setType methods of the prepared statement object.
Here, Type is the data type assigned.
3. Execute the statement using the appropriate method of the prepared statement object. For inserts,
updates, and deletes this is the executeUpdate method.
You typically perform the following tasks to use a prepared statement in Embedded SQL:
You typically perform the following tasks to use a prepared statement in Open Client:
1. Prepare the statement using the ct_dynamic function, with a CS_PREPARE type parameter.
2. Set statement parameters using ct_param.
3. Execute the statement using ct_dynamic with a CS_EXECUTE type parameter.
4. Free the resources associated with the statement using ct_dynamic with a CS_DEALLOC type parameter.
Related Information
When you execute a query in an application, the result set consists of several rows. In general, you do not know
how many rows the application is going to receive before you execute the query.
Cursors provide a way of handling query result sets in applications. The way you use cursors and the kinds of
cursors available to you depend on the programming interface you use.
Several system procedures are provided to help determine what cursors are in use for a connection, and what
they contain:
In addition, some programming interfaces allow you to use special features to tune the way result sets return to
your application, providing substantial performance benefits for your application.
In this section:
Related Information
1.2.3.1 Cursors
A cursor is a name associated with a result set. The result set is obtained from a SELECT statement or stored
procedure call.
A cursor is a handle on the result set. At any time, the cursor has a well-defined position within the result set.
With a cursor, you can examine and possibly manipulate the data one row at a time. With a cursor, you can
move forward and backward through the query results.
Cursor Positions
Although server-side cursors are not required in database applications, they do provide several benefits.
Response time
Server-side cursors do not require that the whole result set be assembled before the first row is fetched by
the client. A client-side cursor requires that the entire result set be obtained and transferred to the client
before the first row is fetched by the client.
Client-side memory
For large result sets, obtaining the entire result set on the client side can lead to demanding memory
requirements.
Concurrency control
If you make updates to your data and do not use server-side cursors in your application, you must send
separate SQL statements like UPDATE, INSERT, or DELETE to the database server to apply changes. This
raises the possibility of concurrency problems if any corresponding rows in the database have changed
since the result set was delivered to the client. As a consequence, updates by other clients may be lost.
Server-side cursors can act as pointers to the underlying data, permitting you to impose proper
concurrency constraints on any changes made by the client by setting an appropriate isolation level.
The approach for using a cursor in Embedded SQL differs from the approach used in other interfaces. Follow
these general steps to use a cursor in Embedded SQL:
1. Prepare a statement.
Cursors generally use a statement handle rather than a string. Prepare a statement to have a handle
available.
2. Declare the cursor.
Each cursor refers to a single SELECT or CALL statement. When you declare a cursor, you state the name
of the cursor and the statement it refers to.
3. Open the cursor.
For a CALL statement, opening the cursor executes the procedure up to the point where the first row is
about to be obtained.
4. Fetch results.
Although simple fetch operations move the cursor to the next row in the result set, more complicated
movement around the result set is also possible. How you declare the cursor determines which fetch
operations are available to you.
5. Close the cursor.
When you have finished with the cursor, close it. This frees any resources associated with the cursor.
6. Drop the statement.
To free the memory associated with the statement, you must drop the statement.
In this section:
Related Information
When a cursor is opened, it is positioned before the first row. You can move the cursor position to an absolute
position from the start or the end of the query results, or to a position relative to the current cursor position.
The specifics of how you change cursor position, and what operations are possible, are governed by the
programming interface.
The number of row positions you can fetch in a cursor is governed by the size of an integer. You can fetch rows
numbered up to number 2147483646, which is one less than the value that can be held in an integer. When
using negative numbers (rows from the end) you can fetch down to one more than the largest negative value
that can be held in an integer.
You can use special positioned update and delete operations to update or delete the row at the current position
of the cursor. If the cursor is positioned before the first row or after the last row, an error is returned indicating
that there is no corresponding cursor row.
Note
Inserts and some updates to asensitive cursors can cause problems with cursor positioning. Inserted rows
are not put at a predictable position within a cursor unless there is an ORDER BY clause on the SELECT
statement. The UPDATE statement may cause a row to move in the cursor. This happens if the cursor has
an ORDER BY clause that uses an existing index (a work table is not created). Using STATIC SCROLL
cursors alleviates these problems but requires more memory and processing.
Isolation level
You can explicitly set the isolation level of operations on a cursor to be different from the current isolation
level of the transaction. To do this, set the isolation_level option.
Duration
By default, cursors in Embedded SQL close at the end of a transaction. Opening a cursor WITH HOLD
allows you to keep it open until the end of a connection, or until you explicitly close it. ADO.NET, ODBC,
JDBC, and Open Client leave cursors open at the end of transactions by default.
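For example, the isolation level for subsequent cursor operations on the current connection can be raised with the documented option syntax (a sketch; level 3 is chosen arbitrarily):

```sql
SET TEMPORARY OPTION isolation_level = 3;
```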
1. Declare and open the cursor (Embedded SQL), or execute a statement that returns a result set (ODBC,
JDBC, Open Client) or SADataReader object (ADO.NET).
2. Continue to fetch the next row until you get a Row Not Found error.
3. Close the cursor.
The technique used to fetch the next row is dependent on the interface you use. For example:
ADO.NET
The Read method of the SADataReader object advances the cursor to the next row and returns the data.
ODBC
SQLFetch, SQLExtendedFetch, or SQLFetchScroll advances the cursor to the next row and returns the
data.
JDBC
The next method of the ResultSet object advances the cursor and returns the data.
Embedded SQL
The FETCH statement advances the cursor to the next row and returns the data.
Open Client
The ct_fetch function advances the cursor to the next row and returns the data.
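The loop has the same shape in every interface. Here is a sketch in Python DB-API terms (stdlib sqlite3 as a stand-in for a SQL Anywhere driver); in DB-API the end of the result set is signaled by fetchone() returning None rather than by a Row Not Found code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees ( EmployeeID INTEGER PRIMARY KEY, Surname TEXT )")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [(102, "Whitney"), (105, "Cobb"), (129, "Chin")])

# 1. Execute a statement that returns a result set.
cur = conn.cursor()
cur.execute("SELECT EmployeeID, Surname FROM Employees ORDER BY EmployeeID")

# 2. Continue to fetch the next row until there are no more rows.
rows = []
while True:
    row = cur.fetchone()
    if row is None:  # DB-API equivalent of the Row Not Found condition
        break
    rows.append(row)

# 3. Close the cursor.
cur.close()

print(rows)  # [(102, 'Whitney'), (105, 'Cobb'), (129, 'Chin')]
```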
Multiple-row fetching should not be confused with prefetching rows. Multiple row fetching is performed by the
application, while prefetching is transparent to the application, and provides a similar performance gain.
Some interfaces provide methods for fetching more than one row at a time into the next several fields in an
array. Generally, the fewer separate fetch operations you execute, the fewer individual requests the server must
respond to, and the better the performance. A modified FETCH statement that retrieves multiple rows is also
sometimes called a wide fetch. Cursors that use multiple-row fetches are sometimes called block cursors or
fat cursors.
● In ODBC, you can set the number of rows that are returned on each call to SQLFetchScroll or
SQLExtendedFetch by setting the SQL_ATTR_ROW_ARRAY_SIZE or SQL_ROWSET_SIZE attribute.
● In Embedded SQL, the FETCH statement uses an ARRAY clause to control the number of rows fetched at a
time.
● Open Client and JDBC do not support multi-row fetches, but they do use prefetching.
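In DB-API style interfaces the analogous call is fetchmany, which returns up to a chosen number of rows per call. A sketch with sqlite3 as a stand-in (the array size of 4 is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T ( n INTEGER PRIMARY KEY )")
conn.executemany("INSERT INTO T VALUES (?)", [(i,) for i in range(10)])

cur = conn.cursor()
cur.execute("SELECT n FROM T ORDER BY n")

# Fetch several rows per call, reducing the number of separate fetch operations.
batches = []
while True:
    batch = cur.fetchmany(4)  # up to 4 rows at a time, like a wide fetch
    if not batch:
        break
    batches.append(len(batch))

print(batches)  # [4, 4, 2]
```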
ODBC and Embedded SQL provide methods for using scrollable cursors and dynamic scrollable cursors. These
methods allow you to move several rows forward at a time, or to move backward through the result set.
The JDBC and Open Client interfaces do not support scrollable cursors.
Prefetching does not apply to scrollable operations. For example, fetching a row in the reverse direction does
not prefetch several previous rows.
Cursors can do more than just read result sets from a query. You can also modify data in the database while
processing a cursor.
These operations are commonly called positioned insert, update, and delete operations, or PUT operations if
the action is an insert.
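In SQL, positioned updates and deletes refer to the cursor's current row with a WHERE CURRENT OF clause. A sketch (the cursor name emp_cursor and the column values are hypothetical):

```sql
UPDATE Employees SET Surname = 'Jones'
WHERE CURRENT OF emp_cursor;

DELETE FROM Employees
WHERE CURRENT OF emp_cursor;
```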
Not all query result sets allow positioned updates and deletes. If you perform a query on a non-updatable view,
then no changes occur to the underlying tables. Also, if the query involves a join, then you must specify which
table you want to delete from, or which columns you want to update, when you perform the operations.
Inserts through a cursor can only be executed if any non-inserted columns in the table allow NULL or have
defaults.
If multiple rows are inserted into a value-sensitive (keyset-driven) cursor, they appear at the end of the cursor
result set. The rows appear at the end, even if they do not match the WHERE clause of the query or if an
ORDER BY clause would normally have placed them at another location in the result set. This behavior is
independent of programming interface. For example, it applies when using the Embedded SQL PUT statement
or the ODBC SQLBulkOperations function. The value of an AUTOINCREMENT column for the most recent row
inserted can be found by selecting the last row in the cursor. For example, in Embedded SQL the value could be
obtained using FETCH ABSOLUTE -1 cursor-name. As a result of this behavior, the first multiple-row insert
for a value-sensitive cursor may be expensive.
ODBC, JDBC, Embedded SQL, and Open Client permit data manipulation using cursors, but ADO.NET does
not. With Open Client, you can delete and update rows, but you can only insert rows on a single-table query.
If you attempt a positioned delete through a cursor, the table from which rows are deleted is determined as
follows:
1. If no FROM clause is included in the DELETE statement, the cursor must be on a single table only.
2. If the cursor is for a joined query (including using a view containing a join), then the FROM clause must be
used. Only the current row of the specified table is deleted. The other tables involved in the join are not
affected.
3. If a FROM clause is included, and no table owner is specified, the table-spec value is first matched against
any correlation names.
4. If a correlation name exists, the table-spec value is identified with the correlation name.
5. If a correlation name does not exist, the table-spec value must be unambiguously identifiable as a table
name in the cursor.
6. If a FROM clause is included, and a table owner is specified, the table-spec value must be unambiguously
identifiable as a table name in the cursor.
7. The positioned DELETE statement can be used on a cursor open on a view as long as the view is updatable.
Clauses in the SELECT statement can affect updatable statements and cursors in various ways.
Specifying FOR READ ONLY in the cursor declaration, or including a FOR READ ONLY clause in the statement,
renders the statement read-only. In other words, a FOR READ ONLY clause, or the appropriate read-only cursor
declaration when using a client API, overrides any other updatability specification.
If the outermost block of a SELECT statement contains an ORDER BY clause, and the statement does not
specify FOR UPDATE, then the cursor is read-only. If the SQL SELECT statement specifies FOR XML, then the
cursor is read-only. Otherwise, the cursor is updatable.
For updatable statements, both optimistic and pessimistic concurrency control mechanisms on cursors are
provided to ensure that a result set stays consistent during scrolling operations. These mechanisms are
alternatives to using INSENSITIVE cursors or snapshot isolation, although they have different semantics and
tradeoffs.
The specification of FOR UPDATE can affect whether a cursor is updatable. However, the FOR UPDATE syntax
has no other effect on concurrency control. If FOR UPDATE is specified with additional parameters, the
processing of the statement is altered to incorporate one of two concurrency control options as follows:
Pessimistic
For all rows fetched in the cursor's result set, the database server acquires intent row locks to prevent the
rows from being updated by any other transaction.
Optimistic
The cursor type used by the database server is changed to a keyset-driven cursor (insensitive row
membership, value-sensitive) so that the application can be informed when a row in the result has been
modified or deleted by this, or any other transaction.
Pessimistic or optimistic concurrency is specified at the cursor level either through options with DECLARE
CURSOR or FOR statements, or through the concurrency-setting API for a specific programming interface. If a
statement is updatable and the cursor does not specify a concurrency control mechanism, the statement's
specification is used. The syntax is as follows:
FOR UPDATE BY LOCK
The database server acquires intent row locks on fetched rows of the result set. These are long-term locks
that are held until transaction COMMIT or ROLLBACK.
FOR UPDATE BY { VALUES | TIMESTAMP }
The database server uses a keyset-driven cursor to enable the application to be informed when rows have
been modified or deleted as the result set is scrolled.
FOR UPDATE ( column-list ) enforces the restriction that only named result set attributes can be modified in
a subsequent UPDATE WHERE CURRENT OF statement.
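For example (table and column names are illustrative):

```sql
-- Pessimistic: acquire intent row locks on fetched rows.
SELECT EmployeeID, Surname FROM Employees
FOR UPDATE BY LOCK;

-- Optimistic: use a keyset-driven cursor so changes can be detected.
SELECT EmployeeID, Surname FROM Employees
FOR UPDATE BY VALUES;

-- Restrict which columns a later UPDATE WHERE CURRENT OF may modify.
SELECT EmployeeID, Surname FROM Employees
FOR UPDATE ( Surname );
```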
If you cancel a request that is carrying out a cursor operation, the position of the cursor is indeterminate. After
canceling the request, you must locate the cursor by its absolute position, or close it.
The various programming interfaces do not support all aspects of database cursors, and the terminology may
differ. The mappings between cursors and the options available for each programming interface are explained
in the following material.
In this section:
Related Information
You request a cursor type, either explicitly or implicitly, from the programming interface. Different interface
libraries offer different choices of cursor types.
Uniqueness
Declaring a cursor to be unique forces the query to return all the columns required to uniquely identify
each row. Often this means returning all the columns in the primary key. Any columns required but not
specified are added to the result set. The default cursor type is non-unique.
Updatability
A cursor declared as read-only cannot be used in a positioned update or delete operation. The default
cursor type is updatable.
Scrollability
You can declare cursors to behave in different ways as you move through the result set. Some cursors can
fetch only the current row or the following row. Others can move backward and forward through the result
set.
Sensitivity
Changes to the underlying data may or may not be visible through a cursor.
Cursors with a variety of mixes of these characteristics are possible. When you request a cursor of a given type,
a match to those characteristics is attempted.
There are some occasions when not all characteristics can be supplied. For example, insensitive cursors must
be read-only. If your application requests an updatable insensitive cursor, a different cursor type (value-
sensitive) is supplied instead.
Bookmarks for value-sensitive and insensitive cursors are supported. For example, the ODBC cursor types
SQL_CURSOR_STATIC and SQL_CURSOR_KEYSET_DRIVEN support bookmarks while cursor types
SQL_CURSOR_DYNAMIC and SQL_CURSOR_FORWARD_ONLY do not.
ODBC provides a cursor type called a block cursor. When you use a block cursor, you can use SQLFetchScroll
or SQLExtendedFetch to fetch a block of rows, rather than a single row.
Any cursor, once opened, has an associated result set. The cursor is kept open for a length of time. During that
time, the result set associated with the cursor may be changed, either through the cursor itself or, subject to
isolation level requirements, by other transactions.
Some cursors permit changes to the underlying data to be visible, while others do not reflect these changes. A
sensitivity to changes to the underlying data causes different cursor behavior, or cursor sensitivity.
Changes to the underlying data can affect the result set of a cursor in the following ways:
Membership
The set of rows in the result set, as identified by their primary key values.
Order
The order of the rows in the result set.
Values
The values of the rows in the result set.
For example, consider the following simple table with employee information (EmployeeID is the primary key
column):
EmployeeID Surname
1 Whitney
2 Cobb
3 Chin
A cursor on the following query returns all results from the table in primary key order:
The membership of the result set could be changed by adding a new row or deleting a row. The values could be
changed by changing one of the names in the table. The order could be changed by changing the primary key
value of one of the employees.
Subject to isolation level requirements, the membership, order, and values of the result set of a cursor can be
changed after the cursor is opened. Depending on the type of cursor in use, the result set as seen by the
application may or may not change to reflect these changes.
Changes to the underlying data may be visible or invisible through the cursor. A visible change is a change that
is reflected in the result set of the cursor. Changes to the underlying data that are not reflected in the result set
seen by the cursor are invisible.
In this section:
Related Information
Cursors are classified by their sensitivity to changes in the underlying data. In other words, cursor sensitivity is
defined by the changes that are visible.
Insensitive cursors
The result set is fixed when the cursor is opened. No changes to the underlying data are visible.
Sensitive cursors
The result set can change after the cursor is opened. All changes to the underlying data are visible.
Asensitive cursors
Changes may be reflected in the membership, order, or values of the result set seen through the cursor, or
may not be reflected at all.
Value-sensitive cursors
Changes to the order or values of the underlying data are visible. The membership of the result set is fixed
when the cursor is opened.
The differing requirements on cursors place different constraints on execution and therefore affect
performance.
This example uses a simple query to illustrate how different cursors respond to a row in the result set being
deleted.
1. An application opens a cursor on the following query against the sample database:
   SELECT EmployeeID, Surname FROM Employees ORDER BY EmployeeID;
EmployeeID Surname
102 Whitney
105 Cobb
129 Chin
... ...
2. The application fetches the first row through the cursor (102).
3. The application fetches the next row through the cursor (105).
4. A separate transaction deletes employee 102 (Whitney) and commits the change.
The results of cursor actions in this situation depend on the cursor sensitivity:
Insensitive cursors
The DELETE is not reflected in either the membership or values of the results as seen through the cursor:
● Fetch previous row: Returns the original copy of the row (102).
● Fetch the first row (absolute fetch): Returns the original copy of the row (102).
● Fetch the second row (absolute fetch): Returns the unchanged row (105).
Sensitive cursors
The DELETE is reflected in the membership of the result set as seen through the cursor:
● Fetch previous row: Returns Row Not Found. There is no previous row.
Value-sensitive cursors
The membership of the result set is fixed, and so row 105 is still the second row of the result set. The
DELETE is reflected in the values of the cursor, and creates an effective hole in the result set.
● Fetch the first row (absolute fetch): Returns No current row of cursor. There is a hole in the cursor
where the first row used to be.
Asensitive cursors
For changes, the membership and values of the result set are indeterminate. The response to a fetch of the
previous row, the first row, or the second row depends on the particular optimization method for the query,
whether that method involved the formation of a work table, and whether the row being fetched was
prefetched from the client.
The benefit of asensitive cursors is that for many applications, sensitivity is unimportant. In particular, if
you are using a forward-only, read-only cursor, no underlying changes are seen. Also, if you are running at a
high isolation level, underlying changes are disallowed.
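The behavioral differences can be mimicked outside any database server: an insensitive cursor behaves like copying the whole result at open time, a sensitive cursor like re-reading the data at each fetch, and a value-sensitive (keyset) cursor like fixing the row keys at open time and re-reading values by key. A sketch of the delete scenario, using sqlite3 in place of the sample database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees ( EmployeeID INTEGER PRIMARY KEY, Surname TEXT )")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [(102, "Whitney"), (105, "Cobb"), (129, "Chin")])

# Insensitive: the result set is fixed (materialized) when the cursor is opened.
insensitive = conn.execute(
    "SELECT EmployeeID, Surname FROM Employees ORDER BY EmployeeID").fetchall()

# Value-sensitive (keyset): membership is fixed as a list of primary keys.
keyset = [r[0] for r in conn.execute(
    "SELECT EmployeeID FROM Employees ORDER BY EmployeeID")]

# A separate transaction deletes employee 102 and commits the change.
conn.execute("DELETE FROM Employees WHERE EmployeeID = 102")
conn.commit()

# Insensitive: the first row is still the original copy of row 102.
print(insensitive[0])    # (102, 'Whitney')

# Sensitive: re-reads the data, so 105 is now the first row.
sensitive_first = conn.execute(
    "SELECT EmployeeID, Surname FROM Employees ORDER BY EmployeeID").fetchone()
print(sensitive_first)   # (105, 'Cobb')

# Value-sensitive: key 102 is still in the membership, but its row is a "hole".
hole = conn.execute("SELECT Surname FROM Employees WHERE EmployeeID = ?",
                    (keyset[0],)).fetchone()
print(hole)              # None, analogous to "No current row of cursor"
```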
This example uses a simple query to illustrate how different cursor types respond to a row in the result set
being updated in such a way that the order of the result set is changed.
1. An application opens a cursor on the following query against the sample database:
   SELECT EmployeeID, Surname FROM Employees ORDER BY EmployeeID;
EmployeeID Surname
102 Whitney
105 Cobb
129 Chin
... ...
2. The application fetches the first row through the cursor (102).
3. The application fetches the next row through the cursor (105).
4. A separate transaction updates the employee ID of employee 102 (Whitney) to 165 and commits the
change.
The results of the cursor actions in this situation depend on the cursor sensitivity:
Insensitive cursors
The UPDATE is not reflected in either the membership or values of the results as seen through the cursor:
● Fetch previous row: Returns the original copy of the row (102).
● Fetch the first row (absolute fetch): Returns the original copy of the row (102).
● Fetch the second row (absolute fetch): Returns the unchanged row (105).
Sensitive cursors
The membership of the result set has changed so that row 105 is now the first row in the result set:
● Fetch previous row: Returns SQLCODE 100. The membership of the result set has changed so that 105 is
now the first row. The cursor is moved to the position before the first row.
In addition, a fetch on a sensitive cursor returns a SQLE_ROW_UPDATED_WARNING warning if the row has
changed since the last reading. The warning is given only once. Subsequent fetches of the same row do not
produce the warning.
Similarly, a positioned update or delete through the cursor on a row since it was last fetched returns the
SQLE_ROW_UPDATED_SINCE_READ error. An application must fetch the row again for an update or delete
on a sensitive cursor to work.
An update to any column causes the warning/error, even if the column is not referenced by the cursor. For
example, a cursor on a query returning Surname would report the update even if only the Salary column
was modified.
Value-sensitive cursors
The membership of the result set is fixed, and so row 105 is still the second row of the result set. The
UPDATE is reflected in the values of the cursor, and creates an effective "hole" in the result set.
● Fetch previous row: Returns SQLCODE 100. The cursor is positioned on the hole: it is before row 105.
● Fetch the first row (absolute fetch): Returns SQLCODE -197. The cursor is positioned on the hole: it is
before row 105.
Asensitive cursors
For changes, the membership and values of the result set are indeterminate. The response to a fetch of the
previous row, the first row, or the second row depends on the particular optimization method for the query,
whether that method involved the formation of a work table, and whether the row being fetched was
prefetched from the client.
Note
Update warning and error conditions do not occur in bulk operations mode (-b database server option).
These cursors have insensitive membership, order, and values. No changes made after cursor open time are
visible.
Standards
Insensitive cursors correspond to the ISO/ANSI standard definition of insensitive cursors, and to ODBC static
cursors.
Programming Interfaces
JDBC: INSENSITIVE
Insensitive cursors always return rows that match the query's selection criteria, in the order specified by any
ORDER BY clause.
The result set of an insensitive cursor is fully materialized as a work table when the cursor is opened. This has
the following consequences:
● If the result set is very large, the disk space and memory requirements for managing the result set may be
significant.
● No row is returned to the application before the entire result set is assembled as a work table. For complex
queries, this may lead to a delay before the first row is returned to the application.
● Subsequent rows can be fetched directly from the work table, and so are returned quickly. The client
library may prefetch several rows at a time, further improving performance.
● Insensitive cursors are not affected by ROLLBACK or ROLLBACK TO SAVEPOINT.
Standards
Sensitive cursors correspond to the ISO/ANSI standard definition of sensitive cursors, and to ODBC dynamic
cursors.
Programming Interfaces
Prefetching is disabled for sensitive cursors. All changes are visible through the cursor, including changes
through the cursor and from other transactions. Higher isolation levels may hide some changes made in other
transactions because of locking.
Changes to cursor membership, order, and all column values are all visible. For example, if a sensitive cursor
contains a join, and one of the values of one of the underlying tables is modified, then all result rows composed
from that base row show the new value. Result set membership and order may change at each fetch.
Sensitive cursors always return rows that match the query's selection criteria, and are in the order specified by
any ORDER BY clause. Updates may affect the membership, order, and values of the result set.
The requirements of sensitive cursors place restrictions on their implementation:
● Rows cannot be prefetched, as changes to the prefetched rows would not be visible through the cursor.
This may impact performance.
● Sensitive cursors must be implemented without any work tables being constructed, as changes to those
rows stored as work tables would not be visible through the cursor.
● The no work table limitation restricts the choice of join method by the optimizer and therefore may impact
performance.
● For some queries, the optimizer is unable to construct a plan that does not include a work table, which makes a sensitive cursor impossible.
Work tables are commonly used for sorting and grouping intermediate results. A work table is not needed
for sorting if the rows can be accessed through an index. It is not possible to state exactly which queries
employ work tables, but the following queries do employ them:
○ UNION queries, although UNION ALL queries do not necessarily use work tables.
○ Statements with an ORDER BY clause, if there is no index on the ORDER BY column.
○ Any query that is optimized using a hash join.
○ Many queries involving DISTINCT or GROUP BY clauses.
In these cases, either an error is returned to the application, or the cursor type is changed to an asensitive
cursor and a warning is returned.
These cursors do not have well-defined sensitivity in their membership, order, or values. The flexibility that is
allowed in the sensitivity permits asensitive cursors to be optimized for performance.
Standards
Asensitive cursors correspond to the ISO/ANSI standard definition of asensitive cursors, and to ODBC cursors
with unspecific sensitivity.
Description
A request for an asensitive cursor places few restrictions on the methods used to optimize the query and
return rows to the application. For these reasons, asensitive cursors provide the best performance. In
particular, the optimizer is free to employ any measure of materialization of intermediate results as work
tables, and rows can be prefetched by the client.
There are no guarantees about the visibility of changes to base underlying rows. Some changes may be visible,
others not. Membership and order may change at each fetch. In particular, updates to base rows may result in
only some of the updated columns being reflected in the cursor's result.
Asensitive cursors do not guarantee to return rows that match the query's selection and order. The row
membership is fixed at cursor open time, but subsequent changes to the underlying values are reflected in the
results.
Asensitive cursors always return rows that matched the query's WHERE and ORDER BY clauses at the time
the cursor membership is established. If column values change after the cursor is opened, rows may be
returned that no longer match WHERE and ORDER BY clauses.
For value-sensitive cursors, membership is insensitive, and the order and value of the result set is sensitive.
Standards
Value-sensitive cursors do not correspond to an ISO/ANSI standard definition. They correspond to ODBC
keyset-driven cursors.
JDBC INSENSITIVE and CONCUR_UPDATABLE: With the SQL Anywhere JDBC driver, a request for an updatable INSENSITIVE cursor is answered with a value-sensitive cursor.
Description
If the application fetches a row composed of a base underlying row that has changed, then the application
must be presented with the updated value, and the SQL_ROW_UPDATED status must be issued to the
application. If the application attempts to fetch a row that was composed of a base underlying row that was
deleted, a SQL_ROW_DELETED status must be issued to the application.
Changes to primary key values remove the row from the result set (treated as a delete, followed by an insert). A
special case occurs when a row in the result set is deleted (either from cursor or outside) and a new row with
the same key value is inserted. This will result in the new row replacing the old row where it appeared.
There is no guarantee that rows in the result set match the query's selection or order specification. Since row
membership is fixed at open time, subsequent changes that cause a row to no longer match the WHERE clause or
ORDER BY specification do not change the row's membership or position.
All values are sensitive to changes made through the cursor. The sensitivity of membership to changes made
through the cursor is controlled by the ODBC option SQL_STATIC_SENSITIVITY. If this option is on, then
inserts through the cursor add the row to the cursor. Otherwise, they are not part of the result set. Deletes
through the cursor remove the row from the result set, rather than leaving a hole that returns the
SQL_ROW_DELETED status.
Value-sensitive cursors use a key set table. When the cursor is opened, a work table is populated with
identifying information for each row contributing to the result set. When scrolling through the result set, the key
set table is used to identify the membership of the result set, but values are obtained, if necessary, from the
underlying tables.
The fixed membership property of value-sensitive cursors allows your application to remember row positions
within a cursor and be assured that these positions will not change.
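The key set mechanism can be sketched as follows. sqlite3 stands in for SQL Anywhere, the key_set list plays the role of the work table of identifying information, and fetch_row is a hypothetical helper modeling one fetch through the cursor.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER PRIMARY KEY, Quantity INTEGER)")
conn.executemany("INSERT INTO Products VALUES (?, ?)",
                 [(300, 28), (301, 54), (302, 75)])

# At open time, only the identifying key of each contributing row is
# materialized (the key set table); membership and order are now fixed.
key_set = [row[0] for row in conn.execute("SELECT ID FROM Products ORDER BY ID")]

def fetch_row(position):
    # Values are obtained from the underlying table at fetch time,
    # so updates to non-key columns remain visible.
    key = key_set[position]
    return conn.execute(
        "SELECT ID, Quantity FROM Products WHERE ID = ?", (key,)).fetchone()

conn.execute("UPDATE Products SET Quantity = 18 WHERE ID = 300")
updated_row = fetch_row(0)    # the value change is visible

conn.execute("DELETE FROM Products WHERE ID = 301")
deleted_row = fetch_row(1)    # nothing found: the SQL_ROW_DELETED analog

conn.execute("INSERT INTO Products VALUES (299, 10)")
membership = len(key_set)     # membership is unchanged by the insert
```

This shows why value-sensitive cursors have insensitive membership but sensitive values: the key set freezes which rows belong, while the values come fresh from the base tables.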
● If a row was updated or may have been updated since the cursor was opened, a
SQLE_ROW_UPDATED_WARNING is returned when the row is fetched. The warning is generated only
once: fetching the same row again does not produce the warning.
An update to any column of the row causes the warning, even if the updated column is not referenced by
the cursor. For example, a cursor on Surname and GivenName would report the update even if only the
Birthdate column was modified. These update warning and error conditions do not occur in bulk
operations mode (-b database server option) when row locking is disabled.
Rows cannot be prefetched for value-sensitive cursors. This requirement may affect performance.
When inserting multiple rows through a value-sensitive cursor, the new rows appear at the end of the result set.
Related Information
There is a trade-off between performance and other cursor properties. In particular, making a cursor updatable
places restrictions on the cursor query processing and delivery that constrain performance. Also, putting
requirements on cursor sensitivity may constrain cursor performance.
To understand how the updatability and sensitivity of cursors affects performance, you must understand how
the results that are visible through a cursor are transmitted from the database to the client application.
In particular, results may be stored at two intermediate locations for performance reasons:
Work tables
Either intermediate or final results may be stored as work tables. Value-sensitive cursors employ a work
table of primary key values. Query characteristics may also lead the optimizer to use work tables in its
chosen execution plan.
Prefetching
The client side of the communication may retrieve rows into a buffer on the client side to avoid separate
requests to the database server for each row.
In this section:
1.2.6.8.1 Prefetches
Prefetching assists performance by cutting down on client/server round trips, and increases throughput by
making many rows available without a separate request to the server for each row or block of rows.
Prefetches and multiple-row fetches are different. Prefetches can be carried out without explicit instructions
from the client application. Prefetching retrieves rows from the server into a buffer on the client side, but does
not make those rows available to the client application until the application fetches the appropriate row.
By default, the client library prefetches multiple rows whenever an application fetches a single row. The client
library stores the additional rows in a buffer.
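The effect of prefetching on client/server round trips can be sketched as follows. The fetch_next helper is hypothetical and models the client library's buffer; sqlite3 stands in for the database server.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (n INTEGER)")
conn.executemany("INSERT INTO T VALUES (?)", [(i,) for i in range(10)])

server_round_trips = 0
buffer = []
cursor = conn.execute("SELECT n FROM T ORDER BY n")

def fetch_next(block_size=5):
    # The client library fills its buffer with block_size rows per
    # server request; most application fetches are then served from
    # the buffer without contacting the server at all.
    global server_round_trips
    if not buffer:
        server_round_trips += 1
        buffer.extend(cursor.fetchmany(block_size))
    return buffer.pop(0)

rows = [fetch_next() for _ in range(10)]
print(server_round_trips)   # 2 server requests instead of 10
```

The application still fetches one row at a time; only the number of server requests changes, which is the throughput benefit the text describes.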
● The prefetch option controls whether prefetching occurs. You can set the prefetch option to Always,
Conditional, or Off for a single connection. By default, it is set to Conditional.
● In Embedded SQL, you can control prefetching on a per-cursor basis when you open a cursor, or on an
individual FETCH operation, using the BLOCK clause.
The application can specify a maximum number of rows contained in a single fetch from the server by
specifying the BLOCK clause. For example, if you are fetching and displaying 5 rows at a time, you could
use BLOCK 5. Specifying BLOCK 0 fetches 1 record at a time and also causes a FETCH RELATIVE 0 to
always fetch the row from the server again.
Although you can also turn off prefetch by setting a connection parameter on the application, it is more
efficient to specify BLOCK 0 than to set the prefetch option to Off.
Prefetch dynamically increases the number of prefetch rows when improvements in performance could be
achieved. This includes all cursors that meet the following conditions:
● They perform only FETCH NEXT operations (no absolute, relative, or backward fetching).
● The application does not change the host variable type between fetches and does not use a GET DATA
statement to get column data in chunks (using one GET DATA statement to get the value is supported).
A lost update is a scenario in which two or more transactions update the same row, but neither transaction is
aware of the modification made by the other transaction, and the second change overwrites the first
modification.
1. An application opens a cursor on the following query against the sample database.
ID Quantity
300 28
301 54
302 75
... ...
2. The application fetches the row with ID = 300 through the cursor.
3. A separate transaction updates the row using the following statement:
UPDATE Products
SET Quantity = Quantity - 10
WHERE ID = 300;
4. The application then updates the row through the cursor to a value of (Quantity - 5).
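The four steps above can be simulated in Python. sqlite3 stands in for the sample database, and a single connection plays both transactions, so this models only the read/write ordering, not real concurrency or locking.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER PRIMARY KEY, Quantity INTEGER)")
conn.execute("INSERT INTO Products VALUES (300, 28)")

# Step 2: the application reads Quantity = 28 through its cursor.
(quantity,) = conn.execute(
    "SELECT Quantity FROM Products WHERE ID = 300").fetchone()

# Step 3: a separate transaction changes the row (28 -> 18).
conn.execute("UPDATE Products SET Quantity = Quantity - 10 WHERE ID = 300")

# Step 4: the application writes back its stale value minus 5,
# silently discarding the other transaction's change.
conn.execute("UPDATE Products SET Quantity = ? WHERE ID = 300",
             (quantity - 5,))

(final,) = conn.execute(
    "SELECT Quantity FROM Products WHERE ID = 300").fetchone()
print(final)   # 23, not the 13 both changes together would produce
```

The other transaction's decrement of 10 is lost: 28 - 10 - 5 should leave 13, but the final value is 23.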
In a database application, the potential for a lost update exists at any isolation level if changes are made to rows
without verification of their values beforehand. At higher isolation levels (2 and 3), locking (read, intent, and
write locks) can be used to ensure that changes to rows cannot be made by another transaction once the row
has been read by the application. However, at isolation levels 0 and 1, the potential for lost updates is greater: at
isolation level 0, read locks are not acquired to prevent subsequent changes to the data, and isolation level 1
only locks the current row. Lost updates cannot occur when using snapshot isolation since any attempt to
change an old value results in an update conflict. However, prefetching at isolation level 1 can itself
introduce the potential for lost updates, since the result set row that the application is positioned on, which is in
the client's prefetch buffer, may not be the same as the current row that the server is positioned on in the
cursor.
To prevent lost updates from occurring with cursors at isolation level 1, the database server supports three
different concurrency control mechanisms that can be specified by an application:
1. The acquisition of intent row locks on each row in the cursor as it is fetched. Intent locks prevent other
transactions from acquiring intent or write locks on the same row, preventing simultaneous updates.
However, intent locks do not block read row locks, so they do not affect the concurrency of read-only
statements.
2. The use of a value-sensitive cursor. Value-sensitive cursors can be used to track when an underlying row
has changed, or has been deleted, so that the application can respond.
3. The use of FETCH FOR UPDATE, which acquires an intent row lock for that specific row.
How these alternatives are specified depends on the interface used by the application. For the first two
alternatives that pertain to a SELECT statement:
● In ODBC, lost updates cannot occur because the application must specify a cursor concurrency parameter
to the SQLSetStmtAttr function when declaring an updatable cursor. This parameter is one of
SQL_CONCUR_LOCK, SQL_CONCUR_VALUES, SQL_CONCUR_READ_ONLY, or
SQL_CONCUR_TIMESTAMP. For SQL_CONCUR_LOCK, the database server acquires row intent locks. For
SQL_CONCUR_VALUES and SQL_CONCUR_TIMESTAMP, a value-sensitive cursor is used.
SQL_CONCUR_READ_ONLY is used for read-only cursors, and is the default.
● In JDBC, the concurrency setting for a statement is similar to that of ODBC. The JDBC driver supports the
JDBC concurrency values RESULTSET_CONCUR_READ_ONLY and RESULTSET_CONCUR_UPDATABLE.
The first value corresponds to the ODBC concurrency setting SQL_CONCUR_READ_ONLY and specifies a
read-only statement. The second value corresponds to the ODBC SQL_CONCUR_LOCK setting, so row
intent locks are used to prevent lost updates. Value-sensitive cursors cannot be specified directly in the
JDBC driver.
● In jConnect, updatable cursors are supported at the API level, but the underlying implementation (using
TDS) does not support updates through a cursor. Instead, jConnect sends a separate UPDATE statement
to the database server to update the specific row. To avoid lost updates, the application must run at
isolation level 2 or higher. Alternatively, the application can issue separate UPDATE statements from the
cursor, but you must ensure that the UPDATE statement verifies that the row values have not been altered
since the row was read by placing appropriate conditions in the UPDATE statement's WHERE clause.
● In Embedded SQL, a concurrency specification can be set by including syntax within the SELECT
statement itself, or in the cursor declaration. In the SELECT statement, the syntax SELECT...FOR UPDATE
BY LOCK causes the database server to acquire intent row locks on the result set.
Alternatively, SELECT...FOR UPDATE BY [ VALUES | TIMESTAMP ] causes the database server to change
the cursor type to a value-sensitive cursor, so that if a specific row has been changed since the row was last
read through the cursor, the application receives a warning (SQLE_ROW_UPDATED_WARNING) when the
row is fetched.
FETCH FOR UPDATE functionality is also supported by the Embedded SQL and ODBC interfaces, although the
details differ depending on the API that is used.
In Embedded SQL, the application uses FETCH FOR UPDATE, rather than FETCH, to cause an intent lock to be
acquired on the row. In ODBC, the application uses the API call SQLSetPos with the operation argument
SQL_POSITION or SQL_REFRESH, and the lock type argument SQL_LOCK_EXCLUSIVE, to acquire an intent
lock on a row. These are long-term locks that are held until the transaction is committed or rolled back.
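The verification technique described above for jConnect, placing the previously read values in the UPDATE statement's WHERE clause, can be sketched as follows (sqlite3 stands in for the database server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER PRIMARY KEY, Quantity INTEGER)")
conn.execute("INSERT INTO Products VALUES (300, 28)")

# The application remembers the value it read through the cursor...
(quantity,) = conn.execute(
    "SELECT Quantity FROM Products WHERE ID = 300").fetchone()

# ...another transaction changes the row in the meantime...
conn.execute("UPDATE Products SET Quantity = Quantity - 10 WHERE ID = 300")

# ...so an UPDATE that verifies the old value in its WHERE clause
# matches no rows: the lost update is detected instead of silent.
updated = conn.execute(
    "UPDATE Products SET Quantity = ? WHERE ID = 300 AND Quantity = ?",
    (quantity - 5, quantity)).rowcount
print(updated)   # 0 rows affected: re-read the row and retry
```

When the update count is zero, the application knows the row changed since it was read and can re-read and retry rather than overwrite the other transaction's work.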
Both cursor sensitivity and isolation levels address the problem of concurrency control, but in different ways,
and with different sets of tradeoffs.
By choosing an isolation level for a transaction (typically at the connection level), you determine the type of
locks that are placed on rows in the database, and when. Locks prevent other transactions from accessing or
modifying rows in the database. In general, the greater the number of locks held, the lower the expected level of
concurrency across concurrent transactions.
However, locks do not prevent updates from other portions of the same transaction from occurring. So, a single
transaction that maintains multiple updatable cursors cannot rely on locking to prevent such problems as lost
updates.
Snapshot isolation is intended to eliminate the need for read locks by ensuring that each transaction sees a
consistent view of the database. The obvious advantage is that a consistent view of the database can be
queried without relying on fully serializable transactions (isolation level 3), and the loss of concurrency that
comes with using isolation level 3. However, snapshot isolation comes with a significant cost because copies of
modified rows must be maintained to satisfy the requirements of both concurrent snapshot transactions
already executing, and snapshot transactions that have yet to start. Because of this copy maintenance, the use
of snapshot isolation may be inappropriate for heavy-update workloads.
Cursor sensitivity, however, determines which changes are visible (or not) to the cursor's result. Because
cursor sensitivity is specified on a cursor basis, cursor sensitivity applies to both the effects of other
transactions and to update activity of the same transaction, although these effects depend entirely on the
cursor type specified. By setting cursor sensitivity, you are not directly determining when locks are placed on
rows in the database. However, it is the combination of cursor sensitivity and isolation level that controls the
various concurrency scenarios that are possible with a particular application.
When you request a cursor type from your client application, a cursor is provided. Cursors are defined, not by
the type as specified in the programming interface, but by the sensitivity of the result set to changes in the
underlying data.
Depending on the cursor type you ask for, a cursor is provided with behavior to match the type.
Forward-only, read-only cursors are available by using SACommand.ExecuteReader. The SADataAdapter object
uses a client-side result set instead of cursors.
The following table illustrates the cursor sensitivity that is set in response to different ODBC scrollable cursor
types.
STATIC: Insensitive
KEYSET-DRIVEN: Value-sensitive
DYNAMIC: Sensitive
MIXED: Value-sensitive
A MIXED cursor is obtained by setting the cursor type to SQL_CURSOR_KEYSET_DRIVEN, and then specifying
the number of rows in the keyset for a keyset-driven cursor using SQL_ATTR_KEYSET_SIZE. If the keyset size is
0 (the default), the cursor is fully keyset-driven. If the keyset size is greater than 0, the cursor is mixed
(keyset-driven within the keyset and dynamic outside the keyset). The default keyset size is 0. It is an error if the keyset
size is greater than 0 and less than the rowset size (SQL_ATTR_ROW_ARRAY_SIZE).
ODBC cursor characteristics help you decide the cursor type to request.
If a STATIC cursor is requested as updatable, a value-sensitive cursor is supplied instead and a warning is
issued.
If a DYNAMIC or MIXED cursor is requested and the query cannot be executed without using work tables, a
warning is issued and an asensitive cursor is supplied instead.
Related Information
The SQL Anywhere JDBC driver supports three types of cursors: insensitive, sensitive, and forward-only
asensitive.
The SQL Anywhere JDBC driver supports these different cursor types for a JDBC ResultSet object. However,
there are cases when the database server cannot construct an access plan with the required semantics for a
given cursor type. In these cases, the database server either returns an error or substitutes a different cursor
type.
Related Information
To request a cursor from an Embedded SQL application, you specify the cursor type on the DECLARE
statement. The following table illustrates the cursor sensitivity that is set in response to different requests:
NO SCROLL: Asensitive
DYNAMIC SCROLL: Asensitive
SCROLL: Value-sensitive
INSENSITIVE: Insensitive
SENSITIVE: Sensitive
Exceptions
If a DYNAMIC SCROLL cursor is requested, if the prefetch database option is set to Off, and if the query
execution plan involves no work tables, then a sensitive cursor may be supplied. Again, this uncertainty fits the
definition of asensitive behavior.
The underlying protocol (TDS) for Open Client supports only forward-only, read-only, asensitive cursors.
Some applications build SQL statements that cannot be completely specified in the application. Sometimes
statements are dependent on a user response before the application knows exactly what information to
retrieve, such as when a reporting application allows a user to select which columns to display.
In such a case, the application needs a method for retrieving information about both the nature of the result
set and the contents of the result set. The information about the nature of the result set, called a descriptor,
identifies the data structure, including the number and type of columns expected to be returned. Once the
application has determined the nature of the result set, retrieving the contents is straightforward.
This result set metadata (information about the nature and content of the data) is manipulated using
descriptors. Obtaining and managing the result set metadata is called describing.
Since cursors generally produce result sets, descriptors and cursors are closely linked, although some
interfaces hide the use of descriptors from the user. Typically, statements needing descriptors are either
SELECT statements or stored procedures that return result sets.
1. Allocate the descriptor. This may be done implicitly, although some interfaces allow explicit allocation as
well.
2. Prepare the statement.
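As a concrete illustration of describing, Python's DB-API exposes a simple descriptor through cursor.description (shown here with the standard sqlite3 module; Embedded SQL's SQLDA and ODBC descriptor handles expose the same kind of information in more detail):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Employees (Surname TEXT, GivenName TEXT, BirthDate TEXT)")

# Prepare/execute the statement; the interface fills in a descriptor
# for the result set before any row is fetched.
cursor = conn.execute("SELECT Surname, GivenName FROM Employees")

# The descriptor tells the application how many columns to expect and
# what they are called, even for a dynamically built query.
columns = [d[0] for d in cursor.description]
print(columns)   # ['Surname', 'GivenName']
```

Note that the descriptor is available before any rows are returned, which is what lets a reporting application lay out its output for a query it did not know at compile time.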
Implementation Notes
● In Embedded SQL, a SQLDA (SQL Descriptor Area) structure holds the descriptor information.
● In ODBC, a descriptor handle allocated using SQLAllocHandle provides access to the fields of a descriptor.
You can manipulate these fields using SQLSetDescRec, SQLSetDescField, SQLGetDescRec, and
SQLGetDescField.
Alternatively, you can use SQLDescribeCol, SQLColAttribute, or SQLColAttributes (ODBC 2.0) to obtain
column information.
● In Open Client, you can use ct_dynamic to prepare a statement and ct_describe to describe the result set
of the statement. However, you can also use ct_command to send a SQL statement without preparing it
first and use ct_results to handle the returned rows one by one. This is the more common way of operating
in Open Client application development.
● In JDBC, the java.sql.ResultSetMetaData class provides information about result sets.
● You can also use descriptors for sending data to the database server (for example, with the INSERT
statement); however, this is a different kind of descriptor than for result sets.
Related Information
Transactions are sets of atomic SQL statements. Either all statements in the transaction are executed, or none.
In this section:
Database programming interfaces can operate in either manual commit mode or autocommit mode.
Operations are committed only when your application carries out an explicit commit operation or when the
database server carries out an automatic commit, for example when executing an ALTER TABLE statement
or other data definition statement. Manual commit mode is also sometimes called chained mode.
To use transactions in your application, including nested transactions and savepoints, you must operate in
manual commit mode.
Autocommit mode
Autocommit mode can affect the performance and behavior of your application. Do not use autocommit if your
application requires transactional integrity.
In this section:
The way to control the commit behavior of your application depends on the programming interface you are
using. The implementation of autocommit may be client-side or server-side, depending on the interface.
By default, the ADO.NET data provider operates in autocommit mode. To use explicit transactions, use the
SAConnection.BeginTransaction method. Automatic commit is handled on the client side by the provider.
By default, Embedded SQL applications operate in manual commit mode. To enable automatic commits
temporarily, set the auto_commit database option (a server-side option) to On using a statement such as the
following:
By default, JDBC operates in autocommit mode. To turn off autocommit, use the setAutoCommit method of
the connection object:
conn.setAutoCommit( false );
By default, ODBC operates in autocommit mode. The way you turn off autocommit depends on whether you
are using ODBC directly, or using an application development tool. If you are programming directly to the
ODBC interface, set the SQL_ATTR_AUTOCOMMIT connection attribute. The following example disables
autocommit.
By default, the OLE DB provider operates in autocommit mode. To use explicit transactions, use the
ITransactionLocal::StartTransaction, ITransaction::Commit, and ITransaction::Abort methods.
By default, a connection made through Open Client operates in autocommit mode. You can change this
behavior by setting the chained database option (a server-side option) to On in your application using a
statement such as the following:
By default, Perl operates in autocommit mode. To disable autocommit, set the AutoCommit option:
By default, PHP operates in autocommit mode. To disable autocommit, use the sasql_set_option function:
By default, Python operates in manual commit mode. To enable autocommit, set the auto_commit database
option (a server-side option) to On using a statement such as the following:
By default, Ruby operates in manual commit mode. To enable autocommit, set the auto_commit database
option (a server-side option) to On using a statement such as the following:
By default, the database server operates in manual commit mode. To enable automatic commits temporarily,
set the auto_commit database option (a server-side option) to On using a statement such as the following:
Note
Do not set the auto_commit server option directly when using an API such as ADO.NET, JDBC, ODBC, or
OLE DB. Use the API-specific mechanism for enabling or disabling automatic commit. For example, in
ODBC set the SQL_ATTR_AUTOCOMMIT connection attribute using SQLSetConnectAttr. When you use
the API, the driver can track the current setting of automatic commit.
The auto_commit option cannot be set in Interactive SQL using a SET TEMPORARY OPTION statement
since this sets the Interactive SQL option of the same name. You could embed the statement in an EXECUTE
IMMEDIATE statement; however, this is inadvisable because unpredictable behavior can result. By default,
Interactive SQL operates in manual commit mode (auto_commit = 'Off').
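The distinction between manual commit mode and autocommit can be illustrated with Python's DB-API. This sketch uses the standard sqlite3 module rather than the SQL Anywhere Python driver; the commit and rollback semantics shown are analogous.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (n INTEGER)")
conn.commit()

# Manual commit mode: work is tentative until an explicit commit.
conn.execute("INSERT INTO T VALUES (1)")
conn.rollback()   # the uncommitted insert is undone
after_rollback = conn.execute("SELECT COUNT(*) FROM T").fetchone()[0]

conn.execute("INSERT INTO T VALUES (1)")
conn.commit()     # now the change is permanent
after_commit = conn.execute("SELECT COUNT(*) FROM T").fetchone()[0]
```

In autocommit mode the rollback in the middle would have nothing to undo, since every statement would commit as it executed; this is why transactional applications must run in manual commit mode.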
Related Information
Autocommit mode has slightly different behavior depending on the interface and provider that you are using
and how you control the autocommit behavior.
Some application programming interfaces operate in manual commit mode by default. Others operate in
automatic commit (or autocommit) mode by default. Consult the documentation for each API to determine
which mode is default.
The way autocommit behavior is controlled depends on the application programming interface.
To enable or disable autocommit, some application programming interfaces provide native callable
methods. Examples are ADO.NET, ADO/OLE DB, JDBC, ODBC, Perl, and PHP.
For example, in ODBC applications you enable autocommit using an API call as follows:
To enable or disable autocommit, some application programming interfaces require the execution of a SQL
SET OPTION statement. Examples are ESQL, Python, and Ruby. The following statement temporarily
enables autocommit on the database server for this connection only.
When you use the SAP Open Client interface, you set the CHAINED option to manipulate autocommit
behavior. The CHAINED option is provided for TDS application compatibility. If you are using the jConnect
JDBC driver, then you must call the JDBC setAutoCommit method rather than set the CHAINED option.
Note
If the application programming interface provides a callable native method specifically for enabling or
disabling autocommit, then that interface must be used.
You can set the isolation level of a current connection using the isolation_level database option.
Some interfaces, such as ODBC, allow you to set the isolation level for a connection at connection time. You can
reset this level later using the isolation_level database option.
You can override any temporary or public settings for the isolation_level database option within individual
INSERT, UPDATE, DELETE, SELECT, UNION, EXCEPT, and INTERSECT statements by including an OPTION
clause in the statement.
If either of these two cases is true, the cursor stays open on a COMMIT.
If a transaction rolls back, then cursors close except for those cursors opened WITH HOLD. However, don't rely
on the contents of any cursor after a rollback.
The draft ISO SQL3 standard states that on a rollback, all cursors (even those cursors opened WITH HOLD)
should close. You can obtain this behavior by setting the ansi_close_cursors_on_rollback option to On.
Savepoints
If a transaction rolls back to a savepoint, and if the ansi_close_cursors_on_rollback option is On, then all
cursors (even those cursors opened WITH HOLD) opened after the SAVEPOINT close.
You can change the isolation level of a connection during a transaction using the SET OPTION statement to
alter the isolation_level option. However, this change does not affect open cursors.
A snapshot of all rows committed at the snapshot start time is visible when the WITH HOLD clause is used with
the snapshot, statement-snapshot, and readonly-statement-snapshot isolation levels. Also visible are all
Use SQL Anywhere with .NET, including the API for the SQL Anywhere .NET Data Provider.
In this section:
Microsoft ADO.NET is the latest data access API in the line of ODBC, OLE DB, and ADO. It is the preferred data
access component for the Microsoft .NET Framework and allows you to access relational database systems.
The SQL Anywhere .NET Data Provider implements the Sap.Data.SQLAnywhere namespace and allows you to
write programs in any of the .NET supported languages, such as Microsoft C# and Microsoft Visual Basic .NET,
and access data from SQL Anywhere databases.
You can develop Internet and intranet applications using object-oriented languages, and then connect these
applications to the database server using the SQL Anywhere .NET Data Provider.
Using the SQL Anywhere .NET Data Provider in a Microsoft Visual Studio Project [page 55]
Use the SQL Anywhere .NET Data Provider to develop Microsoft .NET applications with Microsoft Visual
Studio by including both a reference to the SQL Anywhere .NET Data Provider, and a line in your source
code referencing the SQL Anywhere .NET Data Provider classes.
Sap.Data.SQLAnywhere
The ADO.NET object model is an all-purpose data access model. ADO.NET components were designed to
factor data access from data manipulation. There are two central components of ADO.NET that do this: the
DataSet, and the .NET Framework data provider, which is a set of components including the Connection,
Command, DataReader, and DataAdapter objects. A .NET Entity Framework Data Provider is included that
communicates directly with a database server without adding the overhead of OLE DB or ODBC. The .NET
Data Provider is represented in the .NET namespace as Sap.Data.SQLAnywhere.
The SQL Anywhere .NET Data Provider namespace is described in this document.
System.Data.OleDb
This namespace supports OLE DB data sources. This namespace is an intrinsic part of the Microsoft .NET
Framework. You can use System.Data.OleDb together with the OLE DB provider, SAOLEDB, to access
databases.
System.Data.Odbc
This namespace supports ODBC data sources. This namespace is an intrinsic part of the Microsoft .NET
Framework. You can use System.Data.Odbc together with the ODBC drivers to access databases.
There are some key benefits to using the .NET Data Provider:
● In the .NET environment, the .NET Data Provider provides native access to a database. Unlike the other
supported providers, it communicates directly with a database server and does not require bridge
technology.
● As a result, the .NET Data Provider is faster than the OLE DB and ODBC Data Providers. It is the
recommended Data Provider for accessing SQL Anywhere databases.
There are several sample projects included with the SQL Anywhere .NET Data Provider.
DeployUtility
This is a code example to assist in the deployment of the unmanaged code portions of the SQL
Anywhere .NET Data Provider in a ClickOnce deployment.
LinqSample
A .NET Framework sample project for Windows that demonstrates language-integrated query, set, and
transform operations using the SQL Anywhere .NET Data Provider and C#.
SimpleWin32
A Microsoft .NET Framework sample project for Microsoft Windows that demonstrates how to obtain XML
data from a database via Microsoft ADO.NET. Samples for Microsoft C#, Visual Basic, and Microsoft Visual
C++ are provided.
SimpleViewer
A Microsoft .NET Framework sample project for Microsoft Windows that allows you to enter and execute
SQL statements.
Related Information
ClickOnce and .NET Data Provider Unmanaged Code DLLs [page 812]
Tutorial: Using the Simple Code Sample in SimpleWin32 [page 103]
Tutorial: Developing a Simple .NET Database Application with Microsoft Visual Studio [page 111]
Tutorial: Using the Table Viewer Code Sample [page 107]
Use the SQL Anywhere .NET Data Provider to develop Microsoft .NET applications with Microsoft Visual Studio
by including both a reference to the SQL Anywhere .NET Data Provider, and a line in your source code
referencing the SQL Anywhere .NET Data Provider classes.
Procedure
The reference indicates which provider to include and locates the code for the SQL Anywhere .NET Data
Provider.
3. Click the .NET tab (or open Assemblies/Extensions), and scroll through the list to locate any of the
following:
○ Sap.Data.SQLAnywhere for .NET 3.5
○ Sap.Data.SQLAnywhere for .NET 4
○ Sap.Data.SQLAnywhere for .NET 4.5
○ Sap.Data.SQLAnywhere for Entity Framework 6
The provider is added to the References folder in the Solution Explorer window of your project.
5. Add a directive to your source code to simplify the use of the SQL Anywhere .NET Data Provider
namespace and its defined types.
○ If you are using C#, add the following line to the list of using directives at the beginning of your source
code:
using Sap.Data.SQLAnywhere;
○ If you are using Visual Basic, add the following line at the beginning of your source code:
Imports Sap.Data.SQLAnywhere
Results
The SQL Anywhere .NET Data Provider is set up for use with your SQL Anywhere .NET application.
Example
The following C# example shows how to create a connection object when a using directive has been specified:
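The example code itself is not reproduced above; a minimal sketch of what such a declaration looks like,
assuming the using directive from the previous step is in place:

```csharp
// With "using Sap.Data.SQLAnywhere;" in effect, the short class name suffices.
SAConnection conn = new SAConnection();
```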
The following C# example shows how to create a connection object when a using directive has not been
specified:
Sap.Data.SQLAnywhere.SAConnection conn =
new Sap.Data.SQLAnywhere.SAConnection();
The following Microsoft Visual Basic example shows how to create a connection object when an Imports
directive has been specified:
The following Microsoft Visual Basic example shows how to create a connection object when an Imports
directive has not been specified:
To connect to a database, an SAConnection object must be created. The connection string can be specified
when creating the object or it can be established later by setting the ConnectionString property.
A well-designed application should handle any errors that occur when attempting to connect to a database.
A connection to the database is created when the connection is opened and released when the connection is
closed.
The following Microsoft C# code creates a button click handler that opens a connection to the sample
database and then closes it. An exception handler is included.
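As a sketch (not the original sample), such a handler might look like the following, assuming a Windows
Forms application and the "SQL Anywhere 17 Demo" data source used elsewhere in this documentation:

```csharp
// Hypothetical button click handler; requires System.Data, System.Windows.Forms,
// and Sap.Data.SQLAnywhere using directives.
private void btnConnect_Click(object sender, EventArgs e)
{
    SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
    try
    {
        conn.Open();    // the connection to the database is created here
        conn.Close();   // and released here
    }
    catch (SAException ex)
    {
        // Report the first error in the exception's error collection.
        MessageBox.Show(ex.Errors[0].Message);
    }
}
```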
The following Microsoft Visual Basic code creates a button click handler that opens a connection to the sample
database and then closes it. An exception handler is included.
In this section:
Use connection parameters in connection strings to connect and authenticate to a database server.
Specify a connection string in a .NET application when the connection object is created or by setting the
ConnectionString property of a connection object.
Connection parameter names are case-insensitive. For example, UserID and userid are equivalent. If a
connection parameter name contains spaces, the spaces must be preserved.
Connection parameter values can be case sensitive. For example, passwords are usually case sensitive.
Connection lifetime Specifies the maximum lifetime (in seconds) for a connection that is to be pooled. If
the time between closing a connection and the time it was originally opened is longer than the maximum
lifetime, then the connection is not pooled but closed. A connection may have been opened, closed, and
pooled many times, but once the total lifetime is exceeded, it is no longer returned to the pool. This can
cause the pool of available connections to shrink over time. The default is 0, which means no maximum.
Connection lifetime=seconds
Connection timeout Specifies the length of time (in seconds) to wait for a connection to the database
server before terminating the attempt and generating an error. The default is 15. The alternate form
Connect Timeout can be used.
Connection timeout=seconds
DatabaseName Identifies the database name. The alternate form DBN can
be used.
DatabaseName=db-name
Data Source Identifies the data source name. The alternate forms
DataSourceName and DSN can be used.
Data Source=datasource-name
Enlist When set false, transactions are not enlisted with the Microsoft Distributed Transaction
Coordinator.
Enlist=boolean-value
FileDataSourceName Identifies the file data source name. The alternate form
FileDSN can be used.
FileDataSourceName=file-name
Host=host-spec[:host-port]
InitString=sql-statement
Max pool size Specifies the maximum size of the connection pool. The default is 100.
Min pool size Specifies the minimum size of the connection pool. The default is 0.
PWD=passcode
Persist security info Indicates whether the Password (PWD) connection parameter must be retained in
the ConnectionString property of the connection object. The default is false.
When set true, the application can obtain the user's password from the ConnectionString property if the
Password (PWD) connection parameter was specified in the original connection string.
Pooling=boolean-value
Server Identifies the host name and port of the database server.
The alternate form ServerName can be used.
Server=sqla-server:port
UID=username
Note
Avoid the use of User ID in connection strings in application configuration files. UserID and UID can be
used instead.
Not all connection parameters are described here. For information on other connection parameters, see the
topic on database connection parameters.
Note that the ConnectionPool (CPOOL) connection parameter is ignored by the .NET Data Provider.
● Creates a connection object setting the ConnectionString property and then connects to the database
server.
● Creates a connection object, sets the ConnectionString property for the connection object, and then
connects to the database server. A SET TEMPORARY OPTION statement is executed after connecting to
the database server to verify the application signature against the database signature for authenticated
applications. Since the SET TEMPORARY OPTION statement contains semicolons, it must be enclosed by
quotation marks (").
● Connects to a database server running on the computer identified by an IP address using a user ID and
password that were obtained from the user.
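A hedged sketch of the third case; the IP address, port, and credentials shown here are placeholders, and
in a real application the user ID and password would be obtained from the user rather than hard-coded:

```csharp
// userId and password are assumed to have been collected from the user.
string userId = "DBA";      // placeholder value
string password = "sql";    // placeholder value

// Host names the computer (by IP address here) and port of the database server.
SAConnection conn = new SAConnection(
    "Host=10.25.99.1:2638;UID=" + userId + ";PWD=" + password);
conn.Open();
// ... work with the connection ...
conn.Close();
```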
The SQL Anywhere .NET Data Provider supports native .NET connection pooling. Connection pooling allows
your application to reuse existing connections by saving the connection handle to a pool, rather than
repeatedly creating and tearing down connections to the database.
Connection pooling is enabled and disabled using the Pooling connection parameter. Connection pooling is
enabled by default.
The maximum pool size is set in your connection string using the Max Pool Size parameter. The minimum or
initial pool size is set in your connection string using the Min Pool Size parameter. The default maximum pool
size is 100, while the default minimum pool size is 0.
The following is an example of a .NET connection string. Data source names are supported.
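The example string itself is not reproduced above; a plausible sketch, assuming the demo data source
name used elsewhere in this documentation:

```
DSN=SQL Anywhere 17 Demo;Pooling=true;Min Pool Size=5;Max Pool Size=50
```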
Do not use the ConnectionPool (CPOOL) connection parameter to enable or disable .NET connection pooling.
Use the Pooling connection parameter for this purpose.
When your application first attempts to connect to the database, it checks the pool for an existing connection
that uses the same connection parameters you have specified. If a matching connection is found, that
connection is used. Otherwise, a new connection is used. When you disconnect, the connection is returned to
the pool so that it can be reused.
Multiple connection pools are supported by .NET and a different connection string creates a different pool. To
reuse a pooled connection, the connection strings must be textually identical. For .NET applications, the
following two connection strings are not considered equivalent for connection pooling; thus each string would
create its own connection pool.
UID=DBA;PWD=passwd;ConnectionName=One
UserID=DBA;PWD=passwd;ConnectionName=One
If Min Pool Size=5, then 5 connections are made to the database server when the first connection is
opened. The next 4 concurrent connections come out of this pool. Any additional concurrent connections
are added as needed. This option is useful for multithreaded applications that make several connections
and need a quick response to an "open" request (the open is as fast as reusing a pooled connection).
If Max Pool Size=20, then up to 20 concurrent connections will be pooled when closed. Any concurrent
connections beyond the first 20 closed connections will not be pooled (that is, at most 20 connections will be
pooled and the rest will be closed). Note that each pooled connection counts as a "real" connection to the database
server. Max Pool Size does not prevent the application from making as many concurrent connections as it
desires.
A .NET application that makes at most one connection to the database server at a time will not need to adjust
the Min and Max pool size settings.
Connection pooling is not supported for non-standard database authentication such as Integrated or Kerberos
logins. Only user ID and password authentication is supported.
The database server also supports connection pooling. This feature is controlled using the ConnectionPool
(CPOOL) connection parameter. However, the .NET Data Provider does not use this server feature and disables
it (CPOOL=NO). All connection pooling is done in the .NET client application instead (client-side connection
pooling).
Once your application has established a connection to the database, you can check the connection state to
ensure that the connection is still open before communicating a request to the database server.
If a connection is closed, you can return an appropriate message to the user and/or attempt to reopen the
connection.
The SAConnection class has a State property that can be used to check the state of the connection. Possible
state values are ConnectionState.Open and ConnectionState.Closed.
The following code checks whether the SAConnection object has been initialized, and if it has, it checks that
the connection is open. A message is returned to the user if the connection is not open.
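A sketch of such a check, assuming conn is an SAConnection field and a Windows Forms MessageBox is
used to report the problem:

```csharp
// conn may not have been initialized, and may have been closed since opening.
if (conn == null || conn.State != ConnectionState.Open)
{
    MessageBox.Show("The database connection is not open.");
}
else
{
    // Safe to send requests to the database server here.
}
```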
With the SQL Anywhere .NET Data Provider, there are two ways you can access data, using the SACommand
class or the SADataAdapter class.
SACommand object
The SACommand object is the recommended way of accessing and manipulating data in .NET.
The SACommand object allows you to execute SQL statements that retrieve or modify data directly from
the database. Using the SACommand object, you can issue SQL statements and call stored procedures
directly against the database.
Within an SACommand object, an SADataReader is used to return read-only result sets from a query or
stored procedure. The SADataReader returns only one row at a time, but this does not degrade
performance because the client-side libraries prefetch several rows at a time.
Using the SACommand object allows you to group your changes into transactions rather than operating in
autocommit mode. When you use the SATransaction object, locks are placed on the rows so that other
users cannot modify them.
SADataAdapter object
The SADataAdapter object retrieves the entire result set into a DataSet. A DataSet is a disconnected store
for data that is retrieved from a database. You can then edit the data in the DataSet and when you are
finished, the SADataAdapter object updates the database with the changes made to the DataSet. When
you use the SADataAdapter, there is no way to prevent other users from modifying the rows in your
DataSet. You must include logic within your application to resolve any conflicts that may occur.
There is no performance impact from using the SADataReader within an SACommand object to fetch rows
from the database rather than the SADataAdapter object.
In this section:
SACommand: Insert, Delete, and Update Rows Using ExecuteNonQuery [page 68]
To perform an insert, update, or delete with an SACommand object, use the ExecuteNonQuery method.
The ExecuteNonQuery method issues a query (SQL statement or stored procedure) that does not
return a result set.
SACommand: Retrieve Primary Key Values for Newly Inserted Rows [page 69]
If the table you are updating has an autoincremented primary key, uses UUIDs, or draws its primary
key from a primary key pool, you can use a stored procedure to obtain the primary key values
generated by the data source.
SADataAdapter: Retrieve Primary Key Values for Newly Inserted Rows [page 77]
If the table you are updating has an autoincremented primary key, uses UUIDs, or draws its primary
key from a primary key pool, you can use a stored procedure to obtain the primary key values
generated by the data source.
The SACommand object allows you to execute a SQL statement or call a stored procedure against a database.
You can use the ExecuteReader or ExecuteScalar methods to retrieve data from the database.
ExecuteReader
Issues a SQL query that returns a result set. This method uses a forward-only, read-only cursor. You can
loop quickly through the rows of the result set in one direction.
ExecuteScalar
Issues a SQL query that returns a single value. This can be the first column in the first row of the result set,
or a SQL statement that returns an aggregate value such as COUNT or AVG. This method uses a forward-
only, read-only cursor.
When using the SACommand object, you can use the SADataReader to retrieve a result set that is based on a
join. However, you can only make changes (inserts, updates, or deletes) to data that is from a single table. You
cannot update result sets that are based on joins.
When using the SADataReader, there are several Get methods available that you can use to return the results in
the specified data type.
The following Microsoft C# code opens a connection to the sample database and uses the ExecuteReader
method to create a result set containing the last names of employees in the Employees table:
The following Microsoft Visual Basic code opens a connection to the sample database and uses the
ExecuteReader method to create a result set containing the last names of employees in the Employees table:
The following Microsoft C# code opens a connection to the sample database and uses the ExecuteScalar
method to obtain a count of the number of male employees in the Employees table:
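A sketch of both methods (not the original samples), assuming an open connection to the sample
database and its Employees table with Surname and Sex columns:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();

// ExecuteReader: loop through a result set one row at a time.
SACommand cmd = new SACommand("SELECT Surname FROM Employees", conn);
SADataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader.GetString(0));
}
reader.Close();

// ExecuteScalar: fetch a single aggregate value.
SACommand countCmd = new SACommand(
    "SELECT COUNT(*) FROM Employees WHERE Sex = 'M'", conn);
int maleEmployees = Convert.ToInt32(countCmd.ExecuteScalar());

conn.Close();
```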
You can obtain schema information about columns in a result set using the GetSchemaTable method.
The GetSchemaTable method of the SADataReader class obtains information about the current result set. The
GetSchemaTable method returns the standard .NET DataTable object, which provides information about all the
columns in the result set, including column properties.
The following example obtains information about a result set using the GetSchemaTable method and binds the
DataTable object to the datagrid on the screen.
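A sketch of that pattern; dataGridView1 stands in for whatever grid control the application uses:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
SACommand cmd = new SACommand("SELECT * FROM Employees", conn);
SADataReader reader = cmd.ExecuteReader();
// GetSchemaTable returns a DataTable with one row per result-set column.
DataTable schema = reader.GetSchemaTable();
dataGridView1.DataSource = schema;   // bind the column metadata to a grid
reader.Close();
conn.Close();
```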
To perform an insert, update, or delete with an SACommand object, use the ExecuteNonQuery method. The
ExecuteNonQuery method issues a query (SQL statement or stored procedure) that does not return a result
set.
You can only make changes (inserts, updates, or deletes) to data that is from a single table. You cannot update
result sets that are based on joins. You must be connected to a database to use the SACommand object.
For information about obtaining primary key values for AUTOINCREMENT primary keys, see the
documentation on retrieving primary key values for newly inserted rows.
To set the isolation level for a SQL statement, you must use the SACommand object as part of an
SATransaction object. When you modify data without an SATransaction object, the provider operates in
autocommit mode and any changes that you make are applied immediately.
The following example opens a connection to the sample database and uses the ExecuteNonQuery method to
remove all departments whose ID is greater than or equal to 600 and then add two new rows to the
Departments table. It displays the updated table in a datagrid.
The following example opens a connection to the sample database and uses the ExecuteNonQuery method to
update the DepartmentName column to "Engineering" in all rows of the Departments table where the
DepartmentID is 100. It displays the updated table in a datagrid.
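A sketch of the second example's core (without the grid refresh), assuming the sample database's
Departments table:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
SACommand cmd = new SACommand(
    "UPDATE Departments SET DepartmentName = 'Engineering' " +
    "WHERE DepartmentID = 100", conn);
// ExecuteNonQuery returns the number of rows affected, not a result set.
int rowsAffected = cmd.ExecuteNonQuery();
conn.Close();
```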
Related Information
If the table you are updating has an autoincremented primary key, uses UUIDs, or draws its primary key
from a primary key pool, you can use a stored procedure to obtain the primary key values generated by the
data source.
The following example shows how to obtain the primary key that is generated for a newly inserted row. The
example uses an SACommand object to call a SQL stored procedure and an SAParameter object to retrieve the
primary key that it returns. For demonstration purposes, the example creates a sample table
(adodotnet_primarykey) and the stored procedure (sp_adodotnet_primarykey) that is used to insert rows and
return primary key values.
The SADataAdapter retrieves a result set into a DataTable. A DataSet is a collection of tables (DataTables) and
the relationships and constraints between those tables. The DataSet is built into the .NET Framework, and is
independent of the Data Provider used to connect to your database.
When you use the SADataAdapter, you must be connected to the database to fill a DataTable and to update the
database with changes made to the DataTable. However, once the DataTable is filled, you can modify the
DataTable while disconnected from the database.
A DataSet can be saved to an XML file, with or without its schema, using the WriteXml method:
ds.WriteXml("Employees.xml");
ds.WriteXml("EmployeesWithSchema.xml", XmlWriteMode.WriteSchema);
For more information, see the .NET Framework documentation for WriteXml and ReadXml.
When you call the Update method to apply changes from the DataSet to the database, the SADataAdapter
analyzes the changes that have been made and then invokes the appropriate statements, INSERT, UPDATE, or
DELETE, as necessary. When you use the DataSet, you can only make changes (inserts, updates, or deletes) to
data that is from a single table. You cannot update result sets that are based on joins. If another user has a lock
on the row you are trying to update, an exception is thrown.
Caution
Any changes you make to the DataSet are made while you are disconnected. Your application does not have
locks on these rows in the database. Your application must be designed to resolve any conflicts that may
occur when changes from the DataSet are applied to the database if another user changes the data you are
modifying before your changes are applied to the database.
When you use the SADataAdapter, no locks are placed on the rows in the database. This means there is the
potential for conflicts to arise when you apply changes from the DataSet to the database. Your application
should include logic to resolve or log conflicts that arise.
Some of the conflicts that your application logic should address include:
Unique primary keys
If two users insert new rows into a table, each row must have a unique primary key. For tables with
AUTOINCREMENT primary keys, the values in the DataSet may become out of sync with the values in the
data source. It is possible to obtain the values for AUTOINCREMENT primary keys for newly inserted rows.
Updates made to the same value
If two users modify the same value, your application should include logic to determine which value is
correct.
Schema changes
If a user modifies the schema of a table you have updated in the DataSet, the update will fail when you
apply the changes to the database.
Data concurrency
Concurrent applications should see a consistent set of data. The SADataAdapter does not place a lock on
rows that it fetches, so another user can update a value in the database once you have retrieved the
DataSet and are working offline.
Many of these potential problems can be avoided by using the SACommand, SADataReader, and
SATransaction objects to apply changes to the database. The SATransaction object is recommended
because it places locks on the rows so that other users cannot modify them until the transaction ends.
To simplify the process of conflict resolution, you can design your INSERT, UPDATE, or DELETE statement to be
a stored procedure call. By including INSERT, UPDATE, and DELETE statements in stored procedures, you can
catch the error if the operation fails. In addition to the statement, you can add error handling logic to the stored
procedure so that if the operation fails the appropriate action is taken, such as recording the error to a log file,
or trying the operation again.
Related Information
SADataAdapter: Retrieve Primary Key Values for Newly Inserted Rows [page 77]
SACommand: Insert, Delete, and Update Rows Using ExecuteNonQuery [page 68]
The SADataAdapter allows you to view a result set by using the Fill method to fill a DataTable with the results
from a query and then binding the DataTable to a display grid.
When setting up an SADataAdapter, you can specify a SQL statement that returns a result set. When Fill is
called to populate a DataTable, all the rows are fetched in one operation using a forward-only, read-only cursor.
Once all the rows in the result set have been read, the cursor is closed. Changes made to the rows in a
DataTable can be reflected to the database using the Update method.
You can use the SADataAdapter object to retrieve a result set that is based on a join. However, you can only
make changes (inserts, updates, or deletes) to data that is from a single table. You cannot update result sets
that are based on joins.
Caution
Any changes you make to a DataTable are made independently of the original database table. Your
application does not have locks on these rows in the database. Your application must be designed to
resolve any conflicts that may occur when changes from the DataTable are applied to the database if
another user changes the data you are modifying before your changes are applied to the database.
The following example shows how to fill a DataTable using the SADataAdapter. It creates a new DataTable
object named Results and a new SADataAdapter object. The SADataAdapter Fill method is used to fill the
DataTable with the results of the query. The DataTable is then bound to the grid on the screen.
The following example shows how to fill a DataTable using the SADataAdapter. It creates a new DataSet object
and a new SADataAdapter object. The SADataAdapter Fill method is used to create a DataTable table named
Results in the DataSet and then fill it with the results of the query. The Results DataTable is then bound to the
grid on the screen.
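A sketch of the first (DataTable) variant, assuming the sample database's Departments table and a
hypothetical dataGridView1 control:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
DataTable results = new DataTable("Results");
SADataAdapter da = new SADataAdapter("SELECT * FROM Departments", conn);
// Fill fetches all rows with a forward-only, read-only cursor, then closes it.
da.Fill(results);
dataGridView1.DataSource = results;   // bind the filled DataTable to a grid
conn.Close();
```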
The SADataAdapter allows you to configure the schema of a DataTable to match that of a specific query using
the FillSchema method. The attributes of the columns in the DataTable will match those of the
SelectCommand of the SADataAdapter object.
The following example shows how to use the FillSchema method to set up a new DataTable object with the
same schema as a result set. The Additions DataTable is then bound to the grid on the screen.
The following example shows how to use the FillSchema method to set up a new DataTable object with the
same schema as a result set. The DataTable is added to the DataSet using the Merge method. The Additions
DataTable is then bound to the grid on the screen.
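A sketch of the first FillSchema variant, under the same assumptions as the Fill example (sample
Departments table, hypothetical dataGridView1 control):

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
SADataAdapter da = new SADataAdapter("SELECT * FROM Departments", conn);
DataTable additions = new DataTable("Additions");
// FillSchema copies the column definitions of the query's result set; no rows.
da.FillSchema(additions, SchemaType.Source);
dataGridView1.DataSource = additions;   // bind the empty, schema-only table
conn.Close();
```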
The SADataAdapter allows you to insert rows using the Add and Update methods.
The example shows how to use the Update method of SADataAdapter to add rows to a table. The example
fetches the Departments table into a DataTable using the SelectCommand property and the Fill method of the
SADataAdapter. It then adds two new rows to the DataTable and updates the Departments table from the
DataTable using the InsertCommand property and the Update method of the SADataAdapter.
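A sketch of that flow; the department IDs and names, and the exact SAParameter constructor arguments
(name, type, size, source column), are illustrative assumptions:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
SADataAdapter da = new SADataAdapter("SELECT * FROM Departments", conn);
// InsertCommand maps DataTable columns onto the "?" placeholders.
da.InsertCommand = new SACommand(
    "INSERT INTO Departments( DepartmentID, DepartmentName ) VALUES( ?, ? )",
    conn);
da.InsertCommand.Parameters.Add(
    new SAParameter("p1", SADbType.Integer, 0, "DepartmentID"));
da.InsertCommand.Parameters.Add(
    new SAParameter("p2", SADbType.Char, 40, "DepartmentName"));
DataTable dt = new DataTable("Departments");
da.Fill(dt);
dt.Rows.Add(600, "Eastern Sales");   // new rows exist only in the DataTable...
dt.Rows.Add(700, "Western Sales");
da.Update(dt);                       // ...until Update applies the inserts
conn.Close();
```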
The SADataAdapter allows you to delete rows using the Delete and Update methods.
The following example shows how to use the Update method of SADataAdapter to delete rows from a table.
The example adds two new rows to the Departments table and then fetches this table into a DataTable using
the SelectCommand property and the Fill method of the SADataAdapter. It then deletes some rows from the
DataTable and updates the Departments table from the DataTable using the DeleteCommand property and the
Update method of the SADataAdapter.
The SADataAdapter allows you to update rows using the Update method.
The following example shows how to use the Update method of SADataAdapter to update rows in a table. The
example adds two new rows to the Departments table and then fetches this table into a DataTable using the
SelectCommand property and the Fill method of the SADataAdapter. It then modifies some values in the
DataTable and updates the Departments table from the DataTable using the UpdateCommand property and
the Update method of the SADataAdapter.
If the table you are updating has an autoincremented primary key, uses UUIDs, or draws its primary key
from a primary key pool, you can use a stored procedure to obtain the primary key values generated by the
data source.
The following example shows how to obtain the primary key that is generated for a newly inserted row. The
example uses an SADataAdapter object to call a SQL stored procedure and an SAParameter object to retrieve
the primary key that it returns. For demonstration purposes, the example creates a sample table
(adodotnet_primarykey) and the stored procedure (sp_adodotnet_primarykey) that is used to insert rows and
return primary key values.
When fetching long string values or binary data, there are methods that you can use to fetch the data in pieces.
For binary data, use the GetBytes method, and for string data, use the GetChars method.
Otherwise, BLOB data is treated in the same manner as any other data you fetch from the database.
The following example reads three columns from a result set. The first two columns are integers, while the third
column is a LONG VARCHAR. The length of the third column is computed by reading this column with the
GetChars method in chunks of 100 characters.
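A sketch of that loop; the table and column names in the query are hypothetical, and an open SAConnection
conn is assumed:

```csharp
// Third column (ordinal 2) is assumed to be a LONG VARCHAR.
SACommand cmd = new SACommand(
    "SELECT row_id, row_seq, long_text FROM sample_table", conn);
SADataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    int id = reader.GetInt32(0);
    int seq = reader.GetInt32(1);
    long length = 0;
    char[] buffer = new char[100];
    long charsRead;
    // Read the long string column in 100-character chunks to compute its length.
    while ((charsRead = reader.GetChars(2, length, buffer, 0, 100)) > 0)
    {
        length += charsRead;
    }
    Console.WriteLine("Row {0}.{1}: length = {2}", id, seq, length);
}
reader.Close();
```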
The .NET Framework does not have a Time structure. To fetch time values from a database, you must use the
GetTimeSpan method.
C# TimeSpan Example
The following example uses the GetTimeSpan method to return the time as TimeSpan.
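A sketch of that call; the query is an illustrative one-row SELECT, and an open SAConnection conn is
assumed:

```csharp
SACommand cmd = new SACommand(
    "SELECT 'Lunch', CAST( '12:30:00' AS TIME )", conn);
SADataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    string name = reader.GetString(0);
    // TIME values must be fetched as TimeSpan; .NET has no Time structure.
    TimeSpan time = reader.GetTimeSpan(1);
    Console.WriteLine("{0}: {1}", name, time);
}
reader.Close();
```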
You can use SQL stored procedures with the SQL Anywhere .NET Data Provider.
The ExecuteReader method is used to call stored procedures that return result sets, while the
ExecuteNonQuery method is used to call stored procedures that do not return any result sets. The
ExecuteScalar method is used to call stored procedures that return only a single value.
The following example shows two ways to call a stored procedure and pass it a parameter. The example uses an
SADataReader to fetch the result set returned by the stored procedure.
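A sketch of the CommandType.StoredProcedure form; ShowProductInfo and its product_ID parameter are
assumed to exist in the sample database, and conn is an open SAConnection:

```csharp
SACommand cmd = new SACommand("ShowProductInfo", conn);
cmd.CommandType = CommandType.StoredProcedure;
// Pass the procedure's input parameter by name.
SAParameter param = cmd.CreateParameter();
param.ParameterName = "product_ID";
param.Direction = ParameterDirection.Input;
param.DbType = DbType.Int32;
param.Value = 301;
cmd.Parameters.Add(param);
// The procedure returns a result set, so ExecuteReader fetches it.
SADataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader.GetValue(1));
}
reader.Close();
```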
Related Information
SACommand: Insert, Delete, and Update Rows Using ExecuteNonQuery [page 68]
SACommand: Fetch Data Using ExecuteReader and ExecuteScalar [page 66]
You can use the SATransaction object to group statements together. Each transaction ends with a call to the
Commit method, which either makes your changes to the database permanent, or the Rollback method, which
cancels all the operations in the transaction.
Once the transaction is complete, you must create a new SATransaction object to make further changes. This
behavior is different from ODBC and Embedded SQL, where a transaction persists after you execute a COMMIT
or ROLLBACK until the transaction is closed.
If you do not create a transaction, the .NET Data Provider operates in autocommit mode by default. There is an
implicit COMMIT after each insert, update, or delete, and once an operation is completed, the change is made
to the database. In this case, the changes cannot be rolled back.
The database isolation level is used by default for transactions. You can choose to specify the isolation level for
a transaction using the IsolationLevel property when you begin the transaction. The isolation level applies to all
statements executed within the transaction. The .NET Data Provider supports snapshot isolation.
The locks that the database server uses when you execute a SQL statement depend on the transaction's
isolation level.
The .NET 2.0 framework introduced a new namespace System.Transactions, which contains classes for writing
transactional applications. Client applications can create and participate in distributed transactions with one or
multiple participants. Client applications can implicitly create transactions using the TransactionScope class.
The connection object can detect the existence of an ambient transaction created by the TransactionScope
and automatically enlist. The client applications can also create a CommittableTransaction and call the
EnlistTransaction method to enlist. This feature is supported by the .NET Data Provider. Distributed
transactions have significant performance overhead; use database transactions for non-distributed work.
C# SATransaction Example
The following example shows how to wrap an INSERT into a transaction so that it can be committed or rolled
back. A transaction is created with an SATransaction object and linked to the execution of a SQL statement
using an SACommand object. Isolation level 2 (RepeatableRead) is specified so that other database users
cannot update the row. The lock on the row is released when the transaction is committed or rolled back. If you
do not use a transaction, the .NET Data Provider operates in autocommit mode and you cannot roll back any
changes that you make to the database.
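A sketch of that pattern (the inserted values are illustrative), assuming the sample database's
Departments table:

```csharp
SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
conn.Open();
// Isolation level 2 (RepeatableRead) keeps other users from updating the row.
SATransaction trans = conn.BeginTransaction(IsolationLevel.RepeatableRead);
SACommand cmd = new SACommand(
    "INSERT INTO Departments( DepartmentID, DepartmentName ) " +
    "VALUES( 600, 'Eastern Sales' )", conn, trans);
try
{
    cmd.ExecuteNonQuery();
    trans.Commit();     // make the insert permanent and release the locks
}
catch (SAException)
{
    trans.Rollback();   // undo the insert and release the locks
}
conn.Close();
```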
Your application should be designed to handle any errors that occur. An SAException object is created when an
exception is thrown. Information about the exception is stored in the SAException object.
The SQL Anywhere .NET Data Provider creates an SAException object and throws an exception whenever
errors occur during execution. Each SAException object consists of a list of SAError objects, and these error
objects include the error message and code.
Errors are different from conflicts. Conflicts arise when changes are applied to the database. Your application
should include a process to compute correct values or to log conflicts when they arise.
The following Microsoft C# code creates a button click handler that opens a connection to the sample
database. If the connection cannot be made, the exception handler displays one or more messages.
The following Microsoft Visual Basic code creates a button click handler that opens a connection to the sample
database. If the connection cannot be made, then the exception handler displays one or more messages.
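A sketch of the C# handler's error handling, assuming a Windows Forms application and that SAError
exposes Message and NativeError properties as in other ADO.NET providers:

```csharp
try
{
    SAConnection conn = new SAConnection("DSN=SQL Anywhere 17 Demo");
    conn.Open();
    conn.Close();
}
catch (SAException ex)
{
    // Each SAException carries a collection of SAError objects.
    foreach (SAError err in ex.Errors)
    {
        MessageBox.Show(err.Message + " (code " + err.NativeError + ")");
    }
}
```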
Related Information
The SQL Anywhere .NET Data Provider supports Entity Framework 5.0 and 6.0, separate packages available
from Microsoft.
To use Entity Framework 5.0 or 6.0, you must add it to Microsoft Visual Studio using Microsoft's NuGet Package
Manager.
One of the newer features of Entity Framework is Code First. It enables a different development workflow:
defining data model objects by writing Microsoft Visual C# .NET or Microsoft Visual Basic .NET classes
that map to database objects, without ever having to open a designer or define an XML mapping file.
Optionally, additional configuration can be performed using data annotations or the Fluent API. Models
can be used to generate a database schema or to map to an existing database.
Here's an example which creates new database objects using the model:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;
using Sap.Data.SQLAnywhere;
namespace CodeFirstExample
{
class Program
{
static void Main( string[] args )
{
Database.DefaultConnectionFactory = new SAConnectionFactory();
Database.SetInitializer<Context>(
new DropCreateDatabaseAlways<Context>() );
using ( var db = new Context(
"DSN=SQL Anywhere 17 Demo;Password="+pwd ) )
{
var query = db.Products.ToList();
}
}
}
}
To build and run this example, the following assembly references must be added:
EntityFramework
Sap.Data.SQLAnywhere.v4.5
System.ComponentModel.DataAnnotations
System.Data.Entity
Be aware that there are some implementation detail differences between the Microsoft .NET Framework Data
Provider for Microsoft SQL Server (SqlClient) and the .NET Data Provider.
2. The guiding principle of Entity Framework Code First is coding by convention: Entity Framework infers the
data model from coding conventions and does many things implicitly, so developers might not be aware of
all of these conventions. Some of them do not make sense for database management systems like SQL
Anywhere, and there are some differences between Microsoft SQL Server and other database servers:
○ Microsoft SQL Server permits access to multiple databases with a single sign-on. SQL Anywhere
permits a connection to one database at a time.
○ If the user creates a user-defined DbContext using the parameterless constructor, SqlClient will
connect to Microsoft SQL Server Express on the local computer using integrated security. The .NET
Data Provider connects to the default server using integrated login if the user has already created a
login mapping.
○ SqlClient drops the existing database and creates a new database when the Entity Framework calls
DbDeleteDatabase or DbCreateDatabase (Microsoft SQL Server Express Edition only). The .NET Data
Provider never drops or creates the database. It creates or drops the database objects (tables,
relations, constraints for example). The user must create the database first.
○ The IDbConnectionFactory.CreateConnection method treats the string parameter
"nameOrConnectionString" as a database name (initial catalog for Microsoft SQL Server) or a
connection string. If the user does not provide a connection string for DbContext, SqlClient
automatically connects to the SQL Express server on the local computer using the namespace of the
user-defined DbContext class as the initial catalog. For SQL Anywhere, this parameter can only contain
a connection string. A database name is ignored and integrated login is used instead.
3. The Microsoft SQL Server SqlClient API maps a column with data annotation attribute TimeStamp to
Microsoft SQL Server data type timestamp/rowversion. There are some misconceptions about Microsoft
SQL Server timestamp/rowversion among developers. The Microsoft SQL Server timestamp/rowversion
data type is different from SQL Anywhere and most other RDBMS:
○ The Microsoft SQL Server timestamp/rowversion is binary(8). It does not support a combined date
and time value. SQL Anywhere supports a data type called timestamp that is equivalent to the
Microsoft SQL Server datetime data type.
○ Microsoft SQL Server timestamp/rowversion values are guaranteed to be unique. SQL Anywhere
timestamp values are not unique.
namespace CodeFirstTest
{
public class Customer
{
[Key()]
public int ID { get; set; }
public string SurName { get; set; }
public string GivenName { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Country { get; set; }
public string PostalCode { get; set; }
public string Phone { get; set; }
public string CompanyName { get; set; }
public virtual ICollection<Contact> Contacts { get; set; }
}
public class Contact
{
[Key()]
public int ID { get; set; }
public string SurName { get; set; }
public string GivenName { get; set; }
public string Title { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Country { get; set; }
public string PostalCode { get; set; }
public string Phone { get; set; }
public string Fax { get; set; }
[ForeignKey( "Customer" )]
public int CustomerID { get; set; }
public virtual Customer Customer { get; set; }
}
[Table( "Departments", Schema = "GROUPO" )]
public class Department
{
[Key()]
public int DepartmentID { get; set; }
public string DepartmentName { get; set; }
public int DepartmentHeadID { get; set; }
}
public class Context : DbContext
{
public Context() : base() { }
public Context( string connStr ) : base( connStr ) { }
public DbSet<Contact> Contacts { get; set; }
public DbSet<Customer> Customers { get; set; }
public DbSet<Department> Departments { get; set; }
protected override void OnModelCreating( DbModelBuilder modelBuilder )
{
modelBuilder.Entity<Contact>().ToTable( "Contacts", "GROUPO" );
modelBuilder.Entity<Customer>().ToTable( "Customers", "GROUPO" );
}
}
}
If you plan to try Microsoft's Code First to a New Database tutorial, then there are some steps that you must do
differently.
Prerequisites
You must have Microsoft Visual Studio and the .NET Framework installed on your computer.
Context
Microsoft's Code First to a New Database tutorial is designed for use with Microsoft's .NET data provider. The
steps below provide guidance for using the SQL Anywhere .NET Data Provider instead. Review these steps
before attempting the tutorial.
Procedure
1. For Entity Framework 6, make sure that the SQL Anywhere .NET Data Provider that supports this version is
installed. Make sure there are no running instances of Microsoft Visual Studio. At a command prompt with
administrator privileges, do the following:
cd %SQLANY17%\Assembly\v4.5
SetupVSPackage.exe /i /v EF6
2. Before creating your Microsoft Visual Studio project, connect to the sample database using Interactive
SQL and run the following SQL script to remove the Blogs, Posts, and Users tables, if you have already
created them in another tutorial.
3. Start Microsoft Visual Studio. The steps that follow were performed successfully using Microsoft Visual
Studio 2013 and Entity Framework 6.1.3.
4. One of the steps requires that you install the latest version of Entity Framework 6 into the project, using the
NuGet Package Manager.
This step creates an App.config file that is not suitable for use with SQL Anywhere.
5. Replace the contents of App.config with the following.
6. Update all occurrences of the data provider version number in App.config. The version number should
match the version of the data provider that you have currently installed.
7. Once you have updated App.config, you must build your project.
Results
You have built an Entity Framework application that uses the Code First approach when the tables do not
already exist in the database.
If you plan to try Microsoft's Code First to an Existing Database tutorial, then there are some steps that you
must do differently.
Prerequisites
You must have Microsoft Visual Studio and the .NET Framework installed on your computer.
Context
Microsoft's Code First to an Existing Database tutorial is designed for use with Microsoft's .NET data provider.
The steps below provide guidance for using the SQL Anywhere .NET Data Provider instead. Review these steps
before attempting the tutorial.
1. For Entity Framework 6, make sure that the SQL Anywhere .NET Data Provider that supports this version is
installed. Make sure there are no running instances of Microsoft Visual Studio. At a command prompt with
administrator privileges, do the following:
cd %SQLANY17%\Assembly\v4.5
SetupVSPackage.exe /i /v EF6
2. Before creating your Microsoft Visual Studio project, connect to the sample database using Interactive
SQL and run the following SQL script to set up the Blogs, Posts, and Users tables. It is very similar to the
one presented in Microsoft's tutorials but uses the GROUPO schema owner instead of dbo.
3. Start Microsoft Visual Studio. The steps that follow were performed successfully using Microsoft Visual
Studio 2013 and Entity Framework 6.1.3.
4. In the Microsoft Visual Studio Server Explorer, use the .NET Framework Data Provider for SQL Anywhere 17
to create a data connection. For ODBC data source, use SQL Anywhere 17 Demo. Fill in the UserID and
Password fields. The Test Connection button is used to ensure that a connection can be made to the
database.
5. Immediately after creating your new project, install the latest version of Entity Framework 6 into the
project, using the NuGet Package Manager.
This step creates an App.config file that is not suitable for use with SQL Anywhere.
6. Replace the contents of App.config with the following.
7. Update all occurrences of the data provider version number in App.config. The version number should
match the version of the data provider that you have currently installed.
8. Once you have updated App.config, you must build your project. This must be done before the Reverse
Engineer Model step.
9. In the Entity Data Model Wizard, select the option to include sensitive data in the connection string.
10. Rather than select all tables, expand GROUPO in Tables and select the Blogs and Posts tables.
11. Continue with the instructions in the remaining steps of the tutorial.
Results
You have built an Entity Framework application that uses the Code First approach for tables that already exist in
the database.
If you plan to try Microsoft's Model First tutorial, then there are some steps that you must do differently.
Prerequisites
You must have Microsoft Visual Studio and the .NET Framework installed on your computer.
Microsoft's Model First tutorial is designed for use with Microsoft's .NET data provider. The steps below provide
guidance for using the SQL Anywhere .NET Data Provider instead. Review these steps before attempting the
tutorial.
Procedure
1. For Entity Framework 6, make sure that the SQL Anywhere .NET Data Provider that supports this version is
installed. Make sure there are no running instances of Microsoft Visual Studio. At a command prompt with
administrator privileges, do the following:
cd %SQLANY17%\Assembly\v4.5
SetupVSPackage.exe /i /v EF6
2. Before creating your Microsoft Visual Studio project, connect to the sample database using Interactive
SQL and run the following SQL script to remove the Blogs, Posts, and Users tables, if you have already
created them in another tutorial.
7. Update all occurrences of the data provider version number in App.config. The version number should
match the version of the data provider that you have currently installed.
8. Once you have updated App.config, you must build your project. This must be done before using the
Entity Framework Designer.
9. Before the Generating the Database step, open the Properties of BloggingModel.edmx [Diagram1]
(this is your design form) and set Database Schema Name to GROUPO and set DDL Generation Template to
SSDLToSA17.tt (VS).
10. In the Generate Database Wizard, select the option to include sensitive data in the connection string.
11. Continue with the instructions in the remaining steps of the tutorial.
Results
You have built an Entity Framework application that uses the Model First approach when the tables do not
already exist in the database.
If you plan to try Microsoft's Database First to an Existing Database tutorial, then there are some steps that you
must do differently.
Prerequisites
You must have Microsoft Visual Studio and the .NET Framework installed on your computer.
Microsoft's Database First to an Existing Database tutorial is designed for use with Microsoft's .NET data
provider. The steps below provide guidance for using the SQL Anywhere .NET Data Provider instead. Review
these steps before attempting the tutorial.
Procedure
1. For Entity Framework 6, make sure that the SQL Anywhere .NET Data Provider that supports this version is
installed. Make sure there are no running instances of Microsoft Visual Studio. At a command prompt with
administrator privileges, do the following:
cd %SQLANY17%\Assembly\v4.5
SetupVSPackage.exe /i /v EF6
2. Before creating your Microsoft Visual Studio project, connect to the sample database using Interactive
SQL and run the following SQL script to set up the Blogs, Posts, and Users tables. It is very similar to the
one presented in Microsoft's tutorials but uses the GROUPO schema owner instead of dbo.
3. Start Microsoft Visual Studio. The steps that follow were performed successfully using Microsoft Visual
Studio 2013 and Entity Framework 6.1.3.
4. In the Microsoft Visual Studio Server Explorer, use the .NET Framework Data Provider for SQL Anywhere 17
to create a data connection. For ODBC data source, use SQL Anywhere 17 Demo. Fill in the UserID and
Password fields. The Test Connection button is used to ensure that a connection can be made to the
database.
5. Immediately after creating your new project, install the latest version of Entity Framework 6 into the
project, using the NuGet Package Manager.
This step creates an App.config file that is not suitable for use with SQL Anywhere.
7. Update all occurrences of the data provider version number in App.config. The version number should
match the version of the data provider that you have currently installed.
8. Once you have updated App.config, you must build your project. This must be done before the Reverse
Engineer Model step.
9. In the Entity Data Model Wizard, select the option to include sensitive data in the connection string.
10. Rather than select all tables, expand GROUPO in Tables and select the Blogs and Posts tables.
11. Continue with the instructions in the remaining steps of the tutorial.
Results
You have built an Entity Framework application that uses the Database First approach for tables that already
exist in the database.
The .NET Data Provider supports tracing using the .NET tracing feature.
By default, tracing is disabled. To enable tracing, specify the trace source in your application's configuration
file. The following is an example of a configuration file:
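The configuration file is not reproduced here. The following sketch shows the general shape of such a file; the trace source name ("SATrace") and file names are assumptions for illustration, while the listener types and switch names match those described below:

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <!-- Source name is assumed; check the provider documentation for the exact name. -->
      <source name="SATrace" switchName="SASourceSwitch"
              switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <!-- Remove the default listener to avoid duplicated output. -->
          <remove name="Default"/>
          <add name="ConsoleListener" type="System.Diagnostics.ConsoleTraceListener"/>
          <add name="EventListener" type="System.Diagnostics.EventLogTraceListener"
               initializeData="MyEventLog"/>
          <!-- Only this listener specifies traceOutputOptions. -->
          <add name="TextListener" type="System.Diagnostics.TextWriterTraceListener"
               initializeData="myTrace.log"
               traceOutputOptions="Timestamp"/>
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="SASourceSwitch" value="All"/>
      <add name="SATraceExceptionSwitch" value="1"/>
      <add name="SATracePoolingSwitch" value="1"/>
    </switches>
  </system.diagnostics>
</configuration>
```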
There are four types of trace listeners referenced in the configuration file shown above.
ConsoleTraceListener
Tracing or debugging output is directed to either the standard output or the standard error stream. When
using Microsoft Visual Studio, output appears in the Output window.
DefaultTraceListener
This listener is automatically added to the Debug.Listeners and Trace.Listeners collections using the name
"Default". Tracing or debugging output is directed to either the standard output or the standard error
stream. When using Microsoft Visual Studio, output appears in the Output window. To avoid duplication of
output produced by the ConsoleTraceListener, this listener is removed.
EventLogTraceListener
Tracing or debugging output is directed to an EventLog identified in the initializeData option. In the
example, the event log is named MyEventLog. Writing to the system event log requires administrator
privileges and is not a recommended method for debugging applications.
TextWriterTraceListener
Tracing or debugging output is directed to a TextWriter which writes the stream to the file identified in the
initializeData option.
The trace configuration information is placed in the application's project folder in the App.config file. If the
file does not exist, it can be created and added to the project using Microsoft Visual Studio by choosing Add
New Item and selecting Application Configuration File.
The traceOutputOptions can be specified for any listener and include the following:
Callstack
Write the call stack, which is represented by the return value of the Environment.StackTrace property.
DateTime
Write the date and time.
LogicalOperationStack
Write the logical operation stack, which is represented by the return value of the
CorrelationManager.LogicalOperationStack property.
None
Do not write any trace output options.
ProcessId
Write the process identity, which is represented by the return value of the Process.Id property.
ThreadId
Write the thread identity, which is represented by the return value of the Thread.ManagedThreadId
property for the current thread.
Timestamp
Write the timestamp, which is represented by the return value of the Stopwatch.GetTimestamp method.
The example configuration file, shown earlier, specifies trace output options for the TextWriterTraceListener
only.
You can limit what is traced by setting specific trace options. By default the numeric-valued trace option
settings are all 0. The trace options that can be set include the following:
SASourceSwitch
SASourceSwitch can take any of the following values. If it is Off then there is no tracing.
Off
Tracing is turned off; no messages are allowed through.
ActivityTracing
Allows the Stop, Start, Suspend, Transfer, and Resume events through.
All
All events and messages are allowed through.
SATraceAllSwitch
All the trace options are enabled. You do not need to set any other options since they are all selected. You
cannot disable individual options if you choose this option. For example, the following will not disable
exception tracing.
SATraceExceptionSwitch
All exceptions are logged. Trace messages have the following form.
SATraceFunctionSwitch
All method scope entry/exits are logged. Trace messages have any of the following forms.
The nnn is an integer representing the scope nesting level 1, 2, 3, ... The optional parameter_names is a list
of parameter names separated by spaces.
SATracePoolingSwitch
All connection pooling is logged. Trace messages have any of the following forms.
<sa.ConnectionPool.AllocateConnection|CPOOL> connectionString='connection_text'
<sa.ConnectionPool.RemoveConnection|CPOOL> connectionString='connection_text'
<sa.ConnectionPool.ReturnConnection|CPOOL> connectionString='connection_text'
<sa.ConnectionPool.ReuseConnection|CPOOL> connectionString='connection_text'
SATracePropertySwitch
All property setting and retrieval is logged. Trace messages have any of the following forms.
<sa.class_name.get_property_name|API> object_id#
<sa.class_name.set_property_name|API> object_id#
In this section:
Related Information
Enable tracing on the TableViewer sample application by creating a configuration file that references the
ConsoleTraceListener and TextWriterTraceListener listeners, removes the default listener, and enables all
switches that would otherwise be set to 0.
Prerequisites
Procedure
Choose Add New Item and select Application Configuration File. Place the following configuration
information in this file:
Results
When the application finishes execution, the trace output is recorded in the bin\Debug\myTrace.log file.
Next Steps
View the trace log in the Output window of Microsoft Visual Studio.
Related Information
When the .NET Data Provider is first loaded by a .NET application (usually when making a database connection
using SAConnection), it unpacks a DLL that contains the provider's unmanaged code. This topic has been
updated for 17.0 PL29 Build 4793.
The file dbdata17.dll is placed by the provider in a subdirectory of the directory identified using the following
strategy.
1. The first directory it attempts to use for unpacking is the one returned by the first of the following:
○ The path identified by the TMP environment variable.
○ The path identified by the TEMP environment variable.
○ The path identified by the USERPROFILE environment variable.
The subdirectory name will take the form of a GUID with a suffix including the version number, a machine
architecture tag when not 32-bit (for example, .x64), and an index number used to guarantee uniqueness. The
following are examples of possible subdirectory names for the 32-bit and 64-bit architectures.
{16AA8FB8-4A98-4757-B7A5-0FF22C0A6E33}_1700_1
{16AA8FB8-4A98-4757-B7A5-0FF22C0A6E33}_1700.x64_1
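The environment-variable fallback described above can be sketched as follows. This is an illustration of the lookup order only, not the provider's actual implementation:

```csharp
// Sketch of the base-directory selection: TMP, then TEMP, then USERPROFILE.
using System;

class UnpackDirDemo
{
    static void Main()
    {
        string baseDir = Environment.GetEnvironmentVariable( "TMP" )
            ?? Environment.GetEnvironmentVariable( "TEMP" )
            ?? Environment.GetEnvironmentVariable( "USERPROFILE" );
        Console.WriteLine( baseDir );
    }
}
```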
The Simple and Table Viewer sample projects introduce you to .NET application programming using the .NET
provider. A tutorial is included that takes you through the steps of building the Simple Viewer .NET database
application using Microsoft Visual Studio.
These sample projects can be used with Microsoft Visual Studio 2005 or later versions. If you use Microsoft
Visual Studio 2008 or later versions, you may have to run the Microsoft Visual Studio Upgrade Wizard.
In this section:
Tutorial: Developing a Simple .NET Database Application with Microsoft Visual Studio [page 111]
This tutorial takes you through the steps of building the Simple Viewer .NET database application using
Visual Studio.
Use the Simple project as an example of how to obtain a result set from the database server using the .NET
Data Provider.
Prerequisites
You must have Microsoft Visual Studio and the Microsoft .NET Framework installed on your computer.
Context
The Simple project is included with the samples. It demonstrates a simple listbox that is filled with the names
from the Employees table.
Procedure
using Sap.Data.SQLAnywhere;
This line is required for Microsoft C# projects. If you are using Microsoft Visual Basic .NET, add the
equivalent Imports line to your source code.
6. Click Debug > Start Without Debugging or press Ctrl+F5 to run the Simple sample.
7. In the SQL Anywhere Sample window, modify the user ID and password credentials for the sample
database and then click Connect.
8. Close the SQL Anywhere Sample window to shut down the application and disconnect from the sample
database. This also shuts down the database server.
Results
You have built and executed a simple Microsoft .NET application that uses the .NET Data Provider to obtain a
result set from a database.
Example
In this section:
Related Information
Using the SQL Anywhere .NET Data Provider in a Microsoft Visual Studio Project [page 55]
By examining the code from the Simple project, some key features of the SQL Anywhere .NET Data Provider
are illustrated.
The Simple project uses the sample database, demo.db, which is located in your samples directory.
The Simple project is described a few lines at a time. Not all code from the sample is included here. To see all
the code, open the project file %SQLANYSAMP17%\SQLAnywhere\ADO.NET\SimpleWin32\Simple.sln.
Declaring Controls
The following code declares a textbox named txtConnectString, a button named btnConnect, and a listbox
named listEmployees.
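The designer-generated declarations are not reproduced here; a sketch of what they might look like:

```csharp
// Illustrative Windows Forms control declarations for the Simple sample.
private System.Windows.Forms.TextBox txtConnectString;
private System.Windows.Forms.Button btnConnect;
private System.Windows.Forms.ListBox listEmployees;
```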
The SAConnection object uses the connection string to connect to the sample database when the Open
method is called.
conn.Open();
A SQL statement is executed using an SACommand object. The following code declares and creates a
command object using the SACommand constructor. This constructor accepts a string representing the query
to be executed, along with the SAConnection object that represents the connection that the query is executed
on.
The results of the query are obtained using an SADataReader object. The following code declares and creates
an SADataReader object using the ExecuteReader method. This method is a member of the
SACommand object, cmd, that was declared previously. ExecuteReader sends the command text to the
connection for execution and builds an SADataReader.
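The command and reader creation lines are not reproduced here; a plausible sketch, with the query text as an assumption, is:

```csharp
// Illustrative: create the command on the open connection, then
// execute it to obtain a data reader (the query text is assumed).
SACommand cmd = new SACommand( "SELECT Surname FROM Employees", conn );
SADataReader reader = cmd.ExecuteReader();
```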
The following code loops through the rows held in the SADataReader object and adds them to the listbox
control. Each time the Read method is called, the data reader gets another row back from the result set. A new
item is added to the listbox for each row that is read. The data reader uses the GetString method with an
argument of 0 to get the first column from the result set row.
listEmployees.BeginUpdate();
while( reader.Read() )
{
listEmployees.Items.Add(reader.GetString(0));
}
listEmployees.EndUpdate();
Finishing Off
The following code at the end of the method closes the data reader and connection objects.
reader.Close();
conn.Close();
Error Handling
Any errors that occur during execution and that originate with .NET Data Provider objects are handled by
displaying them in a window. The following code catches the error and displays its message:
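A minimal sketch of such a catch block, assuming the surrounding code runs inside a try block (the caption text is illustrative):

```csharp
// Illustrative error handler: show the first error's source, message,
// and native error code in a message box.
catch( SAException ex )
{
    MessageBox.Show( ex.Errors[0].Source + " : " + ex.Errors[0].Message +
        " (" + ex.Errors[0].NativeError.ToString() + ")", "Error" );
}
```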
Use the TableViewer project as an example of how to connect to a database, execute SQL statements, and
display the results using a DataGrid object using the .NET Data Provider.
Prerequisites
You must have Microsoft Visual Studio and the Microsoft .NET Framework installed on your computer.
Context
The TableViewer project is included with the samples. The Table Viewer project is more complex than the
Simple project. You can use it to connect to a database, select a table, and execute SQL statements on the
database.
Procedure
using Sap.Data.SQLAnywhere;
This line is required for Microsoft C# projects. If you are using Microsoft Visual Basic, add the
equivalent Imports line to your source code.
The application retrieves the data from the Employees table in the sample database and puts the query
results in the Results datagrid, as follows:
You can also execute other SQL statements from this application: type a SQL statement in the SQL
Statement pane, and then click Execute.
9. Close the Table Viewer window to shut down the application and disconnect from the sample database.
This also shuts down the database server.
Results
You have built and executed a Microsoft .NET application that uses the .NET Data Provider to connect to a
database, execute SQL statements, and display the results using a DataGrid object.
In this section:
Related Information
Using the SQL Anywhere .NET Data Provider in a Microsoft Visual Studio Project [page 55]
By examining the code from the Table Viewer project, some key features of the SQL Anywhere .NET Data
Provider are illustrated.
The Table Viewer project uses the sample database, demo.db, which is located in the samples directory.
The Table Viewer project is described a few lines at a time. Not all code from the sample is included here. To see
all the code, open the project file %SQLANYSAMP17%\SQLAnywhere\ADO.NET\TableViewer\TableViewer.sln.
Declaring Controls
The following code declares two Labels named label1 and label2, a TextBox named txtConnectString, a
button named btnConnect, a TextBox named txtSQLStatement, a button named btnExecute, and a DataGrid
named dgResults.
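The designer-generated declarations are not reproduced here; a sketch of what they might look like:

```csharp
// Illustrative Windows Forms control declarations for the Table Viewer sample.
private System.Windows.Forms.Label label1;
private System.Windows.Forms.Label label2;
private System.Windows.Forms.TextBox txtConnectString;
private System.Windows.Forms.Button btnConnect;
private System.Windows.Forms.TextBox txtSQLStatement;
private System.Windows.Forms.Button btnExecute;
private System.Windows.Forms.DataGrid dgResults;
```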
The SAConnection type is used to declare an uninitialized connection object. The SAConnection object is used
to represent a unique connection to a data source.
The Text property of the txtConnectString object has a default value of "Data Source=SQL Anywhere 17
Demo;Password=sql". This value can be overridden by the application user by typing a new value into the
txtConnectString text box. You can see how this default value is set by opening up the region in
TableViewer.cs labeled Windows Form Designer Generated Code. In this region, you find the following line of
code.
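That designer-generated line would look something like the following sketch:

```csharp
// Designer-generated default for the connection string text box.
this.txtConnectString.Text = "Data Source=SQL Anywhere 17 Demo;Password=sql";
```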
Later, the SAConnection object uses the connection string to connect to a database. The following code
creates a new connection object with the connection string using the SAConnection constructor. It then
establishes the connection by using the Open method.
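The connection code corresponds to a sketch like the following:

```csharp
// Create the connection from the text box contents, then open it.
_conn = new SAConnection( txtConnectString.Text );
_conn.Open();
```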
Defining a Query
The Text property of the txtSQLStatement object has a default value of "SELECT * FROM Employees". This
value can be overridden by the application user by typing a new value into the txtSQLStatement text box.
The SQL statement is executed using an SACommand object. The following code declares and creates a
command object using the SACommand constructor. This constructor accepts a string representing the query
to be executed, along with the SAConnection object that represents the connection that the query is executed
on.
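A sketch of the command creation, using the statement text box and the open connection:

```csharp
// Illustrative: build the command from the user-supplied SQL text.
SACommand cmd = new SACommand( txtSQLStatement.Text, _conn );
```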
The results of the query are obtained using an SADataReader object. The following code declares and creates
an SADataReader object using the ExecuteReader method of the SACommand object:
SADataReader dr = cmd.ExecuteReader();
The following code connects the SADataReader object to the DataGrid object, which causes the result columns
to appear on the screen. The SADataReader object is then closed.
dgResults.DataSource = dr;
dr.Close();
Error Handling
If there is an error when the application attempts to connect to the database or when it populates the tables
combo box, the following code catches the error and displays its message:
try
{
_conn = new SAConnection( txtConnectString.Text );
_conn.Open();
SACommand cmd = new SACommand( "SELECT table_name FROM sys.systable " +
"WHERE creator = 101 AND table_type != 'TEXT'", _conn );
SADataReader dr = cmd.ExecuteReader();
comboBoxTables.Items.Clear();
while ( dr.Read() )
{
comboBoxTables.Items.Add( dr.GetString( 0 ) );
}
dr.Close();
}
catch( SAException ex )
{
MessageBox.Show( ex.Errors[0].Source + " : " + ex.Errors[0].Message + " (" +
ex.Errors[0].NativeError.ToString() + ")",
"Failed to connect" );
}
This tutorial takes you through the steps of building the Simple Viewer .NET database application using Visual
Studio.
Prerequisites
You must have the SELECT ANY TABLE system privilege. To insert new rows, you must be the owner of the
table, or have the INSERT ANY TABLE privilege, or have INSERT privilege on the table. To change the content of
existing rows, you must be the owner of the table being updated, or have UPDATE privilege on the columns
being updated.
Use Microsoft Visual Studio, the Server Explorer, and the SQL Anywhere .NET Data Provider to create an
application that accesses one of the tables in the sample database, allowing you to examine rows and perform
updates.
Prerequisites
You must have Microsoft Visual Studio and the .NET Framework installed on your computer.
You must have the roles and privileges listed at the beginning of this tutorial.
Context
This tutorial is based on Microsoft Visual Studio and the .NET Framework. The complete application can be
examined by opening the project %SQLANYSAMP17%.
Procedure
Note
When using the Microsoft Visual Studio Add Connection wizard on 64-bit Windows, only the 64-bit
System Data Source Names (DSN) are included with the User Data Source Names. Any 32-bit
System Data Source Names are not displayed. In the Microsoft Visual Studio 32-bit design
environment, the Test Connection button will attempt to establish a connection using the 32-bit
equivalent of the 64-bit System DSN. If the 32-bit System DSN does not exist, the test will fail.
d. Under Login information, in the Password field, enter the sample database password (sql).
e. Click Test Connection to verify that you can connect to the sample database.
f. Click OK.
A new connection named SQL Anywhere.demo17 appears in the Server Explorer window.
6. Expand the SQL Anywhere.demo17 connection in the Server Explorer window until you see the table names.
This shows the rows and columns of the Products table in a window.
b. Close the table data window.
7. Click Data > Add New Data Source. For later versions of Microsoft Visual Studio, this is Project >
Add New Data Source.
8. In the Data Source Configuration Wizard, do the following:
a. On the Data Source Type page, click Database, then click Next.
b. (Microsoft Visual Studio 2010 or later) On the Database Model page, click Dataset, then click Next.
c. On the Data Connection page, click SQL Anywhere.demo17, click Yes, include sensitive data in the
connection string, then click Next.
d. On the Save the Connection String page, make sure that Yes, save the connection as is chosen and click
Next.
e. On the Choose Your Database Objects page, click Tables, then click Finish.
9. Click Data > Show Data Sources. For later versions of Microsoft Visual Studio, this might be View >
Other Windows > Data Sources.
A dataset control and several labeled text fields appear on the form.
10. On the form, click the picture box next to Photo.
a. Change the shape of the box to a square.
b. Click the right-arrow in the upper-right corner of the picture box.
The application connects to the sample database and displays the first row of the Products table in the
text boxes and picture box.
Results
You have now created a simple, yet powerful, Microsoft .NET application using Microsoft Visual Studio, the
Server Explorer, and the SQL Anywhere .NET Data Provider.
Next Steps
Proceed to the next lesson, where you add a datagrid control to the form developed in this lesson.
Task overview: Tutorial: Developing a Simple .NET Database Application with Microsoft Visual Studio [page 111]
Prerequisites
You must have the roles and privileges listed at the beginning of this tutorial.
Context
This control updates automatically as you navigate through the result set.
Procedure
3. Right-click an empty area in the DataSet Designer window and click Add > TableAdapter.
4. In the TableAdapter Configuration Wizard:
a. On the Choose Your Data Connection page, click Next.
b. On the Choose A Command Type page, click Use SQL statements, then click Next.
c. On the Enter a SQL Statement page, click Query Builder.
d. On the Add Table window, click the Views tab, then click ViewSalesOrders, and then click Add.
e. Click Close to close the Add Table window.
5. Expand the Query Builder window so that all sections of the window are visible.
a. Expand the ViewSalesOrders window so that all the checkboxes are visible.
b. Click Region.
c. Click Quantity.
d. Click ProductID.
e. In the grid below the ViewSalesOrders window, clear the checkbox under Output for the ProductID
column.
f. For the ProductID column, type a question mark (?) in the Filter cell and then click anywhere else on
the form. This generates a WHERE clause for ProductID.
A SQL query has been built that looks like the following:
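The generated statement itself does not appear here. Based on the columns chosen in the steps above, the query presumably resembles the following sketch (the GROUPO owner prefix is an assumption based on the sample database):

```sql
SELECT Region, Quantity
FROM GROUPO.ViewSalesOrders
WHERE (ProductID = ?)
```

The question mark entered in the Filter cell becomes the parameter placeholder in the WHERE clause.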
7. Click OK.
8. Click Finish.
A new TableAdapter called ViewSalesOrders has been added to the DataSet Designer window.
9. Click the form design tab (Form1.cs [Design]).
○ Stretch the form to the right to make room for a new control.
10. Expand ViewSalesOrders in the Data Sources window.
a. Click ViewSalesOrders and click DataGridView from the dropdown list.
b. Click ViewSalesOrders and drag it to your form (Form1).
The datagrid view displays a summary of sales by region for the product ID entered.
You can also use the other control on the form to move through the rows of the result set.
It would be ideal, however, if both controls could stay synchronized with each other. The next few steps
show how to do this.
12. Shut down the application and then save your project.
13. Delete the Fill strip on the form since you do not need it.
○ On the design form (Form1), right-click the Fill strip to the right of the word Fill, then click Delete.
16. Shut down the application and then save your project.
Results
You have now added a control that updates automatically as you navigate through the result set.
In this tutorial, you saw how the powerful combination of Microsoft Visual Studio, the Server Explorer, and the
SQL Anywhere .NET Data Provider can be used to create database applications.
Task overview: Tutorial: Developing a Simple .NET Database Application with Microsoft Visual Studio [page 111]
The SQL Anywhere ASP.NET providers replace the standard ASP.NET providers for Microsoft SQL Server, and
allow you to run your website using SQL Anywhere.
Membership Provider
The membership provider provides authentication and authorization services. Use the membership
provider to create new users and passwords, and validate the identity of users.
Roles Provider
The roles provider provides methods for creating roles, adding users to roles, and deleting roles. Use the
roles provider to assign users to groups and manage permissions.
Profiles Provider
The profiles provider provides methods for reading, storing, and retrieving user information. Use the
profiles provider to save user preferences.
Web Parts Personalization Provider
The web parts personalization provider provides methods for loading and storing the personalized content
and layout of web pages. Use the web parts personalization provider to allow users to create personalized
views of your website.
Health Monitoring Provider
The health monitoring provider provides methods for monitoring the status of deployed web applications.
Use the health monitoring provider to monitor application performance, identify failing applications or
systems, and log and review significant events.
The database server schema used by the SQL Anywhere ASP.NET providers is identical to the schema used by
the standard ASP.NET providers. The methodology used to manipulate and store data is identical.
When you have finished setting up the SQL Anywhere ASP.NET providers, you can use the Microsoft Visual
Studio ASP.NET Web Site Administration Tool to create and manage users and roles. You can also use the
Microsoft Visual Studio Login, LoginView, and PasswordRecovery tools to add security to your web site. Use the
static wrapper classes to access more advanced provider functions, or to make your own login controls.
In this section:
To implement the SQL Anywhere ASP.NET providers, you can add the required schema to a new or existing
database.
To add the schema to a database, run SASetupAspNet.exe. When executed, SASetupAspNet.exe connects
to a database and creates tables and stored procedures required by the SQL Anywhere ASP.NET providers. All
SQL Anywhere ASP.NET provider resources are prefixed with aspnet_. To minimize naming conflicts with
existing database resources you can install provider database resources under any database user.
You can use a wizard or the command line to run SASetupAspNet.exe. To access the wizard, run the
application, or execute a command line statement without arguments. When using the command line to access
the SASetupAspNet.exe, use the question mark (-?) argument to display detailed help for configuring the
database.
Connect with a user having the appropriate privileges for the database objects they are creating. The following
minimum set of privileges is required.
The wizard and command line (-U <user>) allow you to specify the owner of the new resources (tables and
procedures). By default, the owner is DBA. You can specify a different owner using the wizard. When you
specify connection strings for the SQL Anywhere ASP.NET providers, the UserID (UID) must match the user
you specify here. You do not need to grant the specified user any extra privileges; the user owns the resources
and has full privileges on the tables and stored procedures that are created for this user by the wizard.
You can add or remove specific features. Common components are installed automatically. Selecting Remove
for an uninstalled feature has no effect; selecting Add for a feature already installed reinstalls the feature. By
default, the data in tables associated with the selected feature is preserved. If a user significantly changes the
schema of a table, it might not be possible to automatically preserve the data stored in it. If a clean reinstall is
required, data preservation can be turned off.
Membership and roles providers should be installed together. The effectiveness of the Microsoft Visual Studio
ASP.NET Web Site Administration Tool is reduced when the membership provider is not installed with the roles
provider.
Further Reading
A white paper called Tutorial: Creating an ASP.NET Web Page Using SQL Anywhere is available to demonstrate
how to use SQL Anywhere and Microsoft Visual Studio 2010 to build a database-driven ASP.NET web site.
Related Information
There are two methods for registering the connection string: using an ODBC data source or using a full
connection string.
● You can register an ODBC data source using the ODBC Data Source Administrator, and reference it by
name.
● You can specify a full connection string. For example:
connectionString="SERVER=MyServer;DBN=MyDatabase;UID=DBA;PWD=passwd"
When you add the <connectionStrings> element to the web.config file, the connection string and its
provider can be referenced by the application. Updates can be implemented in a single location.
<connectionStrings>
<add name="MyConnectionString"
connectionString="DSN=MyDataSource"
providerName="Sap.Data.SQLAnywhere"/>
</connectionStrings>
Your web application must be configured to use the SQL Anywhere ASP.NET providers and not the default
providers.
The provider database can store data for multiple applications. For each application, the applicationName
attribute must be the same for each SQL Anywhere ASP.NET provider. If you do not specify an
applicationName value, an identical name is assigned to each provider in the provider database.
To reference a previously registered connection string, replace the connectionString attribute with the
connectionStringName attribute.
<membership defaultProvider="SAMembershipProvider">
<providers>
<add name="SAMembershipProvider"
type="iAnywhere.Web.Security.SAMembershipProvider"
connectionStringName="MyConnectionString"
applicationName="MyApplication" />
</providers>
</membership>
<profile defaultProvider="SAProfileProvider">
<providers>
<add name="SAProfileProvider"
type="iAnywhere.Web.Security.SAProfileProvider"
connectionStringName="MyConnectionString"
applicationName="MyApplication"
commandTimeout="30" />
</providers>
<properties>
<add name="UserString" type="string"
serializeAs="Xml" />
<add name="UserObject" type="object"
serializeAs="Binary" />
</properties>
</profile>
<webParts>
<personalization defaultProvider="SAPersonalizationProvider">
<providers>
<add name="SAPersonalizationProvider"
type="iAnywhere.Web.Security.SAPersonalizationProvider"
connectionStringName="MyConnectionString"
applicationName="MyApplication" />
</providers>
</personalization>
</webParts>
<healthMonitoring enabled="true">
...
<providers>
<add name="SAWebEventProvider"
type="iAnywhere.Web.Security.SAWebEventProvider"
connectionStringName="MyConnectionString"
commandTimeout="30"
bufferMode="Notification"
maxEventDetailsLength="Infinite" />
</providers>
...
</healthMonitoring>
Related Information
type iAnywhere.Web.Security.SAMembershipProvider
SARoleProvider stores role information in the aspnet_Roles table of the provider database. The namespace
associated with SARoleProvider is iAnywhere.Web.Security. Each record in the Roles table corresponds to one
role.
type iAnywhere.Web.Security.SARoleProvider
SAProfileProvider stores profile data in the aspnet_Profile table of the provider database. The namespace
associated with SAProfileProvider is iAnywhere.Web.Security. Each record in the Profile table corresponds to
one user's persisted profile properties.
type iAnywhere.Web.Security.SAProfileProvider
SAPersonalizationProvider preserves personalized user content in the aspnet_Paths table of the provider
database. The namespace associated with SAPersonalizationProvider is iAnywhere.Web.Security.
type iAnywhere.Web.Security.SAPersonalizationProvider
SAWebEventProvider logs web events in the aspnet_WebEvent_Events table of the provider database. The
namespace associated with SAWebEventProvider is iAnywhere.Web.Security. Each record in the
WebEvents_Events table corresponds to one web event.
type iAnywhere.Web.Security.SAWebEventProvider
maxEventDetailsLength The maximum length of the details string for each event, or Infinite.
Related Information
Use SQL Anywhere with .NET, including the API for the SQL Anywhere .NET Data Provider.
The SQL Anywhere .NET API reference is available in the SQL Anywhere .NET API Reference at
https://help.sap.com/viewer/e2cc7ae777e347529ca8d8db9258fe3c/LATEST/en-US.
An OLE DB provider for OLE DB and ADO applications is included with the software.
OLE DB is a set of Component Object Model (COM) interfaces developed by Microsoft, which provide
applications with uniform access to data stored in diverse information sources and that are used to implement
additional database services. These interfaces support the amount of DBMS functionality appropriate to the
data store, enabling it to share its data.
ADO is an object model for programmatically accessing, editing, and updating a wide variety of data sources
through OLE DB system interfaces. ADO is also developed by Microsoft. Most developers using the OLE DB
programming interface do so by writing to the ADO API rather than directly to the OLE DB API.
Refer to the Microsoft Developer Network for documentation on OLE DB and ADO programming. For product-
specific information about OLE DB and ADO development, use this document.
In this section:
Related Information
1.4.1 OLE DB
OLE DB is a data access model from Microsoft. It uses the Component Object Model (COM) interfaces and,
unlike ODBC, OLE DB does not assume that the data source uses a SQL query processor.
An OLE DB provider, named SAOLEDB, is included with the software. This provider is available for supported
Windows platforms.
● Some features, such as updating through a cursor, are not available using the OLE DB/ODBC bridge.
● If you use the OLE DB provider, ODBC is not required in your deployment.
● MSDASQL allows OLE DB clients to work with any ODBC driver, but does not guarantee that you can use
the full range of functionality of each ODBC driver. Using SAOLEDB, you can get full access to database
software features from OLE DB programming environments.
In this section:
The OLE DB provider is designed to work with Microsoft Data Access Components (MDAC) 2.8 and later
versions.
The OLE DB driver can be used as a resource manager in a distributed transaction environment.
Related Information
ADO (Microsoft ActiveX Data Objects) is a data access object model exposed through an Automation interface,
which allows client applications to discover the methods and properties of objects at runtime without any prior
knowledge of the object.
Automation allows scripting languages like Microsoft Visual Basic to use a standard data access object model.
ADO uses OLE DB to provide data access.
This section uses Microsoft Visual Basic and ADO to demonstrate basic tasks such as connecting to a
database, executing SQL queries, and retrieving result sets. It is not a complete guide to programming using ADO.
For information about programming in ADO, see your development tool documentation.
In this section:
How to Obtain Result Sets Using the Recordset Object [page 134]
The ADO Recordset object represents the result set of a query. You can use it to view data from a
database.
Row Updates Through a Cursor Using the Recordset Object [page 136]
The OLE DB provider lets you update a result set through a cursor. This capability is not available
through the MSDASQL provider.
The following Microsoft Visual Basic routine connects to a database using the Connection object.
Sample Code
You can try this routine by placing a command button named cmdTestConnection on a form, and pasting the
routine into its Click event. Run the program and click the button to connect and then disconnect.
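The code listing for this routine is missing at this point. A minimal sketch of such a routine, assuming the SQL Anywhere 17 Demo data source and the cmdTestConnection button described above, might look like this:

```vb
' Sketch: connect to and disconnect from the sample database.
Private Sub cmdTestConnection_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles cmdTestConnection.Click
    Dim myConn As New ADODB.Connection
    On Error GoTo ErrorHandler
    ' Use the SQL Anywhere OLE DB provider and the sample data source
    myConn.Provider = "SAOLEDB"
    myConn.ConnectionString = "Data Source=SQL Anywhere 17 Demo"
    myConn.Open()
    MsgBox("Connection succeeded")
    myConn.Close()
    Exit Sub
ErrorHandler:
    MsgBox(ErrorToString(Err.Number))
    Exit Sub
End Sub
```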
Notes
Related Information
The following routine sends a simple SQL statement to the database using the Command object.
Sample Code
You can try this routine by placing a command button named cmdUpdate on a form, and pasting the routine
into its Click event. Run the program and click the button to connect, display a message in the database server
messages window, and then disconnect.
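The code listing for this routine is missing at this point. A sketch matching the Notes below, assuming the sample database's Customers table (the UPDATE statement is illustrative only), might look like this:

```vb
' Sketch: execute an UPDATE statement and report the number of rows affected.
Private Sub cmdUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles cmdUpdate.Click
    Dim myConn As New ADODB.Connection
    Dim myCmd As New ADODB.Command
    Dim rowsAffected As Object
    On Error GoTo ErrorHandler
    myConn.Provider = "SAOLEDB"
    myConn.ConnectionString = "Data Source=SQL Anywhere 17 Demo"
    myConn.Open()
    ' Set the statement text and the active connection, then execute
    myCmd.CommandText = _
        "UPDATE Customers SET GivenName = 'Liz' WHERE ID = 102"
    myCmd.ActiveConnection = myConn
    myCmd.Execute(rowsAffected)
    MsgBox(CStr(rowsAffected) & " rows affected")
    myConn.Close()
    Exit Sub
ErrorHandler:
    MsgBox(ErrorToString(Err.Number))
    Exit Sub
End Sub
```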
Notes
After establishing a connection, the example code creates a Command object, sets its CommandText property
to an update statement, and sets its ActiveConnection property to the current connection. It then executes the
update statement and displays the number of rows affected by the update in a window.
In this example, the update is sent to the database and committed when it is executed.
The ADO Recordset object represents the result set of a query. You can use it to view data from a database.
Sample Code
You can try this routine by placing a command button named cmdQuery on a form and pasting the routine into
its Click event. Run the program and click the button to connect, display a message in the database server
messages window, execute a query and display the first few rows in windows, and then disconnect.
Private Sub cmdQuery_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles cmdQuery.Click
    Dim myConn As New ADODB.Connection
    Dim myRS As ADODB.Recordset
    Dim i As Integer
    On Error GoTo ErrorHandler
    ' Connect to the sample database
    myConn.Provider = "SAOLEDB"
    myConn.ConnectionString = "Data Source=SQL Anywhere 17 Demo"
    myConn.Open()
    'Execute a query
    myRS = New ADODB.Recordset
    myRS.CacheSize = 50
    myRS.let_Source("SELECT * FROM Customers")
    myRS.let_ActiveConnection(myConn)
    myRS.CursorType = ADODB.CursorTypeEnum.adOpenKeyset
    myRS.LockType = ADODB.LockTypeEnum.adLockOptimistic
    myRS.Open()
    ' Display the CompanyName value for the first few rows
    For i = 1 To 5
        MsgBox(myRS.Fields("CompanyName").Value)
        myRS.MoveNext()
    Next
    myRS.Close()
    myConn.Close()
    Exit Sub
ErrorHandler:
    MsgBox(ErrorToString(Err.Number))
    Exit Sub
End Sub
Notes
The Recordset object in this example holds the results from a query on the Customers table. The For loop
scrolls through the first several rows and displays the CompanyName value for each row.
Related Information
The ADO Recordset represents a cursor. You can choose the type of cursor by declaring a CursorType property
of the Recordset object before you open the Recordset. The choice of cursor type controls the actions you can
take on the Recordset and has performance implications.
Cursor Types
The available cursor types, the corresponding cursor type constants, and the SQL Anywhere types they are
equivalent to, are as follows:
Sample Code
The following code sets the cursor type for an ADO Recordset object:
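The code listing is absent at this point. Setting the cursor type before opening the Recordset might look like the following sketch (adOpenDynamic is an arbitrary choice of cursor type):

```vb
' Choose the cursor type before calling myRS.Open()
myRS.CursorType = ADODB.CursorTypeEnum.adOpenDynamic
```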
Related Information
The OLE DB provider lets you update a result set through a cursor. This capability is not available through the
MSDASQL provider.
Notes
If you use the adLockBatchOptimistic setting on the Recordset, the myRS.Update method does not make any
changes to the database itself. Instead, it updates a local copy of the Recordset.
The myRS.UpdateBatch method makes the update to the database server, but does not commit it, because it
is inside a transaction. If an UpdateBatch method was invoked outside a transaction, the change would be
committed.
The myConn.CommitTrans method commits the changes. The Recordset object has been closed by this time,
so there is no issue of whether the local copy of the data is changed or not.
By default, any change you make to the database using ADO is committed when it is executed. This includes
explicit updates, and the UpdateBatch method on a Recordset.
However, the previous section illustrated that you can use the BeginTrans and RollbackTrans or CommitTrans
methods on the Connection object to use transactions.
The transaction isolation level is set as a property of the Connection object. The IsolationLevel property can
take on one of the following values:
Browse (adXactBrowse): corresponds to isolation level 0.
Isolated (adXactIsolated): corresponds to isolation level 3.
Serializable (adXactSerializable): corresponds to isolation level 3.
OLE DB connection parameters are defined by Microsoft. The OLE DB provider supports a subset of these
connection parameters.
Below are the OLE DB connection parameters that are supported by the provider. In some cases, OLE DB
connection parameters are identical to (for example, Password) or resemble (for example, User ID) database
server connection parameters. Note the use of spaces in many of these connection parameters.
Provider
This connection parameter specifies the OLE DB provider to use. For example: Provider=SAOLEDB.
User ID
This connection parameter maps directly to the UserID (UID) connection parameter. For example: User
ID=DBA.
Password
This connection parameter maps directly to the Password (PWD) connection parameter. For example:
Password=sql.
Data Source
This connection parameter maps directly to the DataSourceName (DSN) connection parameter. For
example: Data Source=SQL Anywhere 17 Demo.
Initial Catalog
This connection parameter maps directly to the DatabaseName (DBN) connection parameter. For
example: Initial Catalog=demo.
Location
This connection parameter maps directly to the Host connection parameter. The parameter value has the
same form as the Host parameter value. For example: Location=localhost:4444.
Downlevel
This connection parameter maps directly to the Delphi connection parameter. When set to TRUE, this
connection parameter forces the provider to map the SQL TIME data type to the less precise
DBTYPE_DBTIME OLE DB data type (Downlevel=true). By default, or when set to FALSE, the provider maps
the SQL TIME data type to DBTYPE_DBTIME2 (ordinal 145), which includes fractional seconds
(Downlevel=false). Use the FALSE setting when migrating tables between relational database
management systems.
When using the ODBC Data Source Administrator, check the Delphi applications box. Use this option for
SAP PowerBuilder applications.
Extended Properties (or Location)
This connection parameter is used by OLE DB to pass in all the database server-specific connection
parameters. For example: Extended
Properties="UserID=DBA;DBKEY=V3moj3952B;DBF=demo.db".
ADO uses this connection parameter to collect and pass in all the connection parameters that it does not
recognize.
Some Microsoft connection windows have a field called Prov String or Provider String. The contents of this
field are passed as the value to Extended Properties.
OLE DB Services
This connection parameter is not directly handled by the OLE DB provider. It controls connection pooling in
ADO.
Prompt
This connection parameter governs how a connection attempt handles errors. The possible prompt values
are 1, 2, 3, or 4. The meanings are DBPROMPT_PROMPT (1), DBPROMPT_COMPLETE (2),
DBPROMPT_COMPLETEREQUIRED (3), and DBPROMPT_NOPROMPT (4).
The default prompt value is 4 which means the provider does not present a connect window. Setting the
prompt value to 1 causes a connect window to always appear. Setting the prompt value to 2 causes a
connect window to appear if the initial connection attempt fails. Setting the prompt value to 3 causes a
connect window to appear if the initial connection attempt fails but the provider disables the controls for
any information not required to connect to the data source.
Window Handle
The application can pass the handle of the parent window, if applicable, or a null pointer if either the
window handle is not applicable or the provider does not present any windows. The window handle value is
typically 0 (NULL).
Other OLE DB connection parameters can be specified but they are ignored by the OLE DB provider.
When the OLE DB provider is invoked, it gets the property values for the OLE DB connection parameters. Here
is a typical set of property values obtained from Microsoft's RowsetViewer application.
User ID 'DBA'
Password 'sql'
Location 'localhost:4444'
Initial Catalog 'demo'
The connection string that the provider constructs from this set of parameter values is:
'DSN=testds;HOST=localhost:4444;DBN=demo;UID=DBA;PWD=sql;appinfo=api=oledb'
The OLE DB provider uses the connection string, Window Handle, and Prompt values as parameters to the
database server connection call that it makes.
connection.Open "Provider=SAOLEDB;Location=localhost:4444;UserID=DBA;Pwd=sql"
ADO parses the connection string and passes all of the unrecognized connection parameters in Extended
Properties. When the OLE DB provider is invoked, it gets the property values for the OLE DB connection
parameters. Here is the set of property values obtained from the ADO application that used the connection
string shown above.
User ID ''
Password ''
Location 'localhost:4444'
Initial Catalog ''
Data Source ''
Extended Properties 'UserID=DBA;Pwd=sql'
Prompt 4
Window Handle 0
The connection string that the provider constructs from this set of parameter values is:
'HOST=localhost:4444;UserID=DBA;Pwd=sql'
The provider uses the connection string, Window Handle, and Prompt values as parameters to the database
server connection call that it makes.
Provider=SAOLEDB;Location=192.168.2.2:2638;Downlevel=TRUE;ProviderString='DatabaseName=demo;ServerName=SQLA'
Provider=SAOLEDB;Downlevel=TRUE;ProviderString='DatabaseName=demo;ServerName=SQLA;Host=192.168.2.2:2638'
Provider=SAOLEDB;Downlevel=TRUE;ProviderString='DatabaseName=demo;ServerName=SQLA;Location=192.168.2.2:2638'
OLE DB uses the value of the Location property to set the value of the Host connection parameter, but there is
no such thing as a Location connection parameter.
Related Information
The .NET Framework Data Provider for OLE DB automatically pools connections using OLE DB session pooling.
When the application closes the connection, it is not actually closed. Instead, the connection is held for a
period of time. When your application re-opens a connection, ADO/OLE DB recognizes that the application is
using an identical connection string and reuses the open connection. For example, if the application does an
Open/Execute/Close 100 times, there is only 1 actual open and 1 actual close. The final close occurs after
about 1 minute of idle time.
If a connection is terminated by external means (such as a forced disconnect using an administrative tool such
as SQL Central), ADO/OLE DB does not know that this has occurred until the next interaction with the server.
Caution should be exercised before resorting to forcible disconnects.
The flag that controls connection pooling is DBPROPVAL_OS_RESOURCEPOOLING (1). This flag can be turned
off using a connection parameter in the connection string.
If you specify OLE DB Services=-2 in your connection string, then connection pooling is disabled. Here is a
sample connection string:
Provider=SAOLEDB;OLE DB Services=-2;...
If you specify OLE DB Services=-4 in your connection string, then connection pooling and transaction
enlistment are disabled. Here is a sample connection string:
Provider=SAOLEDB;OLE DB Services=-4;...
If you disable connection pooling, there is a performance penalty if your application frequently opens and
closes connections using the same connection string.
Related Information
The OLE DB provider supports additional isolation levels. In the list below, the first four are standard OLE DB
isolation levels. The last three are supported by the database server.
Related Information
A Microsoft Linked Server can be created that uses the OLE DB provider to obtain access to a database. SQL
queries can be issued using either the Microsoft four-part table referencing syntax or the Microsoft
OPENQUERY SQL function.
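The two query forms referred to below do not appear at this point. Reconstructed from the names given in this section, they presumably resemble the following sketches:

```sql
-- Four-part table referencing syntax
SELECT * FROM SADATABASE.demo.GROUPO.Customers;

-- OPENQUERY syntax
SELECT * FROM OPENQUERY( SADATABASE, 'SELECT * FROM Customers' );
```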
In this example, SADATABASE is the name of the Linked Server, demo is the catalog or database name,
GROUPO is the table owner in the database, and Customers is the table name in the database.
In the OPENQUERY syntax, the second SELECT statement ('SELECT * FROM Customers') is passed to the
database server for execution.
For complex queries, OPENQUERY may be the better choice since the entire query is evaluated on the
database server. With the four-part syntax, SQL Server may retrieve the contents of all tables referenced by the
query before it can evaluate it (for example, queries with WHERE, JOIN, nested queries, and so on). For queries
involving very large tables, processing time may be very poor when using four-part syntax. In the following four-
part query example, SQL Server passes a simple SELECT on the entire table (no WHERE clause) to the
database server via the OLE DB provider and then evaluates the WHERE condition itself.
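Such a four-part query, sketched with a hypothetical WHERE condition on the sample Customers table:

```sql
-- SQL Server fetches the whole table, then applies the WHERE clause itself
SELECT * FROM SADATABASE.demo.GROUPO.Customers WHERE ID = 101;
```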
Instead of returning one row in the result set to SQL Server, all rows are returned and then this result set is
reduced to one row by SQL Server. The following example produces an identical result but only one row is
returned to SQL Server.
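The equivalent OPENQUERY form, which sends the WHERE clause to the database server so that only the matching row comes back, might look like:

```sql
SELECT * FROM OPENQUERY( SADATABASE,
    'SELECT * FROM Customers WHERE ID = 101' );
```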
You can set up a Linked Server that uses the OLE DB provider using a Microsoft SQL Server interactive
application or a SQL Server script.
Use a Microsoft SQL Server interactive application to create a Microsoft Linked Server that uses the OLE DB
provider to obtain access to a database.
Prerequisites
Before setting up a Linked Server, there are a few things to consider when using Windows 7 or later versions of
Windows. Microsoft SQL Server runs as a service on your system. Depending on how the service is set up on
Windows 7 or later versions, a service may not be able to use shared memory connections, it may not be able
to start a server, and it may not be able to access User Data Source definitions. For example, a service logged in
as a Network Service cannot start servers, connect via shared memory, or access User Data Sources. For these
situations, the database server must be started ahead of time and the TCPIP communication protocol must be
used. Also, if a data source is to be used, it must be a System Data Source.
Procedure
1. For Microsoft SQL Server 2005/2008, start Microsoft SQL Server Management Studio. For other versions
of Microsoft SQL Server, the name of this application and the steps to setting up a Linked Server may vary.
In the Object Explorer pane, expand Server Objects > Linked Servers. Right-click Linked Servers and
then click New Linked Server.
2. Fill in the General page.
The Linked Server field on the General page should contain a Linked Server name (like SADATABASE used
in an earlier example).
The Other Data Source option should be chosen, and SQL Anywhere OLE DB Provider 17 should be chosen
from the Provider list.
The Product Name field can be anything you like (for example, your application name).
The Provider String field can contain additional connection parameters such as UserID (UID), ServerName
(Server), and DatabaseFile (DBF).
The Location field can contain the equivalent of the Host connection parameter (for example,
localhost:4444 or 10.25.99.253:2638).
Location: AppServer-pc:2639
The Initial Catalog field can contain the name of the database to connect to (for example, demo). The
database must have been previously started.
The combination of these last four fields and the user ID and password from the Security page must
contain enough information to successfully connect to a database server.
3. Instead of specifying the database user ID and password as a connection parameter in the Provider String
field where it would be exposed in plain text, you can fill in the Security page.
In Microsoft SQL Server 2005/2008, click the Be made using this security context option and fill in the
Remote login and With password fields (the password is displayed as asterisks).
4. Go to the Server Options page and enable the RPC and RPC Out options. The Remote Procedure Call
(RPC) options must be set to execute stored procedure/function calls in a database and pass
parameters in and out successfully. The technique for doing this varies with different versions of
Microsoft SQL Server. In Microsoft SQL Server 2000, there are two checkboxes that must be checked for
these two options. In Microsoft SQL Server 2005/2008, the options are True/False settings; make sure
that they are set to True.
5. Choose the Allow Inprocess provider option.
The technique for doing this varies with different versions of Microsoft SQL Server. In Microsoft SQL Server
2000, there is a Provider Options button that takes you to the page where you can choose this option. For
Microsoft SQL Server 2005/2008, right-click the SAOLEDB.17 provider name under Linked Servers >
Providers and click Properties. Make sure the Allow Inprocess checkbox is checked. If the Inprocess
option is not chosen, queries fail.
6. Other provider options can be ignored. Several of these options pertain to Microsoft SQL Server backwards
compatibility and have no effect on the way Microsoft SQL Server interacts with the OLE DB provider.
Examples are Nested queries and Supports LIKE operator. Other options, when selected, may result in
syntax errors or degraded performance.
Set up a Microsoft Linked Server definition using a Microsoft SQL Server script.
Prerequisites
Before setting up a Linked Server, there are a few things to consider when using Microsoft Windows 7 or later
versions of Microsoft Windows. Microsoft SQL Server runs as a service on your system. Depending on how the
service is set up on Microsoft Windows 7 or later versions, a service may not be able to use shared memory
connections, it may not be able to start a server, and it may not be able to access User Data Source definitions.
For example, a service logged in as a Network Service cannot start servers, connect via shared memory, or
access User Data Sources. For these situations, the database server must be started ahead of time and the
TCPIP communication protocol must be used. Also, if a data source is to be used, it must be a System Data
Source.
Context
Make the appropriate changes to the following script using the steps below before running it on Microsoft SQL
Server.
USE [master]
GO
EXEC master.dbo.sp_addlinkedserver @server=N'SADATABASE',
@srvproduct=N'SAP DBMS', @provider=N'SAOLEDB.17',
@datasrc=N'SQL Anywhere 17 Demo',
@provstr=N'host=localhost:4444;server=myserver;dbn=demo'
GO
EXEC master.dbo.sp_serveroption @server=N'SADATABASE',
@optname=N'rpc', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'SADATABASE',
@optname=N'rpc out', @optvalue=N'true'
GO
-- Set remote login
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'SADATABASE',
@locallogin = NULL , @useself = N'False',
@rmtuser = N'DBA', @rmtpassword = N'password'
GO
-- Set global provider "allow in process" flag
EXEC master.dbo.sp_MSset_oledb_prop N'SAOLEDB.17', N'AllowInProcess', 1
Results
Your modified script can be run under Microsoft SQL Server to create a new Linked Server.
The following table describes the support for each interface by the OLE DB driver.
IAlterTable
IChapteredRowset: A chaptered rowset allows rows of a rowset to be accessed in separate chapters. Not
supported; the database server does not support chaptered rowsets.
ICommandText: Set the SQL statement text for ICommand. Only the DBGUID_DEFAULT SQL dialect is
supported.
ICommandWithParameters: Set or get parameter information for a command. No support for parameters
stored as vectors of scalar values.
IConvertType: Supported.
IErrorLookup
IErrorRecords
IRowsetRefresh: Get the latest value of data that is visible to a transaction. Not supported.
ISupportErrorInfo
ITransaction: Commit or abort transactions. Not all the flags are supported.
ITransactionJoin: Support distributed transactions. Not all the flags are supported.
Support for TIME and TIMESTAMP WITH TIME ZONE Data Types
The data provider supports two different OLE DB data types for handling of the SQL TIME data type. These are
DBTYPE_DBTIME and DBTYPE_DBTIME2.
The DBTYPE_DBTIME data structure does not contain fractional seconds. There is a loss of precision when
SQL TIME data is returned to the client application using the DBTYPE_DBTIME data type. The ordinal value for
DBTYPE_DBTIME is 134.
The DBTYPE_DBTIME2 data type, introduced by Microsoft with the release of SQL Server 2008, supports
fractional seconds. There is no loss of precision when SQL TIME data is returned to the client application using
the DBTYPE_DBTIME2 data type. The ordinal value for DBTYPE_DBTIME2 is 145.
By default, the data provider maps the TIME data type to DBTYPE_DBTIME2.
You can also use the data provider as a Microsoft Linked Server to fetch TIME data without the loss of precision.
If you require that the TIME data type be mapped to DBTYPE_DBTIME, then use the OLE DB connection
parameter Downlevel=TRUE for backwards compatibility.
Also, if you use Borland Delphi, then use the connection parameter Delphi=Yes for backwards compatibility.
Old versions of Borland Delphi do not handle DBTYPE_DBTIME2. When the Delphi=Yes option is specified in
a connection string, the provider maps the TIME data type to DBTYPE_DBTIME. This option is labeled Delphi
applications on the ODBC tab of the ODBC Configuration for SQL Anywhere dialog that appears when using the
Microsoft ODBC Data Source Administrator.
The data provider maps the SQL TIMESTAMP WITH TIME ZONE data type to DBTYPE_DBTIMESTAMPOFFSET.
There is no backwards compatibility option since there is no other OLE DB data type that corresponds to this
SQL data type. The DBTYPE_DBTIMESTAMPOFFSET data type, introduced by Microsoft with the release of
SQL Server 2008, supports fractional seconds and a time zone offset. The ordinal value for
DBTYPE_DBTIMESTAMPOFFSET is 146.
Old versions of Borland Delphi do not support the DBTYPE_DBTIMESTAMPOFFSET data type. If you require
the ability to fetch a TIMESTAMP WITH TIME ZONE column in a Borland Delphi application, the column must
be cast as CHAR or VARCHAR data.
The underlying C/C++ data structures are described in the Microsoft SQL Server SDK header file called
sqlncli.h.
Note that OLE DB schema rowset information is not affected by connection parameters. For example, schema
rowsets such as DBSCHEMA_PROVIDER_TYPES and DBSCHEMA_COLUMNS always return ordinal 145
(DBTYPE_DBTIME2) for the TIME data type. Schema rowset information is implemented by the stored
procedures listed in the table below and these can be modified if required.
DBSCHEMA_COLUMNS dbo.sa_oledb_columns
DBSCHEMA_PROCEDURE_COLUMNS dbo.sa_oledb_procedure_columns
DBSCHEMA_PROCEDURE_PARAMETERS dbo.sa_oledb_procedure_parameters
DBSCHEMA_PROVIDER_TYPES dbo.sa_oledb_provider_types
This registration process includes making registry entries in the COM section of the registry, so that ADO can
locate the DLL when the SAOLEDB provider is called. If you change the location of your DLL, you must re-
register it.
The following commands register the OLE DB provider when run from the directory where the provider is
installed:
regsvr32 dboledb17.dll
regsvr32 dboledba17.dll
If you are using 64-bit Windows, the commands shown above register the 64-bit provider. The following
commands can be used to register the 32-bit OLE DB provider:
c:\Windows\SysWOW64\regsvr32 dboledb17.dll
c:\Windows\SysWOW64\regsvr32 dboledba17.dll
ODBC (Open Database Connectivity) is a standard call level interface (CLI) developed by Microsoft
Corporation. It is based on the SQL Access Group CLI specification.
ODBC applications can run against any data source that provides an ODBC driver. ODBC is a good choice for a
programming interface if you want your application to be portable to other data sources that have ODBC
drivers.
ODBC is a low-level interface. Almost all database server functionality is available with this interface. ODBC is
available as a DLL under Microsoft Windows operating systems. It is provided as a shared object library for
UNIX and Linux.
In this section:
You can develop applications using a variety of development tools and programming languages, as shown in
the figure below, and access the database server using the ODBC API.
Note
Some application development tools, which already have ODBC support, provide their own programming
interface that hides the ODBC interface. This documentation does not describe how to use those tools.
Microsoft Windows includes an ODBC driver manager. For UNIX, Linux, and macOS, a driver manager is
included with the database server software.
In this section:
The ODBC driver supports ODBC 3.5, which is supplied as part of Microsoft Data Access Components (MDAC) 2.7.
ODBC features are arranged according to level of conformance. Features are either Core, Level 1, or Level 2,
with Level 2 being the most complete level of ODBC support.
Core conformance
All Core features are supported.
Level 1 conformance
All Level 1 features are supported, except for asynchronous execution of ODBC functions.
Multiple threads sharing a single connection are supported. The requests from the different threads are
serialized by the database server.
Level 2 conformance
All Level 2 features are supported, except for the following ones:
● Three-part names of tables and views. This is not applicable to the database server.
● Asynchronous execution of ODBC functions for specified individual statements.
● Ability to time out login requests and SQL queries.
Related Information
Every C/C++ source file that calls ODBC functions must include a platform-specific ODBC header file. Each
platform-specific header file includes the main ODBC header file odbc.h, which defines all the functions, data
types, and constant definitions required to write an ODBC program.
Perform the following tasks to include the ODBC header file in a C/C++ source file:
1. Add an include line referencing the appropriate platform-specific header file to your source file. The lines to
use are as follows:
○ For Windows platforms: #include "ntodbc.h"
○ For UNIX and Linux platforms: #include "unixodbc.h"
2. Add the directory containing the header file to the include path for your compiler.
Both the platform-specific header files and odbc.h are installed in the SDK\Include subdirectory of the
database server software directory.
3. When building ODBC applications for UNIX or Linux, you might have to define the macro "UNIX" for 32-bit
applications or "UNIX64" for 64-bit applications to obtain the correct data alignment and sizes. This step is
not required if you are using one of the following supported compilers:
○ GNU C/C++ compiler on any supported platform
○ Intel C/C++ compiler for Linux (icc)
○ SunPro C/C++ compiler for Linux or Solaris
○ VisualAge C/C++ compiler for AIX
○ C/C++ compiler (cc/aCC) for HP-UX
Once your source code has been written, you are ready to compile and link the application.
In this section:
The SQL Anywhere ODBC Driver Manager for UNIX/Linux [page 158]
An ODBC driver manager for UNIX and Linux is included with the database server software.
When linking your application, you must link against the appropriate import library file to have access to the
ODBC functions.
The import library defines entry points for the ODBC driver manager odbc32.dll. The driver manager in turn
loads the ODBC driver dbodbc17.dll.
Typically, the import library is stored under the Lib directory structure of the Microsoft platform SDK:
Example
The following command illustrates how to add the directory containing the platform-specific import library to
the list of library directories in your LIB environment variable:
set LIB=%LIB%;c:\mssdk\v7.0\lib
The following command illustrates how to compile and link the application stored in odbc.c using the
Microsoft compile and link tool:
An ODBC driver manager for UNIX and Linux is included with the database server software and there are third
party driver managers available. The following information describes how to build ODBC applications that do
not use an ODBC driver manager.
ODBC Driver
The ODBC driver is a shared object or shared library. Separate versions of the ODBC driver are supplied for
single-threaded and multithreaded applications. A generic ODBC driver is supplied that will detect the
threading model in use and direct calls to the appropriate single-threaded or multithreaded library.
The libraries are installed as symbolic links to the shared library with a version number (shown in parentheses).
When linking an ODBC application on UNIX or Linux, link your application against the generic ODBC driver
libdbodbc17. When deploying your application, ensure that the appropriate (or all) ODBC driver versions
(non-threaded or threaded) are available in the user's library path.
If the presence of an ODBC driver manager is not detected, the ODBC driver uses the system information file
for data source information.
An ODBC driver manager for UNIX and Linux is included with the database server software.
The libdbodm17 shared object can be used on all supported UNIX and Linux platforms as an ODBC driver
manager. The driver manager can be used to load any version 3.0 or later ODBC driver. The driver manager will
not perform mappings between ODBC 1.0/2.0 calls and ODBC 3.x calls; therefore, applications using the driver
manager must restrict their use of the ODBC feature set to version 3.0 and later. Also, the driver manager can
be used by both threaded and non-threaded applications.
The driver manager can perform tracing of ODBC calls for any given connection. To turn on tracing, use the
TraceLevel and TraceLog directives. These directives can be part of a connection string (in the case where
SQLDriverConnect is being used) or within a DSN entry. The TraceLog directive identifies the tracing log file to
contain the trace output for the connection. The TraceLevel directive governs the amount of tracing
information wanted. The trace levels are:
NONE
In addition to the above, the date and time of execution are included in the output.
HIGH
Third-party ODBC driver managers for UNIX and Linux are available also. Consult the documentation that
accompanies these driver managers for information about their use.
Related Information
Versions of unixODBC before release 2.2.14 incorrectly implement some aspects of the 64-bit ODBC
specification as defined by Microsoft. These differences will cause problems when using the unixODBC driver
manager with the 64-bit ODBC driver.
To avoid these problems, be aware of the differences. One of them is the definition of SQLLEN and SQLULEN.
These are 64-bit types in the Microsoft 64-bit ODBC specification, and are expected to be 64-bit quantities by
the 64-bit ODBC driver.
There are three things that you must do to avoid problems on 64-bit platforms.
1. Instead of including the unixODBC headers like sql.h and sqlext.h, include the unixodbc.h header file.
This will guarantee that you have the correct definitions for SQLLEN and SQLULEN. The header files in
unixODBC 2.2.14 or later versions correct this problem.
2. You must ensure that you have used the correct types for all parameters. Use of the correct header file and
the strong type checking of your C/C++ compiler should help in this area. You must also ensure that you
have used the correct types for all variables that are set by the ODBC driver indirectly through pointers.
3. Do not use versions of the unixODBC driver manager before release 2.2.14. Link directly to the ODBC driver
instead. For example, ensure that the libodbc shared object is linked to the ODBC driver shared object.
Alternatively, you can use the driver manager included with the database server software on platforms
where it is available.
Related Information
The SQL Anywhere ODBC Driver Manager for UNIX/Linux [page 158]
ODBC Applications on UNIX/Linux [page 156]
64-bit ODBC Considerations [page 177]
Versions of ODBC driver managers that define SQLWCHAR as 32-bit (UTF-32) quantities cannot be used with
the ODBC driver that supports wide calls since this driver is built for 16-bit SQLWCHAR.
For these cases, an ANSI-only version of the ODBC driver is provided. This version of the ODBC driver does not
support the wide call interface (for example, SQLConnectW).
The shared object name of the driver is libdbodbcansi17_r. Only a threaded variant of the driver is provided.
On macOS, in addition to the dylib, the driver is also available in bundle form (dbodbcansi17_r.bundle).
Certain frameworks, such as Real Basic, do not work with the dylib and require the bundle.
The regular ODBC driver treats SQLWCHAR strings as UTF-16 strings. This driver cannot be used with some
ODBC driver managers, such as iODBC, which treat SQLWCHAR strings as UTF-32 strings. When dealing with
Unicode-enabled drivers, these driver managers translate narrow calls from the application to wide calls into
the driver. An ANSI-only driver gets around this behavior, allowing the driver to be used with such driver
managers, as long as the application does not make any wide calls. Wide calls through iODBC, or any other
driver manager with similar semantics, remain unsupported.
You can find the samples in the %SQLANYSAMP17%\SQLAnywhere subdirectories (Microsoft Windows) and
$SQLANYSAMP17/sqlanywhere subdirectories (UNIX and Linux).
The sample programs in directories starting with the four letters ODBC illustrate separate, simple ODBC
tasks, such as connecting to a database and executing statements. A complete sample ODBC program is
supplied in the file %SQLANYSAMP17%\SQLAnywhere\C\odbc.c (Microsoft Windows) and
$SQLANYSAMP17/sqlanywhere/c/odbc.c (UNIX and Linux). This program performs the same actions as
the Embedded SQL dynamic cursor example program that is in the same directory.
In this section:
Related Information
Build and run a sample ODBC program to see how it performs ODBC tasks, such as connecting to a database
and executing statements.
Prerequisites
For x86/x64 platform builds with Microsoft Visual Studio, you must set up the correct environment for
compiling and linking. This is typically done using the Microsoft Visual Studio vcvars32.bat or
vcvars64.bat (called vcvarsamd64.bat in older versions of Microsoft Visual Studio).
A batch file located in the %SQLANYSAMP17%\SQLAnywhere\C directory can be used to compile and link all
the sample applications.
Procedure
If you are getting build errors, try specifying the target platform (x86 or x64) as an argument to
build.bat. Here is an example.
build x64
Results
Build and run a sample ODBC program to see how it performs ODBC tasks, such as connecting to a database
and executing statements.
Context
A shell script located in the $SQLANYSAMP17/sqlanywhere/c directory can be used to compile and link all
the sample applications.
Procedure
1. Open a command shell and change the directory to the $SQLANYSAMP17/sqlanywhere/c directory.
Results
You can load the sample ODBC program by running the file on the appropriate platform.
After running the file, choose one of the tables in the sample database. For example, you can enter Customers
or Employees.
ODBC applications use a small set of handles to track the ODBC context, database connections, and SQL
statements. A handle is a pointer type and is a 64-bit value in 64-bit applications and a 32-bit value in 32-bit
applications.
Environment
The environment handle provides a global context in which to access data. Every ODBC application must
allocate exactly one environment handle upon starting, and must free it at the end.
SQLRETURN rc;
SQLHENV env;
rc = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env );
Connection
A connection is the link between an application and a data source. An application can have several
connections associated with its environment. Allocating a connection handle does not establish a
connection; a connection handle must be allocated first and then used to establish a connection.
SQLRETURN rc;
SQLHDBC dbc;
rc = SQLAllocHandle( SQL_HANDLE_DBC, env, &dbc );
Statement
A statement handle provides access to a SQL statement and any information associated with it, such as
result sets and parameters. Each connection can have several statement handles.
SQLRETURN rc;
SQLHSTMT stmt;
rc = SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
In this section:
ODBC defines four types of objects (environment, connection, statement, and descriptor) that are referenced
in applications by using handles.
Environment: SQLHENV
Connection: SQLHDBC
Statement: SQLHSTMT
Descriptor: SQLHDESC
SQLRETURN rc;
SQLHENV env;
rc = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env );
if( rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO )
{
.
.
.
}
SQLFreeHandle( SQL_HANDLE_ENV, env );
Related Information
A simple ODBC program that connects to the sample database and immediately disconnects can be found in
%SQLANYSAMP17%\SQLAnywhere\ODBCConnect\odbcconnect.cpp.
This example shows the steps required in setting up the environment to make a connection to a database
server, as well as the steps required in disconnecting from the server and freeing up resources.
ODBC supplies three functions for establishing a connection: SQLConnect, SQLDriverConnect, and
SQLBrowseConnect. Which one you use depends on how you expect your application to be deployed and used:
SQLConnect
SQLConnect takes a data source name and optional user ID and password. Use SQLConnect if you hard-
code a data source name into your application.
SQLDriverConnect
SQLDriverConnect allows the application to use connection information that is external to the data source
definition. Also, you can use SQLDriverConnect to request that the ODBC driver prompt for connection
information.
SQLSMALLINT cso;
SQLCHAR scso[2048];
SQLDriverConnect( hdbc, NULL,
"Driver=SQL Anywhere 17;UID=DBA;PWD=passwd", SQL_NTS,
scso, sizeof(scso)-1,
&cso, SQL_DRIVER_NOPROMPT );
SQLBrowseConnect
SQLBrowseConnect allows your application to build its own windows to prompt for connection information
and to browse for data sources used by a particular driver.
In this section:
Related Information
SQLConnect Function
SQLDriverConnect Function
SQLBrowseConnect Function
Context
Procedure
1. Allocate an ODBC environment handle. For example:
SQLRETURN rc;
SQLHENV env;
rc = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env );
2. Declare that the application follows ODBC version 3.
By declaring that the application follows ODBC version 3, SQLSTATE values and some other version-
dependent features are set to the proper behavior.
3. Allocate an ODBC connection handle. For example:
4. Set any required connection attributes. Some connection attributes must be set before establishing a
connection, while others can be set either before or after. The SQL_AUTOCOMMIT attribute is one that can
be set before or after:
By default, ODBC operates in autocommit mode. This mode is turned off by setting SQL_AUTOCOMMIT to
false.
5. If necessary, assemble the data source or connection string.
Depending on your application, you may have a hard-coded data source or connection string, or you may
store it externally for greater flexibility.
6. Establish an ODBC connection. For example:
Every string passed to ODBC has a corresponding length. If the length is unknown, you can pass SQL_NTS
indicating that it is a Null Terminated String whose end is marked by the null character (\0) for ASCII
strings or the null wide character (\0\0) for wide-character strings.
Results
In this section:
You use the SQLSetConnectAttr function to control details of the connection. For example, the following
statement disables ODBC autocommit.
rc = SQLSetConnectAttr( dbc, SQL_ATTR_AUTOCOMMIT,
     (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_IS_UINTEGER );
Many aspects of the connection can be controlled through the connection parameters.
Related Information
SQLSetConnectAttr Function
You use the SQLGetConnectAttr function to get details of the connection. For example, the following statement
returns the connection state.
SQLUINTEGER connection_dead;
rc = SQLGetConnectAttr( dbc, SQL_ATTR_CONNECTION_DEAD,
     &connection_dead, sizeof( connection_dead ), NULL );
When using the SQLGetConnectAttr function to get the SQL_ATTR_CONNECTION_DEAD attribute, the value
SQL_CD_TRUE is returned if the connection has been dropped even if no request has been sent to the server
since the connection was dropped. Determining if the connection has been dropped is done without making a
request to the server, and the dropped connection is detected within a few seconds. The connection can be
dropped for several reasons, such as an idle timeout.
Related Information
SQLGetConnectAttr Function
You can develop multithreaded ODBC applications. Use a separate connection for each thread.
You can use a single connection for multiple threads. However, the database server does not allow more than
one active request for any one connection at a time. If one thread executes a statement that takes a long time,
all other threads must wait until the request is complete.
If you do not deploy all the files required for the ODBC driver, you may encounter the following error when
attempting to connect to a database.
Check that you have installed all the files required for the correct operation of the ODBC driver.
Related Information
The ODBC driver sets some temporary server options when connecting to the database server.
date_format
yyyy-mm-dd
date_order
ymd
isolation_level
Set to correspond to the isolation level requested by the application: SQL_TXN_READ_UNCOMMITTED,
SQL_TXN_READ_COMMITTED, SQL_TXN_REPEATABLE_READ, SQL_TXN_SERIALIZABLE,
SA_SQL_TXN_SNAPSHOT, SA_SQL_TXN_STATEMENT_SNAPSHOT, or
SA_SQL_TXN_READONLY_STATEMENT_SNAPSHOT
time_format
yyyy-mm-dd hh:nn:ss.ssssss
timestamp_with_time_zone_format
To guarantee consistent behavior of the ODBC driver, do not change the setting of these options.
Related Information
SA_REGISTER_MESSAGE_CALLBACK
Messages can be sent to the client application from the database server using the SQL MESSAGE
statement. Messages can also be generated by long running database server statements.
A message handler routine can be created to intercept these messages. The message handler callback
prototype is as follows:
MESSAGE_TYPE_PROGRESS
The message type was PROGRESS. This type of message is generated by long running database server
statements such as BACKUP DATABASE and LOAD TABLE.
A pointer to the message is contained in message. The message string is not null-terminated. Your
application must be designed to handle this. The following is an example.
To register the message handler in ODBC, call the SQLSetConnectAttr function as follows:
rc = SQLSetConnectAttr(
hdbc,
SA_REGISTER_MESSAGE_CALLBACK,
(SQLPOINTER) &message_handler, SQL_IS_POINTER );
To unregister the message handler in ODBC, call the SQLSetConnectAttr function as follows:
rc = SQLSetConnectAttr(
hdbc,
SA_REGISTER_MESSAGE_CALLBACK,
NULL, SQL_IS_POINTER );
SA_GET_MESSAGE_CALLBACK_PARM
To retrieve the value of the SQLHDBC connection handle that is passed to message handler callback
routine, use SQLGetConnectAttr with the SA_GET_MESSAGE_CALLBACK_PARM parameter.
The returned value is the same as the parameter value that is passed to the message handler callback
routine.
SA_REGISTER_VALIDATE_FILE_TRANSFER_CALLBACK
This is used to register a file transfer validation callback function. Before allowing any transfer to take
place, the ODBC driver will invoke the validation callback, if it exists. If the client data transfer is being
requested during the execution of indirect statements such as from within a stored procedure, the ODBC
driver will not allow a transfer unless the client application has registered a validation callback. The
conditions under which a validation call is made are described more fully below.
The file_name parameter is the name of the file to be read or written. The is_write parameter is 0 if a
read is requested (transfer from the client to the server), and non-zero for a write. The callback function
should return 0 if the file transfer is not allowed, non-zero otherwise.
This is used to set an extended transaction isolation level. The following example sets a Snapshot isolation
level:
Related Information
ODBC functions should not be called directly or indirectly from the DllMain function in a Windows Dynamic Link
Library. The DllMain entry point function is intended to perform only simple initialization and termination tasks.
Calling ODBC functions like SQLFreeHandle, SQLFreeConnect, and SQLFreeEnv can create deadlocks and
circular dependencies.
The following code example illustrates a bad programming practice. When the Microsoft ODBC Driver Manager
detects that the last access to the ODBC driver has completed, it will do a driver unload. When the ODBC driver
shuts down, it stops any active threads. Thread termination results in a recursive thread detach call into
DllMain. Since the call into DllMain is serialized, and a call is underway, the new thread detach call will never get
started. The ODBC driver will wait forever for its threads to terminate and your application will hang.
Related Information
Direct execution
The database server parses the SQL statement, prepares an access plan, and executes the statement.
Parsing and access plan preparation are called preparing the statement.
Prepared execution
The statement preparation is carried out separately from the execution. For statements that are to be
executed repeatedly, this avoids repeated preparation and so improves performance.
In this section:
To execute a SQL statement in an ODBC application, allocate a handle for the statement using SQLAllocHandle
and then call the SQLExecDirect function to execute the statement.
Any parameters must be included as part of the statement (for example, a WHERE clause must specify its
arguments). Alternatively, you can construct statements using bound parameters.
The SQLExecDirect function takes a statement handle, a SQL string, and a length or termination indicator,
which in this case is a null-terminated string indicator. The statement may include parameters.
Example
The following example illustrates how to allocate a handle of type SQL_HANDLE_STMT named stmt on a
connection with handle dbc:
SQLHSTMT stmt;
SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
The following example illustrates how to declare a statement and execute it:
SQLCHAR deletestmt[ STMT_LEN ] =
    "DELETE FROM Departments WHERE DepartmentID = 201";
SQLExecDirect( stmt, deletestmt, SQL_NTS );
The deletestmt declaration should usually occur at the beginning of the function.
Related Information
Construct and execute SQL statements using bound parameters to set values for statement parameters at
runtime.
Prerequisites
To run the example successfully, you need the following system privileges.
Context
Bound parameters are used with prepared statements to provide performance benefits for statements that are
executed more than once.
Procedure
1. Allocate a handle for the statement using SQLAllocHandle.
For example, the following statement allocates a handle of type SQL_HANDLE_STMT with name stmt, on a
connection with handle dbc:
2. Bind the statement parameters using SQLBindParameter.
For example, the following lines declare variables to hold the values for the department ID, department
name, and manager ID, and for the statement string. They then bind parameters to the first, second, and
third parameters of a statement executed using the stmt statement handle.
#define STMT_LEN 256
#define DEPT_NAME_LEN 40
SQLLEN cbDeptID = 0, cbDeptName = SQL_NTS, cbManagerID = 0;
SQLSMALLINT deptID, managerID;
SQLCHAR deptName[ DEPT_NAME_LEN + 1 ];
SQLCHAR insertstmt[ STMT_LEN ] =
"INSERT INTO Departments "
"( DepartmentID, DepartmentName, DepartmentHeadID ) "
"VALUES (?, ?, ?)";
SQLBindParameter( stmt, 1, SQL_PARAM_INPUT,
SQL_C_SSHORT, SQL_INTEGER, 0, 0,
&deptID, 0, &cbDeptID );
SQLBindParameter( stmt, 2, SQL_PARAM_INPUT,
SQL_C_CHAR, SQL_CHAR, DEPT_NAME_LEN, 0,
deptName, 0, &cbDeptName );
SQLBindParameter( stmt, 3, SQL_PARAM_INPUT,
SQL_C_SSHORT, SQL_INTEGER, 0, 0,
&managerID, 0, &cbManagerID );
3. Assign values to the parameters.
For example, the following lines assign values to the parameters for the fragment of step 2.
deptID = 201;
strcpy( (char *) deptName, "Sales East" );
managerID = 902;
4. Execute the statement:
rc = SQLExecute( stmt );
Results
When built and run, the application executes the SQL statement.
Next Steps
The above code fragments do not include error checking. For a complete sample, including error checking, see
%SQLANYSAMP17%\SQLAnywhere\ODBCExecute\odbcexecute.cpp.
Related Information
Execute prepared statements to provide performance advantages for statements that are used repeatedly.
Prerequisites
To run the example successfully, you need the following system privileges.
Procedure
1. Prepare the statement using the SQLPrepare function.
For example, the following code fragment illustrates how to prepare an INSERT statement:
rc = SQLPrepare( stmt,
    "INSERT INTO Departments( DepartmentID, DepartmentName, DepartmentHeadID ) "
    "VALUES ( ?, ?, ? )",
    SQL_NTS );
In this example:
rc
Receives a return code that should be tested for success or failure of the operation.
stmt
The statement handle, previously allocated using SQLAllocHandle.
The question marks are placeholders for statement parameters. A placeholder is put in the statement
to indicate where host variables are to be accessed. A placeholder is either a question mark (?) or a
host variable reference (a host variable name preceded by a colon). In the latter case, the host variable
name used in the actual text of the statement serves only as a placeholder indicating that a
corresponding parameter is to be bound to it. It need not match the actual parameter name.
2. Bind statement parameter values using SQLBindParameter.
For example, the following function call binds the value of the DepartmentID variable:
rc = SQLBindParameter( stmt,
1,
SQL_PARAM_INPUT,
SQL_C_SSHORT,
SQL_INTEGER,
0,
0,
&sDeptID,
0,
&cbDeptID );
In this example:
rc
Holds a return code that should be tested for success or failure of the operation.
stmt
The statement handle.
1
Indicates that this call sets the value of the first placeholder.
SQL_PARAM_INPUT
Indicates that the parameter is an input parameter.
The next two parameters indicate the C data type and the SQL data type of the parameter. The two
parameters that follow indicate the column precision and the number of decimal digits: both zero for
integers.
&sDeptID
A pointer to the variable holding the value of the parameter. The final two parameters are the buffer
length and a pointer to the length/indicator variable.
3. Execute the statement:
rc = SQLExecute( stmt );
4. Drop the statement when you have finished with it.
Dropping the statement frees resources associated with the statement itself. You drop statements using
SQLFreeHandle.
Results
When built and run, the application executes the prepared statements.
Next Steps
The above code fragments do not include error checking. For a complete sample, including error checking, see
%SQLANYSAMP17%\SQLAnywhere\ODBCPrepare\odbcprepare.cpp.
Related Information
When you use an ODBC function like SQLBindCol, SQLBindParameter, or SQLGetData, some of the
parameters are typed as SQLLEN or SQLULEN in the function prototype.
Depending on the date of the Microsoft ODBC API Reference documentation that you are looking at, you might
see the same parameters described as SQLINTEGER or SQLUINTEGER.
SQLLEN and SQLULEN data items are 64 bits in a 64-bit ODBC application and 32 bits in a 32-bit ODBC
application. SQLINTEGER and SQLUINTEGER data items are 32 bits on all platforms.
For example, older Microsoft documentation describes the SQLGetData function prototype as follows:
SQLRETURN SQLGetData(
SQLHSTMT StatementHandle,
SQLUSMALLINT ColumnNumber,
SQLSMALLINT TargetType,
SQLPOINTER TargetValuePtr,
SQLINTEGER BufferLength,
SQLINTEGER *StrLen_or_IndPtr);
Compare this with the actual function prototype found in sql.h in Microsoft Visual Studio version 8:
SQLRETURN SQLGetData(
    SQLHSTMT StatementHandle,
    SQLUSMALLINT ColumnNumber,
    SQLSMALLINT TargetType,
    SQLPOINTER TargetValuePtr,
    SQLLEN BufferLength,
    SQLLEN *StrLen_or_IndPtr);
As you can see, the BufferLength and StrLen_or_Ind parameters are now typed as SQLLEN, not SQLINTEGER.
For the 64-bit platform, these are 64-bit quantities, not 32-bit quantities as indicated in the Microsoft
documentation.
To avoid issues with cross-platform compilation, the database server software provides its own ODBC header
files. For Windows platforms, include the ntodbc.h header file. For UNIX and Linux platforms, include the
unixodbc.h header file. Use of these header files ensures compatibility with the corresponding ODBC driver
for the target platform.
The following table lists some common ODBC types that have the same or different storage sizes on 64-bit and
32-bit platforms.
If you declare data variables and parameters incorrectly, then you may encounter incorrect software behavior.
The following table summarizes the ODBC API function prototypes that have changed with the introduction of
64-bit support. The parameters that are affected are noted. The parameter name as documented by Microsoft
is shown in parentheses when it differs from the actual parameter name used in the function prototype. The
parameter names are those used in the Microsoft Visual Studio version 8 header files.
SQLLEN *Strlen_or_Ind
SQLLEN *Strlen_or_Ind
SQLULEN *pirow
Some values passed into and returned from ODBC API calls through pointers have changed to accommodate
64-bit applications. For example, certain attribute values for the SQLSetStmtAttr and SQLSetDescField
functions are now 64-bit quantities.
Related Information
On certain platforms, the storage (memory) provided for each column must be properly aligned to fetch or
store a value of the specified type. The ODBC driver checks for proper data alignment. When an object is not
properly aligned, the ODBC driver will issue an "Invalid string or buffer length" message (SQLSTATE HY090 or
S1090).
The following table lists memory alignment requirements for processors such as Sun Sparc, Itanium-IA64, and
ARM-based devices. The memory address of the data value must be a multiple of the indicated value.
SQL_C_CHAR none
SQL_C_BINARY none
SQL_C_GUID none
SQL_C_BIT none
SQL_C_STINYINT none
SQL_C_UTINYINT none
SQL_C_TINYINT none
SQL_C_NUMERIC none
SQL_C_DEFAULT none
SQL_C_SSHORT 2
SQL_C_USHORT 2
SQL_C_SHORT 2
SQL_C_DATE 2
SQL_C_TIME 2
SQL_C_TIMESTAMP 2
SQL_C_TYPE_DATE 2
SQL_C_TYPE_TIME 2
SQL_C_TYPE_TIMESTAMP 2
SQL_C_SLONG 4
SQL_C_ULONG 4
SQL_C_LONG 4
SQL_C_FLOAT 4
SQL_C_SBIGINT 8
SQL_C_UBIGINT 8
The x86, x64, and PowerPC platforms do not require memory alignment. The x64 platform includes Advanced
Micro Devices (AMD) AMD64 processors and Intel Extended Memory 64 Technology (EM64T) processors.
ODBC applications use cursors to manipulate and update result sets. The software provides extensive support
for different kinds of cursors and cursor operations.
In this section:
Related Information
You can use SQLSetConnectAttr to set the transaction isolation level for a connection.
The attribute values that determine the transaction isolation level include the following:
SQL_TXN_READ_UNCOMMITTED
Set isolation level to 0. When this attribute value is set, data read is not isolated from changes by others,
and uncommitted changes made by others can be seen. The re-execution of the read statement is affected
by others. This does not support a repeatable read. This is the default value for isolation level.
SQL_TXN_READ_COMMITTED
Set isolation level to 1. When this attribute value is set, it does not isolate data read from changes by
others, and committed changes made by others can be seen. The re-execution of the read statement is
affected by others. This does not support a repeatable read.
SQL_TXN_REPEATABLE_READ
Set isolation level to 2. When this attribute value is set, it isolates any data read from changes by others,
and changes made by others cannot be seen. The re-execution of the read statement is affected by others.
This supports a repeatable read.
SQL_TXN_SERIALIZABLE
Set isolation level to 3. When this attribute value is set, it isolates any data read from changes by others,
and changes made by others cannot be seen. The re-execution of the read statement is not affected by
others. This supports a repeatable read.
SA_SQL_TXN_SNAPSHOT
Set isolation level to Snapshot. When this attribute value is set, it provides a single view of the database for
the entire transaction.
SA_SQL_TXN_STATEMENT_SNAPSHOT
Set isolation level to Statement-snapshot. When this attribute value is set, it provides less consistency than
Snapshot isolation, but may be useful when long running transactions result in too much space being used
in the temporary file by the version store.
SA_SQL_TXN_READONLY_STATEMENT_SNAPSHOT
Set isolation level to Readonly-statement-snapshot. When this attribute value is set, it provides less
consistency than Statement-snapshot isolation, but avoids the possibility of update conflicts. Therefore, it
is most appropriate for porting applications originally intended to run under different isolation levels.
The allow_snapshot_isolation database option must be set to On to use the Snapshot, Statement-snapshot, or
Readonly-statement-snapshot settings.
Example
Related Information
SQLSetConnectAttr Function
Microsoft Open Database Connectivity (ODBC)
ODBC functions that execute statements and manipulate result sets use cursors to perform their tasks.
Applications open a cursor implicitly whenever they execute a SQLExecute or SQLExecDirect function.
For applications that move through a result set only in a forward direction and do not update the result set,
cursor behavior is relatively straightforward. By default, ODBC applications request this behavior. ODBC
defines a read-only, forward-only cursor, and the database server provides a cursor optimized for performance
in this case.
For applications that scroll both forward and backward through a result set, such as many graphical user
interface applications, cursor behavior is more complex. ODBC defines a variety of scrollable cursors to allow
flexible movement through the result set.
You set the required ODBC cursor characteristics by calling the SQLSetStmtAttr function that defines
statement attributes. You must call SQLSetStmtAttr before executing a statement that creates a result set.
You can use SQLSetStmtAttr to set many cursor characteristics. The characteristics that determine the cursor
type that the database server supplies include the following:
SQL_ATTR_CURSOR_SCROLLABLE
Set to SQL_SCROLLABLE for a scrollable cursor and SQL_NONSCROLLABLE for a forward-only cursor.
SQL_NONSCROLLABLE is the default.
SQL_ATTR_CONCURRENCY
Set to one of the following values:
SQL_CONCUR_READ_ONLY
The cursor is read-only. No updates are allowed. This is the default.
SQL_CONCUR_LOCK
Use the lowest level of locking sufficient to ensure that the row can be updated.
SQL_CONCUR_ROWVER
Use optimistic concurrency control, employing a keyset-driven cursor to enable the application to be
informed when rows have been modified or deleted as the result set is scrolled.
SQL_CONCUR_VALUES
Use optimistic concurrency control, employing a keyset-driven cursor to enable the application to be
informed when rows have been modified or deleted as the result set is scrolled.
Example
Related Information
To retrieve rows from a database, you execute a SELECT statement using SQLExecute or SQLExecDirect. This
opens a cursor on the statement.
You then use SQLFetch or SQLFetchScroll to fetch rows through the cursor. These functions fetch the next
rowset of data from the result set and return data for all bound columns. Using SQLFetchScroll, rowsets can be
specified at an absolute or relative position or by bookmark. SQLFetchScroll replaces the older
SQLExtendedFetch from the ODBC 2.0 specification.
When an application frees the statement using SQLFreeHandle, it closes the cursor.
To fetch values from a cursor, your application can use either SQLBindCol or SQLGetData. If you use
SQLBindCol, values are automatically retrieved on each fetch. If you use SQLGetData, you must call it for each
column after each fetch.
SQLGetData is used to fetch values in pieces for columns such as LONG VARCHAR or LONG BINARY. As an
alternative, you can set the SQL_ATTR_MAX_LENGTH statement attribute to a value large enough to hold the
entire value for the column. The default value for SQL_ATTR_MAX_LENGTH is 256 KB.
The ODBC driver implements SQL_ATTR_MAX_LENGTH in a different way than intended by the ODBC
specification. The intended meaning for SQL_ATTR_MAX_LENGTH is that it be used as a mechanism to
truncate large fetches. This might be done for a "preview" mode where only the first part of the data is
displayed. For example, instead of transmitting a 4 MB blob from the server to the client application, only the
first 500 bytes of it might be transmitted (by setting SQL_ATTR_MAX_LENGTH to 500). The ODBC driver does
not support this implementation.
When you use SQLBindCol to bind a NUMERIC or DECIMAL column to a SQL_C_NUMERIC target type, the data
value is stored in a 128-bit field (val) of a SQL_NUMERIC_STRUCT. This field can only accommodate a
maximum precision of 38. The database server supports a maximum NUMERIC precision of 127. When the
precision of a NUMERIC or DECIMAL column is greater than 38, the column should be bound as SQL_C_CHAR
to avoid loss of precision.
The following code fragment opens a cursor on a query and retrieves data through the cursor. Error checking
has been omitted to make the example easier to read. The fragment is taken from a complete sample, which
can be found in %SQLANYSAMP17%\SQLAnywhere\ODBCSelect\odbcselect.cpp.
The number of row positions you can fetch in a cursor is governed by the size of an integer. You can fetch rows
numbered up to 2147483646, which is one less than the largest value that can be held in a 32-bit signed
integer. When using negative numbers (rows from the end), you can fetch down to one more than the largest
negative value that can be held in a 32-bit signed integer.
When you use positioned update statements, you do not need to execute a SELECT ... FOR UPDATE statement.
Cursors are automatically updatable as long as the following conditions are met:
ODBC provides two alternatives for carrying out positioned updates and deletes:
1.5.12.5 Bookmarks
ODBC provides bookmarks, which are values used to identify rows in a cursor. The ODBC driver supports
bookmarks for value-sensitive and insensitive cursors. For example, the ODBC cursor types
SQL_CURSOR_STATIC and SQL_CURSOR_KEYSET_DRIVEN support bookmarks while cursor types
SQL_CURSOR_DYNAMIC and SQL_CURSOR_FORWARD_ONLY do not.
Before ODBC 3.0, a data source could specify only whether it supported bookmarks or not: there was no
interface to provide this information for each cursor type. For ODBC 2 applications, the ODBC driver reports
that it supports bookmarks. There is therefore nothing to prevent you from trying to use bookmarks with
dynamic cursors; however, do not use this combination.
You can create and call stored procedures and process the results from an ODBC application.
There are two types of procedures: those that return result sets and those that do not. You can use
SQLNumResultCols to tell the difference: the number of result columns is zero if the procedure does not return
a result set. If there is a result set, you can fetch the values using SQLFetch or SQLExtendedFetch just like any
other cursor.
Parameters to procedures should be passed using parameter markers (question marks). Use
SQLBindParameter to assign a storage area for each parameter marker, whether it is an INPUT, OUTPUT, or
INOUT parameter.
To handle multiple result sets, ODBC must describe the currently executing cursor, not the procedure-defined
result set. Therefore, ODBC does not always describe column names as defined in the RESULT clause of the
stored procedure definition. To avoid this problem, you can use column aliases in your procedure result set
cursor.
Example
Example 1
This example creates and calls a procedure that does not return a result set. The procedure takes one
INOUT parameter and increments its value. In the example, the variable num_columns has the value zero
because the procedure does not return a result set. Error checking has been omitted to make the example
easier to read.
SQLHDBC dbc;
SQLHSTMT stmt;
SQLINTEGER I;
SQLSMALLINT num_columns;
SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
SQLExecDirect( stmt,
"CREATE PROCEDURE Increment( INOUT a INT )"
"BEGIN "
" SET a = a + 1 "
"END", SQL_NTS );
/* Call the procedure to increment 'I' */
I = 1;
SQLBindParameter( stmt, 1, SQL_PARAM_INPUT_OUTPUT, SQL_C_LONG,
 SQL_INTEGER, 0, 0, &I, 0, NULL );
SQLExecDirect( stmt, "CALL Increment( ? )", SQL_NTS );
SQLNumResultCols( stmt, &num_columns );
Example 2
This example calls a procedure that returns a result set. In the example, the variable num_columns will
have the value 2 since the procedure returns a result set with two columns. Again, error checking has been
omitted to make the example easier to read.
SQLRETURN rc;
SQLHDBC dbc;
SQLHSTMT stmt;
SQLSMALLINT num_columns;
SQLCHAR ID[ 10 ];
SQLCHAR Surname[ 20 ];
SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
SQLExecDirect( stmt,
"CREATE PROCEDURE EmployeeList() "
"RESULT( ID CHAR(10), Surname CHAR(20) ) "
"BEGIN "
" SELECT EmployeeID, Surname FROM Employees "
"END", SQL_NTS );
/* Call the procedure - print the results */
SQLExecDirect( stmt, "CALL EmployeeList()", SQL_NTS );
SQLNumResultCols( stmt, &num_columns );
SQLBindCol( stmt, 1, SQL_C_CHAR, &ID, sizeof(ID), NULL );
SQLBindCol( stmt, 2, SQL_C_CHAR, &Surname, sizeof(Surname), NULL );
for( ;; )
{
rc = SQLFetch( stmt );
if( rc == SQL_NO_DATA )
{
rc = SQLMoreResults( stmt );
if( rc == SQL_NO_DATA ) break;
}
else
{
do_something( ID, Surname );
}
}
SQLCloseCursor( stmt );
SQLFreeHandle( SQL_HANDLE_STMT, stmt );
You can use ODBC escape syntax from any ODBC application. This escape syntax allows you to call a set of
common functions regardless of the database management system you are using.
Note
If you do not use escape syntax, then turn off escape syntax parsing in your ODBC application by setting
the NOSCAN statement attribute. This improves performance by stopping the ODBC driver from scanning
all SQL statements before sending them to the database server for execution. The following statement sets
the NOSCAN statement attribute:
The general form of the escape syntax is as follows:
{ keyword parameters }
{d date-string}
The date string is any date value accepted by the database server.
{t time-string}
The time string is any time value accepted by the database server.
{ts date-string time-string}
The date/time string is any timestamp value accepted by the database server.
{guid uuid-string}
The uuid-string is any valid UUID value accepted by the database server.
{oj outer-join-expr}
The outer-join-expr is a valid OUTER JOIN expression accepted by the database server.
{? = call func(p1, ...)}
The function is any valid function call accepted by the database server.
{call proc(p1, ...)}
The procedure is any valid stored procedure call accepted by the database server.
{fn func(p1, ...)}
You can use the escape syntax to access a library of functions implemented by the ODBC driver that includes
number, string, time, date, and system functions.
For example, to obtain the current date in a database management system-neutral way, you would execute the
following:
SELECT { FN CURDATE() }
The following functions are among those supported by the ODBC driver:
Numeric: PI, SQRT, TAN, TRUNCATE
String: POSITION, SUBSTRING, UCASE
Date/time: NOW
The ODBC driver maps the TIMESTAMPADD and TIMESTAMPDIFF functions to the corresponding database
server DATEADD and DATEDIFF functions. The syntax for the TIMESTAMPADD and TIMESTAMPDIFF functions
is as follows:
{fn TIMESTAMPADD( interval, integer-expr, timestamp-expr )}
Returns the timestamp calculated by adding integer-expr intervals of type interval to timestamp-expr.
{fn TIMESTAMPDIFF( interval, timestamp-expr1, timestamp-expr2 )}
Returns the integer number of intervals of type interval by which timestamp-expr2 is greater than
timestamp-expr1. Valid values of interval are shown below.
interval                DATEADD/DATEDIFF date-part mapping
SQL_TSI_YEAR YEAR
SQL_TSI_QUARTER QUARTER
SQL_TSI_MONTH MONTH
SQL_TSI_WEEK WEEK
SQL_TSI_DAY DAY
SQL_TSI_HOUR HOUR
SQL_TSI_MINUTE MINUTE
SQL_TSI_SECOND SECOND
Interactive SQL
The ODBC escape syntax is identical to the JDBC escape syntax. In Interactive SQL, which uses JDBC, the
braces must be doubled. There must not be a space between successive braces: "{{" is acceptable, but "{ {" is
not. As well, you cannot use newline characters in the statement. The escape syntax cannot be used in stored
procedures because they are not parsed by Interactive SQL.
For example, to obtain the number of weeks in February 2013, execute the following in Interactive SQL:
Errors in ODBC are reported using the return value from each of the ODBC function calls and either the
SQLError function or the SQLGetDiagRec function.
The SQLError function was used in ODBC versions up to, but not including, version 3. As of version 3 the
SQLError function has been deprecated and replaced by the SQLGetDiagRec function.
SQL_SUCCESS
No error.
SQL_SUCCESS_WITH_INFO
The function completed, but a warning was generated. The most common case for this status is that a value
being returned is too long for the buffer provided by the application.
SQL_NO_DATA_FOUND
The most common use for this status is when fetching from a cursor; it indicates that there are no more
rows in the cursor.
Every environment, connection, and statement handle can have one or more errors or warnings associated
with it. Each call to SQLError or SQLGetDiagRec returns the information for one error and removes the
information for that error. If you do not call SQLError or SQLGetDiagRec to remove all errors, the errors are
removed on the next function call that passes the same handle as a parameter.
Each call to SQLError passes three handles for an environment, connection, and statement. The first call uses
SQL_NULL_HSTMT to get the error associated with a connection. Similarly, a call with both SQL_NULL_DBC
and SQL_NULL_HSTMT gets any error associated with the environment handle.
Each call to SQLGetDiagRec can pass either an environment, connection or statement handle. The first call
passes in a handle of type SQL_HANDLE_DBC to get the error associated with a connection. The second call
passes in a handle of type SQL_HANDLE_STMT to get the error associated with the statement that was just
executed.
SQLError and SQLGetDiagRec return SQL_SUCCESS if there is an error to report (not SQL_ERROR), and
SQL_NO_DATA_FOUND if there are no more errors to report.
Example
Example 1
// ODBC 2.0
RETCODE rc;
HENV env;
HDBC dbc;
HSTMT stmt;
SDWORD err_native;
UCHAR err_state[6];
UCHAR err_msg[ 512 ];
SWORD err_ind;
rc = SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
if( rc == SQL_ERROR )
{
printf( "Allocation failed\n" );
for(;;)
{
rc = SQLError( env, dbc, SQL_NULL_HSTMT, err_state, &err_native,
err_msg, sizeof(err_msg), &err_ind );
if( rc < SQL_SUCCESS ) break;
if( rc == SQL_NO_DATA_FOUND ) break;
printf( "[%s:%d] %s\n", err_state, err_native, err_msg );
}
return;
}
rc = SQLExecDirect( stmt,
"DELETE FROM SalesOrderItems WHERE ID=2015",
SQL_NTS );
if( rc == SQL_ERROR )
{
printf( "Failed to delete items\n" );
for(;;)
{
rc = SQLError( env, dbc, stmt, err_state, &err_native,
err_msg, sizeof(err_msg), &err_ind );
if( rc < SQL_SUCCESS ) break;
if( rc == SQL_NO_DATA_FOUND ) break;
printf( "[%s:%d] %s\n", err_state, err_native, err_msg );
}
return;
}
Example 2
// ODBC 3.0
SQLRETURN rc;
SQLHDBC dbc;
SQLHSTMT stmt;
SQLSMALLINT rec;
SQLINTEGER err_native;
SQLCHAR err_state[6];
SQLCHAR err_msg[ 512 ];
SQLSMALLINT err_ind;
rc = SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
if( rc == SQL_ERROR )
{
printf( "Failed to allocate handle\n" );
for( rec = 1; ; rec++ )
{
rc = SQLGetDiagRec( SQL_HANDLE_DBC, dbc, rec, err_state, &err_native,
err_msg, sizeof(err_msg), &err_ind );
if( rc < SQL_SUCCESS ) break;
if( rc == SQL_NO_DATA_FOUND ) break;
printf( "[%s:%d] %s\n", err_state, err_native, err_msg );
}
 return;
}
The database server supports a mechanism for executing Java classes from within the database environment.
Using Java methods from the database server provides powerful ways of adding programming logic to a
database.
● Reuse Java components in the different layers of your application (client, middle-tier, or server) and use
them wherever it makes the most sense to you.
● Java provides a more powerful language than the SQL stored procedure language for building logic into the
database.
● Java can be used in the database server without jeopardizing the integrity, security, or robustness of the
database and the server.
Java in the database is based on the SQLJ Part 1 proposed standard (ANSI/INCITS 331.1-1999). SQLJ Part 1
provides specifications for calling Java static methods as SQL stored procedures and functions.
In this section:
SQL stored procedure syntax is extended to permit the calling of Java methods from SQL.
An external Java VM runs your Java code on behalf of the database server.
You can access data from Java.
The use of Java does not alter the behavior of existing SQL statements or other aspects of non-Java
relational database behavior.
Java provides several features that make it ideal for use in database applications:
The Java language is more powerful than SQL. Java is an object-oriented language, so its instructions (source
code) come in the form of classes. To execute Java in a database, you write the Java instructions outside the
database and compile them outside the database into compiled classes (byte code), which are binary files
holding Java instructions.
Compiled classes can be called from client applications as easily and in the same way as stored procedures.
Java classes can contain both information about the subject and some computational logic. For example, you
could design, write, and compile Java code to create an Employees class complete with various methods that
perform operations on an Employees table. You install your Java classes as objects into a database and write
SQL cover functions or procedures to invoke the methods in the Java classes.
The database server provides a runtime environment for Java classes, not a Java development environment.
You need a Java development environment, such as the Java Development Kit (JDK), to write and compile Java.
You also need a Java Runtime Environment to execute Java classes.
You can use many of the classes that are part of the Java API as included in the Java Development Kit. You can
also use classes created and compiled by Java developers.
The database server launches a Java VM. The Java VM interprets compiled Java instructions and runs them on
behalf of the database server. The database server starts the Java VM automatically when needed: you do not
have to take any explicit action to start or stop the Java VM.
The SQL request processor in the database server has been extended so it can call into the Java VM to execute
Java instructions. It can also process requests from the Java VM to enable data access from Java.
Related Information
Errors in Java applications generate an exception object representing the error (called throwing an exception).
A thrown exception terminates a Java program unless it is caught and handled properly at some level of the
application.
Both Java API classes and custom-created classes can throw exceptions. Users can also define their own
exception classes to represent application-specific errors.
If there is no exception handler in the body of the method where the exception occurred, then the search for an
exception handler continues up the call stack. If the top of the call stack is reached and no exception handler
has been found, the default exception handler of the Java interpreter running the application is called and the
program terminates.
If a SQL statement calls a Java method, and an unhandled exception is thrown, a SQL error is generated. The
full text of the Java exception plus the Java stack trace is displayed in the server messages window.
This tutorial describes the steps involved in creating Java methods and calling them from SQL.
Prerequisites
Context
It shows you how to compile and install a Java class into the database to make it available for use. It also shows
you how to access the class and its members and methods from SQL statements.
It is assumed that you have a Java Development Kit (JDK) installed, including the Java compiler (javac) and
Java VM.
Source code and batch files for the sample are provided in %SQLANYSAMP17%\SQLAnywhere\JavaInvoice.
Write Java code and compile it as the first step to using Java in the database.
Prerequisites
Install a Java Development Kit (JDK), including the Java compiler (javac) and a Java Runtime Environment
(JRE).
You must have the roles and privileges listed at the beginning of this tutorial.
Context
The database server uses the CLASSPATH environment variable to locate a file during the installation of
classes.
Procedure
1. Open a command prompt and change to the sample directory using the following command:
cd %SQLANYSAMP17%\SQLAnywhere\JavaInvoice
2. Compile the Java source code example using the following command:
javac Invoice.java
3. This step is optional. Before starting the database server, make sure that the location of your compiled
class file is included in the CLASSPATH environment variable. It is the CLASSPATH of the database server
that is used, not the CLASSPATH of the client running Interactive SQL. Here is an example:
SET CLASSPATH=%SQLANYSAMP17%\SQLAnywhere\JavaInvoice
Results
The javac command creates a class file that can be installed into the database.
Next Steps
Set up the database server to locate a Java Virtual Machine (VM). Since you can specify a different Java VM for
each database, the ALTER EXTERNAL ENVIRONMENT statement can be used to indicate the location (path) of
the Java VM.
Prerequisites
You must have the roles and privileges listed at the beginning of this tutorial.
Context
If you do not have a Java Runtime Environment (JRE) installed, you can install and use any Java JRE as long as
it is version 1.6 or later (JRE 6 or later). Most Java installers set up one of the JAVA_HOME or JAVAHOME
environment variables. If neither of these environment variables exist, you can create one manually, and point it
to the root directory of your Java VM. However, this configuration is not required if you use the ALTER
EXTERNAL ENVIRONMENT statement.
Procedure
1. Use Interactive SQL to start the personal database server and connect to the sample database.
If the location of the Java VM is specified using the LOCATION clause of the ALTER EXTERNAL
ENVIRONMENT JAVA statement and the location specified is incorrect, then the database server will not
load the Java VM.
START JAVA;
This statement attempts to preload the Java VM. If the database server is not able to locate and start the
Java VM, then an error message is issued. This statement is optional since the database server
automatically loads the Java VM when it is required.
Results
The LOCATION clause of the ALTER EXTERNAL ENVIRONMENT JAVA statement indicates the location of the
Java VM. The START JAVA statement loads the Java VM.
Next Steps
Install Java classes into a database so that they can be used from SQL.
Prerequisites
You must have the roles and privileges listed at the beginning of this tutorial.
Context
The database server uses the class path defined by the -cp database server option and the java_class_path
database option to locate a file during the installation of classes. If the file listed in the INSTALL JAVA statement
is located in a directory or ZIP file specified by the database server's class path, the server successfully locates
the file and installs the class.
Procedure
Use Interactive SQL to execute a statement like the following. The path to the location of your compiled class
file is not required if it can be located using the database server's class path. The path, if specified, must be
accessible to the database server.
If an error occurs as a result of this step, check that your Java VM path is set correctly. The following
statement returns the path to the Java VM executable that the database server will use.
SELECT db_property('JavaVM');
If this result is NULL, then you do not have your path set correctly. If you set the path using the ALTER
EXTERNAL ENVIRONMENT JAVA LOCATION statement, then you can determine the current setting as follows:
If the path is not set or it is not set correctly then return to step 2 of the previous lesson.
Results
Next Steps
Related Information
Create stored procedures or functions that act as wrappers that call the Java methods in the class.
Prerequisites
You must have the roles and privileges listed at the beginning of this tutorial.
Context
The Java class file containing the compiled methods from the Invoice example has been loaded into the
database.
1. Create the following SQL stored procedure to call the Invoice.main method in the sample class:
If you examine the database server message log, you see the message "Hello to you" written there. The
database server has redirected the output there from System.out.
3. The following stored procedures illustrate how to pass arguments to and retrieve return values from the
Java methods in the Invoice class. If you examine the Java source code, you see that the init method of
the Invoice class takes both string and double arguments. String arguments are specified using Ljava/
lang/String;. Double arguments are specified using D. The method returns void and this is specified
using V after the right parenthesis.
4. The following functions call Java methods that take no arguments and return a double (D) or a string
(Ljava/lang/String;)
6. The following SELECT statement calls several of the other methods in the Invoice class:
Results
You have created stored procedures or functions that act as wrappers for the methods in the Java class. These
lessons have taken you through the steps involved in writing Java methods and calling them from SQL.
You can install Java classes into a database as a single class or a JAR.
A single class
You can install a single class into a database from a compiled class file. Class files typically have
extension .class.
A JAR
You can install a set of classes all at once if they are in either a compressed or uncompressed JAR file. JAR
files typically have the extension .jar or .zip. The database server supports all compressed JAR files
created with the JAR utility, and some other JAR compression schemes.
In this section:
The first step to using Java in the database is to create a Java application or class file.
Although the details may differ depending on whether you are using a Java development tool, creating your
own class generally involves the following steps:
Make your Java class available within the database by installing the class into the database.
Prerequisites
To install a class, you must have the MANAGE ANY EXTERNAL OBJECT system privilege.
You must know the path and file name of the class file you want to install.
Procedure
1. Use the SQL Central SQL Anywhere 17 plug-in to connect to the database.
2. Open the External Environments folder.
3. Under this folder, open the Java folder.
Results
The class is installed into the database and is ready for use.
Install the JAR file into the database to make it available within the database.
Prerequisites
To install a JAR, you must have the MANAGE ANY EXTERNAL OBJECT system privilege.
You must know the path and file name of the JAR file you want to install. A JAR file can have the extension JAR
or ZIP. Each JAR file must have a name in the database. Usually, you use the same name as the JAR file, without
the extension. For example, if you install a JAR file named myjar.zip, you would generally give it a JAR name
of myjar.
Context
It is useful and common practice to collect sets of related classes together in packages, and to store one or
more packages in a JAR file.
Procedure
A JAR file has been installed into the database and is ready for use.
Replace classes and JAR files with updated copies by using SQL Central.
Prerequisites
To update a class or JAR, you must have the MANAGE ANY EXTERNAL OBJECT system privilege.
You must have a newer version of the compiled class file or JAR file available.
Context
Only new connections established after installing the class, or that use the class for the first time after installing
the class, use the new definition. Once the Java VM loads a class definition, it stays in memory until the
connection closes. If you have been using a Java class or objects based on a class in the current connection,
disconnect and reconnect to use the new class definition.
Procedure
1. Use the SQL Central SQL Anywhere 17 plug-in to connect to the database.
2. Open the External Environments folder.
3. Under this folder, open the Java folder.
4. Locate the subfolder containing the class or JAR file you want to update.
5. Click the class or JAR file and then click File > Update.
6. In the Update window, specify the location and name of the class or JAR file to be updated. You can click
Browse to search for it.
Results
You can also update a Java class or JAR file by right-clicking the class or JAR file name and choosing Update.
As well, you can update a Java class or JAR file by clicking Update Now on the General tab of its Properties
window.
The following material describes features of Java classes when used in the database.
In this section:
You typically start Java applications (outside the database) by running the Java VM on a class that has a main
method.
java Invoice
Perform the following steps to call the main method of a class written in Java:
import java.io.*;
public class JavaClass
{
public static void main( String[] args )
{
for ( int i = 0; i < args.length; i++ )
System.out.println( args[i] );
}
}
INSTALL JAVA
NEW
FROM FILE 'C:\\temp\\JavaClass.class';
Due to the limitations of the SQL language, only a single string can be passed.
Check the database server messages window for a "hello" message from the Java application.
With the features of the java.lang.Thread class, you can use multiple threads in a Java application.
You can synchronize, suspend, resume, interrupt, or stop threads in Java applications.
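As a minimal illustration (the class name and the worker's output are invented for this sketch), the following program starts a worker thread and uses join to wait for it to finish:

```java
public class ThreadDemo
{
    // Starts a worker thread, waits for it to complete,
    // and returns what the worker produced.
    public static String runWorker()
    {
        StringBuilder log = new StringBuilder();
        Thread worker = new Thread( () -> log.append( "done" ) );
        worker.start();
        try {
            worker.join();   // block until the worker thread finishes
        } catch( InterruptedException e ) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }

    public static void main( String[] args )
    {
        System.out.println( runWorker() );
    }
}
```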
If you supply an incorrect number of arguments when calling a Java method, or if you use an incorrect data
type, the Java VM responds with a java.lang.NoSuchMethodException error. Check the number and type
of arguments.
Related Information
Write a Java method that returns a result set to the calling environment, and wrap this method in a SQL stored
procedure declared with EXTERNAL NAME and LANGUAGE JAVA clauses.
Perform the following tasks to return result sets from a Java method:
1. Ensure that the Java method is declared as public and static in a public class.
2. For each result set you expect the method to return, ensure that the method has a parameter of type
java.sql.ResultSet[]. These result set parameters must all occur at the end of the parameter list.
3. In the method, first create an instance of java.sql.ResultSet and then assign it to one of the ResultSet[]
parameters.
4. Create a SQL stored procedure using EXTERNAL NAME and LANGUAGE JAVA clauses. This type of
procedure is a wrapper around a Java method. You can use a cursor on the SQL procedure result set in the
same way as any other procedure that returns result sets.
Example
The following simple class has a single method that executes a query and passes the result set back to the
calling environment.
import java.sql.*;
public class MyResultSet
{
    public static void return_rset( ResultSet[] rset1 )
        throws SQLException
    {
        Connection conn = DriverManager.getConnection(
            "jdbc:default:connection" );
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery(
            "SELECT Surname " +
            "FROM Customers" );
        rset1[0] = rset;
    }
}
You can expose the result set using a CREATE PROCEDURE statement that indicates the number of result sets
returned from the procedure and the signature of the Java method.
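For example, the following sketch (the procedure name and result column size are illustrative) declares a wrapper for the return_rset method shown above; the DYNAMIC RESULT SETS clause gives the number of result sets and the EXTERNAL NAME clause gives the Java method signature:

```sql
CREATE PROCEDURE MyResultSetProc()
  RESULT( surname CHAR(50) )
  DYNAMIC RESULT SETS 1
  EXTERNAL NAME 'MyResultSet.return_rset([Ljava/sql/ResultSet;)V'
  LANGUAGE JAVA;
```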
You can open a cursor on this procedure, just as you can with any SQL procedure returning result sets.
You can use stored procedures created using the EXTERNAL NAME and LANGUAGE JAVA clauses as wrappers
around Java methods. There is a special technique for returning parameter data from your Java method to SQL
OUT or INOUT parameters in the stored procedure.
Java does not have explicit support for INOUT or OUT parameters. Instead, you use a single-element array of
the corresponding type. For example, to use an integer OUT parameter, create an array of exactly one integer:
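The following minimal sketch (class, method, and value are illustrative) shows the pattern. The method receives a one-element integer array; the value it writes into element zero becomes the OUT parameter value, and the boolean return value becomes the method's result:

```java
public class TestOut
{
    // The single-element array stands in for an integer OUT parameter:
    // whatever is written to param[0] is passed back to the caller.
    public static boolean testOut( int[] param )
    {
        param[0] = 123;
        return true;
    }

    public static void main( String[] args )
    {
        int[] out = new int[1];
        boolean ok = testOut( out );
        System.out.println( ok + " " + out[0] );
    }
}
```

A SQL wrapper for such a method would declare the corresponding parameter as OUT INTEGER and reference the method through its EXTERNAL NAME signature.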
The string ([I)Z is a Java method signature, indicating that the method has a single parameter, which is an
array of integers, and returns a Boolean value. Define the method so that the method parameter you want to
use as an OUT or INOUT parameter is an array of a Java data type that corresponds to the SQL data type of the
OUT or INOUT parameter.
Java provides security managers that you can use to control user access to security-sensitive features of your
applications, such as file access and network access.
The Java VM loads automatically whenever the first Java operation is carried out. You can use the START JAVA
and STOP JAVA statements to manually start and stop the Java VM.
To load the Java VM explicitly in readiness for carrying out Java operations, you can do so by executing the
following statement:
START JAVA;
You can unload the Java VM when Java is not in use using the STOP JAVA statement. The syntax is:
STOP JAVA;
The built-in Java VM ClassLoader, which is used to provide Java in the database support, allows applications
to install shutdown hooks.
These shutdown hooks are similar to the shutdown hooks that applications install with the JVM Runtime. When
a connection that is using Java in the database support executes a STOP JAVA statement or disconnects, the
ClassLoader for that connection runs all shutdown hooks that have been installed for that particular
connection prior to unloading. For regular Java in the database applications that install all Java classes within
the database, the installation of shutdown hooks should not be necessary. The ClassLoader shutdown hooks
should be used with extreme caution and should only be used to clean up any system-wide resources that were
allocated for the particular connection that is stopping Java. Also, jdbc:default JDBC requests are not allowed
within shutdown hooks since the jdbc:default connection is already closed prior to the ClassLoader shutdown
hook being called.
To install a shutdown hook with the Java VM ClassLoader, an application must include sajvm.jar in the Java
classpath and it needs to execute code similar to the following:
Note that the SDHookThread class extends the standard Thread class, and that the above code must be
executed by a class that was loaded by the ClassLoader for the current connection. Any class that is installed
within the database and that is later called via an external environment call is automatically executed by the
correct Java VM ClassLoader.
The SQL Anywhere XS JavaScript driver can be used to connect to SQL Anywhere databases, issue SQL
queries, and obtain result sets.
The SQL Anywhere XS JavaScript driver allows users to interact with the database from the JavaScript
environment. The SQL Anywhere XS API is based on the SAP HANA XS JavaScript Database API provided by
the SAP HANA XS engine. Drivers are available for various versions of Node.js.
The XS JavaScript driver also supports server-side applications written in JavaScript using SQL Anywhere
external environment support.
The SQL Anywhere XS JavaScript API reference is available in the SQL Anywhere XS JavaScript API Reference
at https://help.sap.com/viewer/78bac38a62a2496996d5a27467561290/LATEST/en-US.
Note
In addition to the XS JavaScript driver, a lightweight, minimally-featured Node.js driver is available that can
handle small result sets quite well. Node.js application programming using this driver is described
elsewhere in the documentation. The lightweight driver is perfect for simple web applications that need a
quick and easy connection to the database server to retrieve small result sets. However, if your JavaScript
application needs to handle large results, have greater control, or have access to a fuller application
programming interface, then you should use the XS JavaScript driver.
Node.js must be installed on your computer and the folder containing the Node.js executable should be
included in your PATH. Node.js software is available at nodejs.org .
For Node.js to locate the driver, make sure that the NODE_PATH environment variable includes the location of
the XS JavaScript driver. The following is an example for Microsoft Windows:
SET NODE_PATH=%SQLANY17%\Node
Note
On macOS 10.11 or a later version, set the SQLANY_API_DLL environment variable to the full path for
libdbcapi_r.dylib.
The following illustrates a simple Node.js application that uses the XS JavaScript driver.
This program connects to the sample database, executes a SQL SELECT statement, and displays only the first
three columns of the result set, namely the ID, GivenName, and Surname of each customer. It then disconnects
from the database.
Suppose this JavaScript code was stored in the file xs-sample.js. To run this program, open a command
prompt and execute the following statement. Make sure that the NODE_PATH environment variable is set
appropriately.
node xs-sample.js
The following JavaScript example illustrates the use of prepared statements and batches. It creates a database
table and populates it with the first 100 positive integers.
The following JavaScript example illustrates the use of prepared statements and exception handling. The
getcustomer function returns a hash corresponding to the table row for the specified customer, or an error
message.
Related Information
nodejs.org
JDBC is a call-level interface for Java applications. JDBC provides you with a uniform interface to a wide range
of relational databases, and provides a common base on which higher level tools and interfaces can be built.
The software also supports the use of jConnect, a pure Java JDBC driver available from SAP.
In addition to using JDBC as a client-side application programming interface, you can also use JDBC inside the
database server to access data by using the Java in the database feature.
In this section:
You can develop Java applications that use the JDBC API to connect to a database. Several of the applications
included with the database software use JDBC.
JDBC can be used both from client applications and inside the database. Java classes using JDBC provide a
more powerful alternative to SQL stored procedures for incorporating programming logic into the database.
JDBC provides a SQL interface for Java applications: you access relational data from Java by using JDBC
calls.
The phrase client application applies both to applications running on a user's computer and to logic running
on a middle-tier application server.
The examples illustrate the distinctive features of JDBC applications. For more information about JDBC
programming, see any JDBC programming book.
Java client applications can make JDBC calls to a database server. The connection takes place through a
JDBC driver.
The SQL Anywhere JDBC driver, which is a Type 2 JDBC driver, is included with the software. The jConnect
driver for pure Java applications, which is a Type 4 JDBC driver, is also supported.
JDBC in the database
JDBC resources
You can find source code for the examples in the directory %SQLANYSAMP17%\SQLAnywhere\JDBC.
Required software
Related Information
The SQL Anywhere JDBC driver and the jConnect driver are supported.
SQL Anywhere JDBC driver
This driver communicates with the database server using the Command Sequence client/server protocol.
Its behavior is consistent with ODBC, Embedded SQL, and OLE DB applications. The SQL Anywhere JDBC
driver is the recommended JDBC driver for connecting to databases. The driver can be used only with JRE
1.6 or later.
The driver performs automatic JDBC driver registration with the Java VM. It is therefore sufficient to have
the sajdbc4.jar file in the class file path and simply call DriverManager.getConnection() with a URL that
begins with jdbc:sqlanywhere.
The JDBC driver contains manifest information to allow it to be loaded as an OSGi (Open Services Gateway
initiative) bundle.
With the SQL Anywhere JDBC driver, metadata for NCHAR data now returns the column type as
java.sql.Types.NCHAR, NVARCHAR, or LONGNVARCHAR. In addition, applications can now fetch NCHAR
data using the Get/SetNString or Get/SetNClob methods instead of the Get/SetString and Get/SetClob
methods.
jConnect
This driver is a 100% pure Java driver. It communicates with the database server using the TDS client/
server protocol.
Features
The SQL Anywhere JDBC driver provides fully scrollable cursors when connected to a database. The
jConnect JDBC driver provides scrollable cursors when connected to a database, but the result set is
cached on the client side. The jConnect JDBC driver provides fully scrollable cursors when connected to an
Adaptive Server Enterprise database.
Pure Java
The jConnect driver is a Type 4 driver and hence a pure Java solution. The SQL Anywhere JDBC driver is a
Type 2 driver and hence is not a pure Java solution.
Note
The SQL Anywhere JDBC driver does not load the SQL Anywhere ODBC driver and hence is not a Type
1 driver.
Performance
The SQL Anywhere JDBC driver provides better performance for most purposes than the jConnect driver.
Compatibility
The TDS protocol used by the jConnect driver is shared with Adaptive Server Enterprise. Some aspects of
the driver's behavior are governed by this protocol, and are configured to be compatible with Adaptive
Server Enterprise.
Related Information
The SQL Anywhere JDBC driver is recommended for its performance and feature benefits when compared to
the pure Java jConnect JDBC driver. However, the SQL Anywhere JDBC driver does not provide a pure-Java
solution.
In this section:
The following command adds the SQL Anywhere JDBC driver to an existing CLASSPATH environment variable:
set classpath=%SQLANY17%\java\sajdbc4.jar;%classpath%
The JDBC driver takes advantage of automatic driver registration (a JDBC 4.0 feature). The driver is
automatically loaded at execution startup when it is in the class file path.
Required Files
The Java component of the JDBC driver is included in the sajdbc4.jar file installed into the Java
subdirectory of your software installation. For Windows, the native component is dbjdbc17.dll in the bin32
or bin64 subdirectory of your software installation; for UNIX and Linux, the native component is
libdbjdbc17.so. This component must be in the system path. For macOS 10.11 or a later version, the
java.library.path must include the full path to libdbjdbc17.dylib. For example: java -Djava.library.path=
$SQLANY17/lib64.
Related Information
To connect to a database via the SQL Anywhere JDBC driver, supply a URL for the database.
The URL contains jdbc:sqlanywhere: followed by a connection string. If the sajdbc4.jar file is in your class
file path, then the SQL Anywhere JDBC driver is loaded automatically and handles the URL. An ODBC data
source (DSN) may be specified for convenience, but you can also use explicit connection parameters,
separated by semicolons, in addition to or instead of the data source connection parameter, as shown in the
following example.
Connection con =
    DriverManager.getConnection( "jdbc:sqlanywhere:UserID=DBA;Password=passwd;Start=..." );
The Driver connection parameter is not required since neither the ODBC driver nor ODBC driver manager is
used. If present, it is ignored.
The following is another example in which a connection is made to the database SalesDB on the server Acme
running on the host computer Elora using TCP/IP port 2638.
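A connection URL for that scenario could be assembled as follows. This is a sketch: the credentials are placeholders, and the Host, ServerName, and DatabaseName parameter names follow common SQL Anywhere connection-string conventions.

```java
public class ConnectSalesDB
{
    // Builds a SQL Anywhere JDBC URL for the SalesDB database on the
    // server Acme, running on host Elora, TCP/IP port 2638.
    public static String buildUrl()
    {
        return "jdbc:sqlanywhere:UserID=DBA;Password=passwd;"
             + "Host=Elora:2638;ServerName=Acme;DatabaseName=SalesDB";
    }

    public static void main( String[] args )
    {
        // The URL would then be passed to DriverManager.getConnection().
        System.out.println( buildUrl() );
    }
}
```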
To use JDBC from an applet, you must use the jConnect JDBC driver to connect to a database.
The jConnect driver is available as a separate download from the SAP Software Download Center on the SAP
Service Marketplace. Search for the SDK FOR SAP ASE. Documentation for jConnect is included in the install.
This link may help you locate the driver: http://sqlanywhere-forum.sap.com/questions/23450/jconnect-software-developer-kit-download.
jConnect is supplied as a JAR file named jconn4.jar. This file is located in your jConnect install location.
For your application to use jConnect, the jConnect classes must be in your class file path at compile time and
run time, so that the Java compiler and Java runtime can locate the necessary files.
The following command adds the jConnect driver to an existing CLASSPATH environment variable (where
jconnect-path is your jConnect installation directory).
set classpath=jconnect-path\classes\jconn4.jar;%classpath%
The classes in jConnect are all in com.sybase.jdbc4.jdbc. You can import these classes at the beginning of each
source file if required:
import com.sybase.jdbc4.jdbc.*;
Passwords can be encrypted for jConnect connections. The following example illustrates this.
In this section:
Related Information
Downloading jConnect
Add the jConnect system objects to your database so that you can use jConnect to access system table
information (database metadata).
Prerequisites
You must have the ALTER DATABASE, BACKUP DATABASE, and SERVER OPERATOR system privileges, and
must be the only connection to the database.
Context
jConnect system objects are installed into a database by default when you use the dbinit utility. You can add the
jConnect system objects to the database when creating the database or at a later time by upgrading the
database. You can upgrade a database from SQL Central as follows.
Procedure
Results
Alternatively, use the Upgrade utility (dbupgrad) to connect to your database and install the jConnect system
objects to the database. The following is an example.
dbupgrad -c "UID=DBA;PWD=passwd;SERVER=myServer;DBN=myDatabase"
Ensure that the jConnect driver is in your class file path. The driver file jconn4.jar is located in the classes
subdirectory of your jConnect installation.
set classpath=.;c:\jConnect-7_0\classes\jconn4.jar;%classpath%
The driver is automatically loaded at execution startup when it is in the class file path.
To connect to a database via jConnect, supply a URL of the following form:
jdbc:sybase:Tds:host:port
The URL must start with jdbc:sybase:Tds. The remaining elements are as follows.
host
The IP address or name of the computer on which the server is running. If you are establishing a same-host
connection, you can use localhost, which means the computer system you are logged into.
port
The port number on which the database server listens. The default port number used by the database
server is 2638.
If you are using the personal server, make sure to include the TCP/IP support option when starting the server.
In this section:
Each database server can have one or more databases loaded at a time. If the URL you supply when connecting
via jConnect specifies a server, but does not specify a database, then the connection attempt is made to the
default database on the server.
You can specify a particular database by providing an extended form of the URL in one of the following ways.
jdbc:sybase:Tds:host:port?ServiceName=database
The question mark followed by a series of assignments is a standard way of providing arguments to a URL. The
case of ServiceName is not significant, and there must be no spaces around the = sign. The database
parameter is the database name, not the server name. The database name must not include the path or file
suffix. For example:
This technique allows you to provide additional connection parameters such as the database name, or a
database file, using the RemotePWD field. You set RemotePWD as a Properties field using the put method.
import java.util.Properties;
.
.
.
Properties props = new Properties();
props.put( "User", "DBA" );
props.put( "Password", "passwd" );
props.put( "RemotePWD", ",DatabaseFile=mydb.db" );
As shown in the example, a comma must precede the DatabaseFile connection parameter. Using the
DatabaseFile parameter, you can start a database on a server using jConnect. By default, the database is
started with AutoStop=YES. If you specify utility_db with a DatabaseFile (DBF) or DatabaseName (DBN)
connection parameter (for example, DBN=utility_db), then the utility database is started automatically.
When an application connects to the database using the jConnect driver, the sp_tsql_environment stored
procedure is called. The sp_tsql_environment procedure sets some database options for compatibility with
Adaptive Server Enterprise behavior.
JDBC applications typically connect to a database, execute one or more SQL statements, process result sets,
and then disconnect from the database.
The getConnection method of the DriverManager class creates a Connection object, and establishes a
connection with a database.
Create a Statement object
The execute method of the Statement object executes a SQL statement within the database environment.
If one or more result sets are available, the boolean result true is returned.
Process one or more result sets
The getResultSet and getMoreResults methods of the Statement object are used to obtain result sets.
The ResultSet object is used to obtain the data returned from the SQL statement, one row at a time.
Loop over the rows of each result set
The next method of the ResultSet object is used to position to the next available row in the result set. When
no more rows are available, the boolean result false is returned.
For each row, retrieve the values
Values are retrieved for each column in the ResultSet object by identifying either the name or position of
the column. You can use the getString method to get the value from a column on the current row.
Release JDBC objects at the appropriate time
The close method of each JDBC object releases resources to the system.
import java.io.*;
import java.sql.*;
public class results
{
    public static void main(String[] args) throws Exception
    {
        Connection conn = null;
        try
        {
            conn = DriverManager.getConnection(
                "jdbc:sqlanywhere:uid=DBA;pwd=sql" );
            String SQL = "BEGIN\n"
Java objects can use JDBC objects to interact with a database and get data for their own use.
There are some minor differences between client-side and server-side JDBC applications.
Client-side
When using JDBC from a client computer, a connection is established using either the SQL Anywhere JDBC
driver or the jConnect JDBC driver. Connection details such as user ID and password are passed as
arguments to DriverManager.getConnection, which establishes the connection. The database server runs
on the same or some other computer system. The JDBC application is contained in one or more class files
that are accessible from the client computer.
Server-side
When using JDBC from a database server, a connection already exists. The string
"jdbc:default:connection" is passed to DriverManager.getConnection, which allows the JDBC application to
work within the current user connection.
You can write JDBC classes so that they can run both at the client and at the server by employing a single
conditional statement for constructing the URL. An external connection requires connection information such
as user ID, password, host name, and port number, while the internal connection requires only
"jdbc:default:connection".
A typical JDBC application connects to a database server, issues SQL queries, processes multiple result sets,
and then terminates.
The following complete Java application connects to a running database, issues a SQL query, processes and
displays multiple result sets, and then terminates. It uses the SQL Anywhere JDBC driver by default to
connect to the database. To use the jConnect driver, you pass in a database user ID and password, the driver
name (jConnect), and an optional SQL query (enclosed in quotation marks) on the command line.
Database metadata is always available when using the SQL Anywhere JDBC driver.
To access database system tables (database metadata) from a JDBC application that uses jConnect, you must
add a set of jConnect system objects to your database. These procedures are installed to all databases by
default. The dbinit -i option prevents this installation.
This example assumes that a database server has already been started using the sample database. The source
code for this example can be found in %SQLANYSAMP17%\SQLAnywhere\JDBC\JDBCConnect.java.
import java.io.*;
import java.sql.*;
public class JDBCConnect
{
    public static void main( String args[] )
    {
        try
        {
            String userID = "";
            String password = "";
            String driver = "jdbc4";
            String SQL =
                "BEGIN"
                + " SELECT * FROM Departments"
                + " ORDER BY DepartmentID;"
                + " SELECT d.DepartmentID, GivenName, Surname, EmployeeID"
                + " FROM Employees e"
                + " JOIN Departments d"
                + " ON DepartmentHeadID = EmployeeID"
                + " ORDER BY d.DepartmentID;"
                + "END";
            if( args.length > 0 ) userID = args[0];
            if( args.length > 1 ) password = args[1];
            if( args.length > 2 ) driver = args[2];
            if( args.length > 3 ) SQL = args[3];
            Connection con;
            if( driver.compareToIgnoreCase( "jconnect" ) == 0 )
            {
In this section:
This JDBC application connects to a database server, issues SQL queries, processes multiple result sets, and
then terminates.
Importing Packages
The application requires a couple of packages, which are imported in the first lines of JDBCConnect.java:
● The java.io package contains the Java input/output classes, which are required for printing to the
command prompt window.
● The java.sql package contains the JDBC classes, which are required for all JDBC applications.
Application Structure
Each Java application requires a class with a method named main, which is the method invoked when the
program starts. In this simple example, JDBCConnect.main is the only public method in the application. The
JDBCConnect.main method carries out the following tasks:
1. Obtains a database user ID, password, JDBC driver name, and SQL query from the optional command-line
arguments. Depending on which driver is selected, the SQL Anywhere JDBC driver or the jConnect 7.0
driver is loaded if they are in the class file path.
2. Connects to a running database using the selected JDBC driver URL. The getConnection method
establishes a connection using the specified URL.
3. Creates a statement object, which is the container for the SQL statement.
4. Creates a result set object by executing a SQL query.
5. Iterates through the result set, printing the column information.
6. Checks for additional result sets and repeats the previous step if another result set is available.
7. Closes each of the result set, statement, and connection objects.
Compile and execute a sample JDBC application to learn the steps required to create a working JDBC
application.
Prerequisites
Context
Two different types of connections using JDBC can be made. One is the client-side connection and the other is
the server-side connection. The following example uses a client-side connection.
Procedure
1. At a command prompt, change to the directory containing the sample:
cd %SQLANYSAMP17%\SQLAnywhere\JDBC
2. Start a database server with the sample database on your local computer using the following command:
dbsrv17 "%SQLANYSAMP17%\demo.db"
3. Set the CLASSPATH environment variable. The SQL Anywhere JDBC driver is contained in sajdbc4.jar.
set classpath=.;%SQLANY17%\java\sajdbc4.jar
If you are using the jConnect driver instead, then set the CLASSPATH as follows (where jconnect-path is
your jConnect installation directory).
set classpath=.;jconnect-path\classes\jconn4.jar
4. Compile the class file:
javac JDBCConnect.java
5. Run the example:
java JDBCConnect
If the attempt to connect fails, an error message appears instead. Confirm that you have executed all the
steps as required. Check that your CLASSPATH is correct. An incorrect setting may result in a failure to
locate a class.
Results
A typical JDBC application connects to a database server, issues SQL queries, processes multiple result sets,
and then terminates.
The following complete Java application connects to a running database using a server-side connection, issues
a SQL query, processes and displays multiple result sets, and then terminates.
Establishing a connection from a server-side JDBC application is more straightforward than establishing an
external connection. Because the user is already connected to the database, the application simply uses the
current connection.
The source code for this example is a modified version of the JDBCConnect.java example and is located in
%SQLANYSAMP17%\SQLAnywhere\JDBC\JDBCConnect2.java.
import java.io.*;
import java.sql.*;
public class JDBCConnect2
{
    public static void main( String args[] )
    {
        try
        {
            String SQL =
                "BEGIN"
                + " SELECT * FROM Departments"
                + " ORDER BY DepartmentID;"
                + " SELECT d.DepartmentID, GivenName, Surname, EmployeeID"
                + " FROM Employees e"
                + " JOIN Departments d"
                + " ON DepartmentHeadID = EmployeeID"
                + " ORDER BY d.DepartmentID;"
                + "END";
            if( args.length > 0 && args[0] != null && args[0].length() > 0 )
                SQL = args[0];
            Connection con = DriverManager.getConnection(
                "jdbc:default:connection" );
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(SQL);
            while( rs != null )
            {
                while (rs.next())
                {
In this section:
The server-side JDBC application is almost identical to the sample client-side JDBC application, with the
following exceptions:
1. The user ID, password, and JDBC driver arguments are not required.
2. It connects to the default running database using the current connection. The URL in the getConnection
call has been changed to "jdbc:default:connection".
Compile and execute a sample JDBC application to learn the steps required to create a working server-side
JDBC application.
Prerequisites
Context
Two different types of connections using JDBC can be made. One is the client-side connection and the other is
the server-side connection. The following example uses a server-side connection.
Procedure
cd %SQLANYSAMP17%\SQLAnywhere\JDBC
2. For server-side JDBC, it is not necessary to set the CLASSPATH environment variable unless the server is
started from a different current working directory.
set classpath=.;%SQLANYSAMP17%\SQLAnywhere\JDBC
3. Start a database server with the sample database on your local computer using the following command:
dbsrv17 "%SQLANYSAMP17%\demo.db"
4. Compile the class file:
javac JDBCConnect2.java
5. Install the class into the sample database by executing the following statement:
INSTALL JAVA NEW FROM FILE 'JDBCConnect2.class';
You can also install the class using SQL Central. While connected to the sample database, open the Java
subfolder under External Environments and click File New Java Class . Then follow the instructions
in the wizard.
6. Define a stored procedure named JDBCConnect that acts as a wrapper for the JDBCConnect2.main
method in the class:
CREATE PROCEDURE JDBCConnect( IN args LONG VARCHAR )
EXTERNAL NAME 'JDBCConnect2.main([Ljava/lang/String;)V'
LANGUAGE JAVA;
7. Call the JDBCConnect2.main method as follows:
CALL JDBCConnect('');
The first time a Java class is called in a session, the Java VM must be loaded. This might take a few
seconds.
8. Confirm that some result sets appear in the database server messages window.
If the attempt to connect fails, an error message appears instead. Confirm that you have executed all the
steps as required.
9. Try executing a different SQL query as follows:
Some different result sets appear in the database server messages window.
Results
Be aware that there are differences between JDBC on the client side and on the server side. Aspects such as
autocommit behavior and isolation levels are described here.
Autocommit behavior
The JDBC specification requires that, by default, a COMMIT is performed after each data manipulation
statement. Currently, the client-side JDBC behavior is to commit (autocommit is true) and the server-side
behavior is to not commit (autocommit is false). To obtain the same behavior in both client-side and
server-side applications, you can use a statement such as the following:
con.setAutoCommit( false );
Isolation level
To set the transaction isolation level, the application must call the Connection.setTransactionIsolation
method with one of the following values.
● TRANSACTION_NONE
● TRANSACTION_READ_COMMITTED
● TRANSACTION_READ_UNCOMMITTED
● TRANSACTION_REPEATABLE_READ
● TRANSACTION_SERIALIZABLE
● sap.jdbc4.sqlanywhere.IConnection.SA_TRANSACTION_SNAPSHOT
● sap.jdbc4.sqlanywhere.IConnection.SA_TRANSACTION_STATEMENT_SNAPSHOT
● sap.jdbc4.sqlanywhere.IConnection.SA_TRANSACTION_STATEMENT_READONLY_SNAPSHOT
The following example sets the transaction isolation level to SNAPSHOT using the SQL Anywhere JDBC
driver.
try
{
    con.setTransactionIsolation(
        sap.jdbc4.sqlanywhere.IConnection.SA_TRANSACTION_SNAPSHOT );
}
catch( Exception e )
{
    System.err.println( "Error! Could not set isolation level" );
    System.err.println( e.getMessage() );
    printExceptions( (SQLException)e );
}
Connection defaults
Ensure that closing a connection restores the connection properties to their default values, so that
subsequent connections are obtained with standard JDBC values. The following code achieves this:
Connection con =
    DriverManager.getConnection( "jdbc:default:connection" );
boolean oldAutoCommit = con.getAutoCommit();
try
{
    // main body of code here
}
finally
{
    con.setAutoCommit( oldAutoCommit );
}
This discussion applies not only to autocommit, but also to other connection properties such as
transaction isolation level and read-only mode.
Connection failure using the SQL Anywhere JDBC driver
Related Information
Database transaction logic implemented as Java methods that can be called from SQL can offer significant
advantages over traditional SQL stored procedures.
The interface to a Java method is implemented using specialized SQL stored procedure definition syntax. Calls
to Java methods, including those that use JDBC, closely parallel calls to SQL stored procedures that are
comprised entirely of SQL statements. Although the following topics demonstrate how to use JDBC from the
database (server-side JDBC), the examples also demonstrate how to write JDBC for a client-side application.
As with other programming interfaces, SQL statements in JDBC can be either static or dynamic. Static SQL
statements are constructed in the Java application and sent to the database. The database server parses the
statement, selects an execution plan, and executes the statement.
If the same or similar SQL statement is executed many times (many inserts into one table, for example), there
can be significant overhead when using static SQL because the statement preparation step has to be executed
each time.
In contrast, a dynamic SQL statement contains placeholders. The statement, prepared once using these placeholders, can be executed many times without the additional expense of preparing it each time.
The following topics provide examples of both static and dynamic SQL statement execution as well as
execution of batches (wide inserts, deletes, updates, and merges).
In this section:
Using Static INSERT and DELETE Statements from JDBC
A sample JDBC application is called from the database server to insert and delete rows in the
Departments table using static SQL statements.
How to Use Prepared Statements for More Efficient Access
If you use the Statement interface, you parse each statement that you send to the database, generate
an access plan, and execute the statement. The steps before execution are called preparing the
statement.
Prerequisites
To install a class, you must have the MANAGE ANY EXTERNAL OBJECT system privilege.
Context
Procedure
javac JDBCExample.java
If the database server was not started from the same directory as the class file and the path to the class file
is not listed in the database server's class path, then you will have to include the path to the class file in the
INSTALL statement. The database server's class path is defined by the -cp database server option and the
java_class_path database option.
You can also install the class using SQL Central. While connected to the sample database, open the Java
subfolder under External Environments and click File New Java Class . Follow the instructions in the
wizard.
Results
The JDBCExample class file is installed in the database and ready for demonstration. This class file is used in
subsequent topics.
Static SQL statements such as INSERT, UPDATE, and DELETE, which do not return result sets, are executed
using the executeUpdate method of the Statement class. Statements, such as CREATE TABLE and other data
definition statements, can also be executed using executeUpdate.
The addBatch, clearBatch, and executeBatch methods of the Statement class may also be used. Because the
JDBC specification is unclear on the behavior of the executeBatch method of the Statement class, the
following notes should be considered when using this method with the SQL Anywhere JDBC driver:
● Processing of the batch stops immediately upon encountering a SQL exception or result set. If processing
of the batch stops, then a BatchUpdateException is thrown by the executeBatch method. Calling the
getUpdateCounts method on the BatchUpdateException returns an integer array of row counts in which the counts prior to the batch failure contain valid non-negative update counts, while all counts at the point of the batch failure and beyond contain -1. Casting the BatchUpdateException to a SQLException provides additional details as to why batch processing was stopped.
● The batch is only cleared when the clearBatch method is explicitly called. As a result, calling the
executeBatch method repeatedly will re-execute the batch over and over again. In addition, calling
execute(sql_query) or executeQuery(sql_query) will correctly execute the specified SQL query, but will not
clear the underlying batch. Hence, calling the executeBatch method followed by execute(sql_query)
followed by the executeBatch method again will execute the set of batched statements, then execute the
specified SQL query, and then execute the set of batched statements again.
The following code fragment illustrates how to execute an INSERT statement. It uses a Statement object that
has been passed to the InsertStatic method as an argument.
Notes
● This code fragment is part of the JDBCExample.java file included in the %SQLANYSAMP17%\SQLAnywhere\JDBC directory.
● The executeUpdate method returns an integer that reflects the number of rows affected by the operation.
In this case, a successful INSERT would return a value of one (1).
● When run as a server-side class, the output from System.out.println goes to the database server
messages window.
A sample JDBC application is called from the database server to insert and delete rows in the Departments
table using static SQL statements.
Prerequisites
To create an external procedure, you must have the CREATE PROCEDURE and CREATE EXTERNAL
REFERENCE system privileges. You must also have SELECT, DELETE, and INSERT privileges on the database
object you are modifying.
Procedure
The example program displays the updated contents of the Departments table in the database server
messages window.
6. There is a similar method in the example class called DeleteStatic that shows how to delete the row that
has just been added. Call the JDBCExample.main method as follows:
Results
Rows are inserted and deleted from a table using static SQL statements in a server-side JDBC application. The
updated contents of the Departments table are displayed in the database server messages window.
Related Information
If you use the Statement interface, you parse each statement that you send to the database, generate an
access plan, and execute the statement. The steps before execution are called preparing the statement.
You can achieve performance benefits if you use the PreparedStatement interface. This allows you to prepare a
statement using placeholders, and then assign values to the placeholders when executing the statement.
Using prepared statements is particularly useful when carrying out many similar actions, such as inserting
many rows.
Example
The following example illustrates how to use the PreparedStatement interface, although inserting a single row
is not a good use of prepared statements.
The following InsertDynamic method of the JDBCExample class carries out a prepared statement:
● This code fragment is part of the JDBCExample.java file included in the %SQLANYSAMP17%\SQLAnywhere\JDBC directory.
● The executeUpdate method returns an integer that reflects the number of rows affected by the operation.
In this case, a successful INSERT would return a value of one (1).
● When run as a server-side class, the output from System.out.println goes to the database server
messages window.
Call a sample JDBC application from the database server to insert and delete rows in the Departments table
using prepared statements.
Prerequisites
To create an external procedure, you must have the CREATE PROCEDURE and CREATE EXTERNAL
REFERENCE system privileges. You must also have SELECT, DELETE, and INSERT privileges on the database
object you are modifying.
Procedure
The example program displays the updated contents of the Departments table in the database server
messages window.
Define a stored procedure named JDBCDelete that acts as a wrapper for the JDBCExample.Delete method
in the class:
Results
Rows are inserted and deleted from a table using prepared SQL statements in a server-side JDBC application.
The updated contents of the Departments table are displayed in the database server messages window.
Related Information
The addBatch method of the PreparedStatement class is used for performing batched (or wide) inserts,
deletes, updates, and merges. The following are some guidelines to using this method.
1. A SQL statement should be prepared using one of the prepareStatement methods of the Connection class.
2. The parameters for the prepared statement should be set and then added as a batch. The following outline
creates n batches with m parameters in each batch:
The following example creates 5 batches with 2 parameters in each batch. The first parameter is an integer and the second parameter is a string:
3. The batch must be executed using the executeBatch method of the PreparedStatement class.
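The steps above can be sketched as follows, for five batches of an integer and a string. The Departments column names and the inserted values are assumptions for illustration, and executing insertRows requires a live connection; buildRows is only a local helper used to stage the parameter rows:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class BatchInsertSketch
{
    // Build five (integer, string) parameter rows, as described above.
    static List<Object[]> buildRows()
    {
        List<Object[]> rows = new ArrayList<>();
        for( int i = 1; i <= 5; i++ )
        {
            rows.add( new Object[]{ 600 + i, "Department " + i } );
        }
        return rows;
    }

    // 1. Prepare the statement once.
    // 2. Set the parameters for each row and add the row as a batch.
    // 3. Execute the whole batch with executeBatch.
    static int[] insertRows( Connection con ) throws SQLException
    {
        try( PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO Departments( DepartmentID, DepartmentName ) VALUES( ?, ? )" ) )
        {
            for( Object[] row : buildRows() )
            {
                ps.setInt( 1, (Integer) row[0] );
                ps.setString( 2, (String) row[1] );
                ps.addBatch();
            }
            return ps.executeBatch();  // one update count per batched row
        }
    }
}
```

Preparing once and batching the parameter sets is what avoids the per-statement preparation cost described earlier.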
When using the SQL Anywhere JDBC driver to perform batched inserts, use a small column size. Using batched
inserts to insert large binary or character data into long binary or long varchar columns is not recommended
and may degrade performance. The performance can decrease because the JDBC driver must allocate large
amounts of memory to hold each of the batched insert rows.
In all other cases, batched inserts, deletes, updates, and merges should provide better performance than using
individual operations.
You must write a Java method that returns one or more result sets to the calling environment, and wrap this
method in a SQL stored procedure.
The following code fragment illustrates how multiple result sets can be returned to the caller of this Java
procedure. It uses three executeQuery statements to obtain three different result sets.
Notes
● This server-side JDBC example is part of the JDBCExample.java file included in the %SQLANYSAMP17%\SQLAnywhere\JDBC directory.
● It obtains a connection to the default running database by using getConnection.
● The executeQuery methods return result sets.
Call a sample JDBC application from the database server to return several result sets.
Prerequisites
To create an external procedure, you must have the CREATE PROCEDURE and CREATE EXTERNAL
REFERENCE system privileges. You must also have SELECT, DELETE, and INSERT privileges on the database
object you are modifying.
Procedure
For example:
CALL JDBCResults();
6. Check each of the three results tabs, Result Set 1, Result Set 2, and Result Set 3.
Results
Three different result sets are returned from a server-side JDBC application.
Related Information
With the correct set of privileges, users can execute methods in Java classes.
Access privileges
Like all Java classes in the database, classes containing JDBC statements can be accessed by any user if
the GRANT EXECUTE statement has granted them privilege to execute the stored procedure that is acting
as a wrapper for the Java method.
Execution privileges
Java classes execute with the privileges of the stored procedure that is acting as a wrapper for the Java
method (by default, this is SQL SECURITY DEFINER).
The SQL Anywhere JDBC driver supports two asynchronous callbacks, one for handling the SQL MESSAGE
statement and the other for validating requests for file transfers. These are similar to the callbacks supported
by the ODBC driver.
Messages can be sent to the client application from the database server using the SQL MESSAGE statement.
Messages can also be generated by long-running database server statements.
switch( sqe.getErrorCode() ) {
case MSG_INFO: msg_type = "INFO "; break;
case MSG_WARNING: msg_type = "WARNING"; break;
case MSG_ACTION: msg_type = "ACTION "; break;
case MSG_STATUS: msg_type = "STATUS "; break;
}
A client file transfer request can be validated. Before allowing any transfer to take place, the JDBC driver will
invoke the validation callback, if it exists. If the client data transfer is being requested during the execution of
indirect statements such as from within a stored procedure, the JDBC driver will not allow a transfer unless the
client application has registered a validation callback. The conditions under which a validation call is made are
described more fully below. The following is an example of a file transfer validation callback routine.
The filename argument is the name of the file to be read or written. The is_write parameter is 0 if a read is
requested (transfer from the client to the server), and non-zero for a write. The callback function should return
0 if the file transfer is not allowed, non-zero otherwise.
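The decision logic such a callback applies can be sketched as a plain Java method with the same contract. The directory restriction below is a made-up policy for illustration, and this class is not the driver's actual callback interface:

```java
public class FileTransferPolicy
{
    // Same contract as the callback described above: return 0 to refuse
    // the transfer, non-zero to allow it. is_write is 0 for a read of
    // the client file, non-zero for a write to it.
    static int allowTransfer( String filename, int is_write )
    {
        // Example policy (an assumption for illustration): permit only
        // reads of files under a dedicated transfer directory.
        if( !filename.startsWith( "/tmp/transfer/" ) )
        {
            return 0;   // refuse anything outside the permitted directory
        }
        if( is_write != 0 )
        {
            return 0;   // refuse writes to the client file system
        }
        return 1;       // allow reads of permitted files
    }
}
```

A restrictive default (deny unless explicitly allowed) matches the driver's own behavior for indirect statements.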
For data security, the server tracks the origin of statements requesting a file transfer. The server determines if
the statement was received directly from the client application. When initiating the transfer of data from the
client, the server sends the information about the origin of the statement to the client software. On its part, the
JDBC driver allows unconditional transfer of data only if the data transfer is being requested due to the
execution of a statement sent directly by the client application. Otherwise, the application must have registered
the validation callback described above, in the absence of which the transfer is denied and the statement fails.
The following sample Java application demonstrates the use of the callbacks supported by the SQL Anywhere
JDBC driver. Place the file %SQLANY17%\java\sajdbc4.jar in your classpath.
import java.io.*;
import java.sql.*;
import java.util.*;
public class callback
{
public static void main (String args[]) throws IOException
{
Connection con = null;
Statement stmt;
System.out.println ( "Starting... " );
con = connect();
if( con == null )
{
return; // exception should already have been reported
}
System.out.println ( "Connected... " );
try
{
// create and register message handler callback
T_message_handler message_worker = new T_message_handler();
((sap.jdbc4.sqlanywhere.IConnection)con).setASAMessageHandler( message_worker );
((sap.jdbc4.sqlanywhere.IConnection)con).setSAValidateFileTransferCallback( filetran_worker );
stmt = con.createStatement();
System.out.println( "\n==================\n" );
You can use JDBC escape syntax from any JDBC application, including Interactive SQL. This escape syntax
allows you to call stored procedures regardless of the database management system you are using.
{ keyword parameters }
{d date-string}
The date string is any date value accepted by the database server.
{t time-string}
The time string is any time value accepted by the database server.
{ts date-string time-string}
The date/time string is any timestamp value accepted by the database server.
{guid uuid-string}
{oj outer-join-expr}
The outer-join-expr is a valid OUTER JOIN expression accepted by the database server.
{? = call func(p1, ...)}
The function is any valid function call accepted by the database server.
{call proc(p1, ...)}
The procedure is any valid stored procedure call accepted by the database server.
{fn func(p1, ...)}
You can use the escape syntax to access a library of functions implemented by the JDBC driver that includes
number, string, time, date, and system functions.
For example, to obtain the current date in a database management system-neutral way, you would execute the
following:
SELECT { FN CURDATE() }
The functions that are available depend on the JDBC driver that you are using. Functions supported by the SQL Anywhere JDBC driver and the jConnect driver include PI, POSITION, NOW, SIGN, SOUNDEX, SIN, SPACE, SQRT, SUBSTRING, TAN, UCASE, TRUNCATE, LOG10, TIMESTAMPADD, TIMESTAMPDIFF, POWER, YEAR, RADIANS, RAND, and ROUND.
The JDBC driver maps the TIMESTAMPADD and TIMESTAMPDIFF functions to the corresponding DATEADD
and DATEDIFF functions. The syntax for the TIMESTAMPADD and TIMESTAMPDIFF functions is as follows.
Returns the timestamp calculated by adding integer-expr intervals of type interval to timestamp-expr.
Valid values of interval are shown below.
Returns the integer number of intervals of type interval by which timestamp-expr2 is greater than
timestamp-expr1. Valid values of interval are shown below.
SQL_TSI_YEAR YEAR
SQL_TSI_QUARTER QUARTER
SQL_TSI_MONTH MONTH
SQL_TSI_WEEK WEEK
SQL_TSI_DAY DAY
SQL_TSI_HOUR HOUR
SQL_TSI_MINUTE MINUTE
SQL_TSI_SECOND SECOND
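The week-interval arithmetic can be cross-checked in plain Java with java.time. Note that the server's DATEDIFF counts interval boundaries crossed, whereas ChronoUnit counts complete intervals, though for this example both give 4; this sketch is a cross-check, not the SQL escape syntax itself:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class WeekIntervals
{
    public static void main( String[] args )
    {
        LocalDate start = LocalDate.of( 2013, 2, 1 );
        LocalDate end   = LocalDate.of( 2013, 3, 1 );

        // Analogue of TIMESTAMPDIFF( SQL_TSI_WEEK, start, end ):
        // the number of complete week intervals between the two dates.
        long weeks = ChronoUnit.WEEKS.between( start, end );

        // Analogue of TIMESTAMPADD( SQL_TSI_WEEK, weeks, start ):
        LocalDate added = start.plusWeeks( weeks );

        System.out.println( weeks );   // 4
        System.out.println( added );   // 2013-03-01
    }
}
```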
Interactive SQL
In Interactive SQL, the braces must be doubled. There must not be a space between successive braces: "{{" is
acceptable, but "{ {" is not. As well, you cannot use newline characters in the statement. The escape syntax
cannot be used in stored procedures because they are not parsed by Interactive SQL.
For example, to obtain the number of weeks in February 2013, execute the following in Interactive SQL:
Some optional methods of the java.sql.Blob interface are not supported by the SQL Anywhere JDBC driver.
The Node.js API can be used to connect to SQL Anywhere databases, issue SQL queries, and obtain result sets.
The Node.js driver allows users to connect and perform queries on the database using JavaScript on Joyent's
Node.js software platform. Drivers are available for various versions of Node.js.
The API is very similar to that of the SAP HANA Node.js Client, and allows users to connect, disconnect, execute, and prepare statements.
The driver is available for install through the NPM (Node Packaged Modules) web site: https://npmjs.org/ .
The SQL Anywhere Node.js API reference is available in the SQL Anywhere Node.js API Reference at https://help.sap.com/viewer/09fbca22f0344633b8951c3e9d624d28/LATEST/en-US.
Embedded SQL is portable to other databases and other environments, and is functionally equivalent in all
operating environments. It is a comprehensive, low-level interface that provides all the functionality available in
the product. Embedded SQL requires knowledge of C or C++ programming languages.
You can develop C or C++ applications that access the database server using the Embedded SQL interface. The
command line database tools are examples of applications developed in this manner.
Embedded SQL is a database programming interface for the C and C++ programming languages. It consists of
SQL statements intermixed with (embedded in) C or C++ source code. These SQL statements are translated
by an Embedded SQL preprocessor into C or C++ source code, which you then compile.
At runtime, Embedded SQL applications use an interface library called DBLIB to communicate with a
database server. DBLIB is a dynamic link library (DLL) or shared object on most platforms.
Two flavors of Embedded SQL are provided. Static Embedded SQL is simpler to use, but is less flexible than
dynamic Embedded SQL.
In this section:
How to Send and Retrieve Long Values Using Embedded SQL
The method for sending and retrieving LONG VARCHAR, LONG NVARCHAR, and LONG BINARY values
in Embedded SQL applications is different from that for other data types.
Once the program has been successfully preprocessed and compiled, it can be linked with an import library for
DBLIB to form an executable file. When the database server is running, this executable file uses DBLIB to
interact with the database server.
The database server does not have to be running when the program is preprocessed.
Related Information
The preprocessor translates the Embedded SQL statements in a C or C++ source file into C code and places
the result in an output file. A C or C++ compiler is then used to process the output file. The normal extension
for source programs with Embedded SQL is .sqc. The default output file name is the sql-filename with an
extension of .c. If the sql-filename has a .c extension, then the default output file name extension will
change to .cc.
Note
When an application is rebuilt to use a new major version of the database interface library, the Embedded
SQL files must be preprocessed with the same version's Embedded SQL preprocessor.
Option Description
-e level Flag as an error any static Embedded SQL that is not part of
a specified standard. The level value indicates the standard to use. For example, sqlpp -e c03 ... flags any syntax that is not part of the core ANSI/ISO SQL Standard.
The supported level values are:
c08
HISTORICAL
WINDOWS
Microsoft Windows.
UNIX
-s len Set the maximum size string that the preprocessor puts into
the C file. Strings longer than this value are initialized using a
list of characters ('a', 'b', 'c', and so on). Most C compilers
have a limit on the size of string literal they can handle. This
option is used to set that upper limit. The default value is
500.
-w level Flag as a warning any static Embedded SQL that is not part
of a specified standard. The level value indicates the
standard to use. For example, sqlpp -w c08 ... flags
any SQL syntax that is not part of the core SQL/2008 syntax. The supported level values are:
c08
Related Information
Both Windows and UNIX/Linux compilers have been used with the Embedded SQL preprocessor.
All header files are installed in the SDK\Include subdirectory of your software installation directory.
pshpk1.h, pshpk4.h, poppk.h These headers ensure that structure packing is handled correctly.
On Windows platforms, all import libraries are installed in the SDK\Lib subdirectory of your software
installation directory. Windows import libraries are stored in the SDK\Lib\x86 and SDK\Lib\x64
subdirectories. An export definition list is stored in SDK\Lib\Def\dblib.def.
On UNIX and Linux platforms, all import libraries are installed in the lib32 and lib64 subdirectories, under
the software installation directory.
On macOS platforms, all import libraries are installed in the System/lib32 and System/lib64
subdirectories, under the software installation directory.
The libdbtasks17 libraries are called by the libdblib17 libraries. Some compilers locate libdbtasks17
automatically. For others, you must specify it explicitly.
#include <stdio.h>
EXEC SQL INCLUDE SQLCA;
main()
{
    db_init( &sqlca );
    EXEC SQL WHENEVER SQLERROR GOTO error;
    EXEC SQL CONNECT "DBA" IDENTIFIED BY "sql";
    EXEC SQL UPDATE Employees
        SET Surname = 'Plankton'
        WHERE EmployeeID = 195;
    EXEC SQL COMMIT WORK;
    EXEC SQL DISCONNECT;
    db_fini( &sqlca );
    return( 0 );

error:
    printf( "update unsuccessful -- sqlcode = %ld\n", sqlca.sqlcode );
    db_fini( &sqlca );
    return( -1 );
}
This example connects to the database, updates the last name of employee number 195, commits the change,
and exits. There is virtually no interaction between the Embedded SQL code and the C code. The only thing the
C code is used for in this example is control flow. The WHENEVER statement is used for error checking. The
error action (GOTO in this example) is executed after any SQL statement that causes an error.
Related Information
SQL statements are placed (embedded) within regular C or C++ code. All Embedded SQL statements start
with the words EXEC SQL and end with a semicolon (;).
Normal C language comments are allowed in the middle of Embedded SQL statements.
Every C program using Embedded SQL must contain the following statement before any other Embedded SQL
statements in the source file.
db_init( &sqlca );
Some Embedded SQL statements do not generate any C code, or do not involve communication with the
database. These statements are allowed before the CONNECT statement. Most notable are the INCLUDE
statement and the WHENEVER statement for specifying error processing.
Every C program using Embedded SQL must finalize any SQLCA that has been initialized.
db_fini( &sqlca );
Load DBLIB dynamically from your Embedded SQL application using the esqldll.c module in the SDK\C
subdirectory of your software installation directory so that you do not need to link against the import library.
Context
This task is an alternative to the usual technique of linking an application against a static import library for a
Dynamic Link Library (DLL) that contains the required function definitions.
A similar task can be used to dynamically load DBLIB on UNIX and Linux platforms.
Procedure
1. Your application must call db_init_dll to load the DBLIB DLL, and must call db_fini_dll to free the DBLIB
DLL. The db_init_dll call must be before any function in the database interface, and no function in the
interface can be called after db_fini_dll.
You must still call the db_init and db_fini library functions.
2. You must include the esqldll.h header file before the EXEC SQL INCLUDE SQLCA statement or include
sqlca.h in your Embedded SQL program. The esqldll.h header file includes sqlca.h.
3. A SQL OS macro must be defined. The header file sqlos.h, which is included by sqlca.h, attempts to
determine the appropriate macro and define it. However, certain combinations of platforms and compilers
may cause this to fail. In this case, you must add a #define to the top of this file, or make the definition
using a compiler option. Define _SQL_OS_WINDOWS for all Windows operating systems.
4. Compile esqldll.c.
5. Instead of linking against the import library, link the object module esqldll.obj with your Embedded
SQL application objects.
The DBLIB interface DLL loads dynamically when you run your Embedded SQL application.
Example
You can find a sample program illustrating how to load the interface library dynamically in the
%SQLANYSAMP17%\SQLAnywhere\ESQLDynamicLoad directory. The source code is in sample.sqc.
The following example compiles and links sample.sqc with the code from esqldll.c on Windows.
sqlpp sample.sqc
cl sample.c %SQLANY17%\sdk\c\esqldll.c /I%SQLANY17%\sdk\include Advapi32.lib
● The static cursor Embedded SQL example, cur.sqc, demonstrates the use of static SQL statements.
● The dynamic cursor Embedded SQL example, dcur.sqc, demonstrates the use of dynamic SQL
statements.
To reduce the amount of code that is duplicated by the sample programs, the mainlines and the data printing
functions have been placed into a separate file. This is mainch.c for character mode systems and mainwin.c
for windowing environments.
The sample programs each supply the following three routines, which are called from the mainlines:
WSQLEX_Init
Connecting to the database is done with the Embedded SQL CONNECT statement supplying the appropriate
user ID and password.
In addition to these samples, you may find other programs and source files included with the software that
demonstrate features available for particular platforms.
The particular cursor used here retrieves certain information from the Employees table in the sample database.
The cursor is declared statically, meaning that the actual SQL statement to retrieve the information is hard
coded into the source program. This is a good starting point for learning how cursors work. The Dynamic
Cursor sample takes this first example and converts it to use dynamic SQL statements.
The open_cursor routine both declares a cursor for the specific SQL query and also opens the cursor.
Printing a page of information is done by the print routine. It loops pagesize times, fetching a single row from
the cursor and printing it out. The fetch routine checks for warning conditions, such as rows that cannot be
found (SQLCODE 100), and prints appropriate messages when they arise. In addition, the cursor is
repositioned by this program to the row before the one that appears at the top of the current page of data.
The move, top, and bottom routines use the appropriate form of the FETCH statement to position the cursor.
This form of the FETCH statement doesn't actually get the data. It only positions the cursor. Also, a general
relative positioning routine, move, has been implemented to move in either direction depending on the sign of
the parameter.
When the user quits, the cursor is closed and the database connection is also released. The cursor is closed by
a ROLLBACK WORK statement, and the connection is released by a DISCONNECT.
Related Information
Prerequisites
For x86/x64 platform builds with Microsoft Visual Studio, you must set up the correct environment for
compiling and linking. This is typically done using the Microsoft Visual Studio vcvars32.bat or
vcvars64.bat (called vcvarsamd64.bat in older versions of Microsoft Visual Studio).
Context
The executable files and corresponding source code are located in the %SQLANYSAMP17%\SQLAnywhere\C
directory.
Procedure
If you are getting build errors, try specifying the target platform (x86 or x64) as an argument to
build.bat. Here is an example.
build x64
For UNIX and Linux, use the shell script build.sh to build the example.
3. For the 32-bit Windows example, run the file curwin.exe.
Results
The various commands manipulate a database cursor and print the query results on the screen. Enter the
letter of the command that you want to perform. Some systems may require you to press Enter after the letter.
This sample demonstrates the use of cursors for a dynamic SQL SELECT statement.
The dynamic cursor sample program (dcur) allows the user to select a table to look at with the N command.
The program then presents as much information from that table as fits on the screen.
When this program is run, it prompts for a connection string. The following is an example.
UID=DBA;PWD=sql;DBF=demo.db
The C program with the Embedded SQL is located in the %SQLANYSAMP17%\SQLAnywhere\C directory.
The dcur program uses the Embedded SQL interface function db_string_connect to connect to the database.
This function provides the extra functionality to support the connection string that is used to connect to the
database.
where table-name is a parameter passed to the routine. It then prepares a dynamic SQL statement using this
string.
The Embedded SQL DESCRIBE statement is used to fill in the SQLDA structure with the results of the SELECT
statement.
Note
An initial guess is taken for the size of the SQLDA (3). If this is not big enough, the actual size of the SELECT
list returned by the database server is used to allocate a SQLDA of the correct size.
The SQLDA structure is then filled with buffers to hold strings that represent the results of the query. The
fill_s_sqlda routine converts all data types in the SQLDA to DT_STRING and allocates buffers of the
appropriate size.
A cursor is then declared and opened for this statement. The rest of the routines for moving and closing the
cursor remain the same.
The fetch routine is slightly different: it puts the results into the SQLDA structure instead of into a list of host
variables. The print routine has changed significantly to print results from the SQLDA structure up to the width
of the screen. The print routine also uses the name fields of the SQLDA to print headings for each column.
Related Information
Prerequisites
For x86/x64 platform builds with Microsoft Visual Studio, you must set up the correct environment for
compiling and linking. This is typically done using the Microsoft Visual Studio vcvars32.bat or
vcvars64.bat (called vcvarsamd64.bat in older versions of Microsoft Visual Studio).
Context
The executable files and corresponding source code are located in the %SQLANYSAMP17%\SQLAnywhere\C
directory.
Procedure
If you are getting build errors, try specifying the target platform (x86 or x64) as an argument to
build.bat. Here is an example.
build x64
For UNIX and Linux, use the shell script build.sh to build the example.
3. For the 32-bit Windows example, run the file dcurwin.exe.
5. Each sample program prompts you for a table. Choose one of the tables in the sample database. For example, you can enter Customers or Employees.
Results
The various commands manipulate a database cursor and print the query results on the screen. Enter the letter of the command that you want to perform. Some systems may require you to press Enter after the letter.
You can call a function in an external library from a stored procedure or function. You can call functions in a DLL under Windows operating systems and in a shared object on UNIX and Linux.
To transfer information between a program and the database server, every piece of data must have a data type.
The Embedded SQL data type constants are prefixed with DT_, and can be found in the sqldef.h header file.
You can create a host variable of any one of the supported types. You can also use these types in a SQLDA
structure for passing data to and from the database.
You can define variables of these data types using the DECL_ macros listed in sqlca.h. For example, a variable
holding a BIGINT value could be declared with DECL_BIGINT.
The following data types are supported by the Embedded SQL programming interface:
DT_BIT
DT_STRING
Null-terminated character string, in the CHAR character set. The string is blank-padded if the database is
initialized with blank-padded strings.
DT_NSTRING
Null-terminated character string, in the NCHAR character set. The string is blank-padded if the database is
initialized with blank-padded strings.
DT_DATE
Null-terminated character string that is a valid date.
DT_FIXCHAR
Fixed-length blank-padded character string, in the CHAR character set. The maximum length, specified in bytes, is 32767. The data is not null-terminated.
DT_NFIXCHAR
Fixed-length blank-padded character string, in the NCHAR character set. The maximum length, specified
in bytes, is 32767. The data is not null-terminated.
DT_VARCHAR
Varying length character string, in the CHAR character set, with a two-byte length field. When sending
data, you must set the length field. The maximum length that can be sent is 32767 bytes. When fetching
data, the database server sets the length field. The maximum length that can be fetched is 32767 bytes.
The data is not null-terminated or blank-padded. The sqldata field points to this data area that is exactly
sqllen + 2 bytes long.
DT_NVARCHAR
Varying length character string, in the NCHAR character set, with a two-byte length field. When sending
data, you must set the length field. The maximum length that can be sent is 32767 bytes. When fetching
data, the database server sets the length field. The maximum length that can be fetched is 32767 bytes.
The data is not null-terminated or blank-padded. The sqldata field points to this data area that is exactly
sqllen + 2 bytes long.
DT_LONGVARCHAR
Long varying length character string, in the CHAR character set. The LONGVARCHAR structure can be
used with more than 32767 bytes of data. Large data can be fetched
all at once, or in pieces using the GET DATA statement. Large data can be supplied to the server all at once,
or in pieces by appending to a database variable using the SET statement. The data is not null-terminated
or blank-padded.
DT_LONGNVARCHAR
Long varying length character string, in the NCHAR character set.
The LONGNVARCHAR structure can be used with more than 32767 bytes of data. Large data can be
fetched all at once, or in pieces using the GET DATA statement. Large data can be supplied to the server all
at once, or in pieces by appending to a database variable using the SET statement. The data is not null-
terminated or blank-padded.
DT_BINARY
Varying length binary data with a two-byte length field. When sending data, you must set the length field.
The maximum length that can be sent is 32767 bytes. When fetching data, the database server sets the
length field. The maximum length that can be fetched is 32767 bytes. The data is not null-terminated or
blank-padded. The sqldata field points to this data area that is exactly sqllen + 2 bytes long.
DT_LONGBINARY
Long varying length binary data. Like the LONGVARCHAR structure, the LONGBINARY structure can be
used with more than 32767 bytes of data. The data is not null-terminated or blank-padded.
DT_TIMESTAMP_STRUCT
The SQLDATETIME structure can be used to retrieve fields of DATE, TIME, and TIMESTAMP type (or
anything that can be converted to one of these). Often, applications have their own formats and date
manipulation code. Fetching data in this structure makes it easier for you to manipulate this data. DATE,
TIME, and TIMESTAMP fields can also be fetched and updated with any character type.
If you use a SQLDATETIME structure to enter a date, time, or timestamp into the database, the day_of_year
and day_of_week members are ignored.
DT_VARIABLE
Null-terminated character string. The character string must be the name of a SQL variable whose value is
used by the database server. This data type is used only for supplying data to the database server. It
cannot be used when fetching data from the database server.
The structures are defined in the sqlca.h file. The VARCHAR, NVARCHAR, BINARY, DECIMAL, and LONG data
types are not useful for declaring host variables because they contain a one-character array. However, they are
useful for allocating variables dynamically or typecasting other variables.
There are no corresponding Embedded SQL interface data types for the various DATE and TIME database
types. These database types are all fetched and updated using either the SQLDATETIME structure or character
strings.
Related Information
How to Send and Retrieve Long Values Using Embedded SQL [page 320]
Host variables are C variables that are identified to the Embedded SQL preprocessor. Host variables can be
used to send values to the database server or receive values from the database server.
Host variables are quite easy to use, but they have some restrictions. Dynamic SQL is a more general way of
passing information to and from the database server using a structure known as the SQL Descriptor Area
(SQLDA). The Embedded SQL preprocessor automatically generates a SQLDA for each statement in which
host variables are used.
Host variables cannot be used in batches. Host variables cannot be used within a subquery in a SET statement.
In this section:
Related Information
Host variables are defined by putting them into a declaration section. According to the ANSI Embedded SQL
standard, host variables are defined by surrounding the normal C variable declarations with the following:
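In outline, a declaration section looks like the following sketch (the variable names are illustrative, and the EXEC SQL directives are processed by the Embedded SQL preprocessor, not the C compiler):

```c
EXEC SQL BEGIN DECLARE SECTION;
long employee_number;
char employee_name[ 50 ];
EXEC SQL END DECLARE SECTION;
```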
These host variables can then be used in place of value constants in any SQL statement. When the database
server executes the statement, the value of the host variable is used. Host variables cannot be used in place of
table or column names: dynamic SQL is required for this. The variable name is prefixed with a colon (:) in a SQL
statement to distinguish it from other identifiers allowed in the statement.
Example
The following sample code illustrates the use of host variables on an INSERT statement. The variables are filled
in by the program and then inserted into the database:
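A sketch of such an INSERT follows; the table and column names assume the sample database's Employees table, and the variable names and values are illustrative:

```c
EXEC SQL BEGIN DECLARE SECTION;
long employee_number;
char employee_name[ 50 ];
char employee_phone[ 15 ];
EXEC SQL END DECLARE SECTION;

void add_employee( void )
{
    /* In a real application, these values would come from user input. */
    employee_number = 1057;
    strcpy( employee_name, "Bedford" );
    strcpy( employee_phone, "5551234" );
    EXEC SQL INSERT INTO Employees ( EmployeeID, Surname, Phone )
        VALUES ( :employee_number, :employee_name, :employee_phone );
}
```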
Related Information
Only a limited number of C data types are supported as host variables. Also, certain host variable types do not
have a corresponding C type.
Macros defined in the sqlca.h header file can be used to declare host variables of the following types:
NCHAR, VARCHAR, NVARCHAR, LONGVARCHAR, LONGNVARCHAR, BINARY, LONGBINARY, DECIMAL,
DT_FIXCHAR, DT_NFIXCHAR, DATETIME (SQLDATETIME), BIT, BIGINT, or UNSIGNED BIGINT. They are used
as follows:
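For example, a declaration section using these macros might look like the following sketch (DECL_BIGINT, DECL_VARCHAR, and DECL_DATETIME are among the macros defined in sqlca.h; the variable names are illustrative):

```c
EXEC SQL BEGIN DECLARE SECTION;
DECL_BIGINT         row_id;       /* 64-bit integer host variable        */
DECL_VARCHAR( 100 ) description;  /* VARCHAR with a two-byte length field */
DECL_DATETIME       updated_at;   /* SQLDATETIME structure               */
EXEC SQL END DECLARE SECTION;
```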
The preprocessor recognizes these macros within an Embedded SQL declaration section and treats the
variable as the appropriate type. Do not use the DECIMAL (DT_DECIMAL, DECL_DECIMAL) type since the
format of decimal numbers is proprietary.
The following table lists the C variable types that are allowed for host variables and their corresponding
Embedded SQL interface data types.
Character Sets
For DT_FIXCHAR, DT_STRING, DT_VARCHAR, and DT_LONGVARCHAR, character data is in the application's
CHAR character set, which is usually the character set of the application's locale. An application can change
the CHAR character set either by using the CHARSET connection parameter, or by calling the
db_change_char_charset function.
Regardless of the CHAR and NCHAR character sets in use, all data lengths are specified in bytes.
If character set conversion occurs between the server and the application, it is the application's responsibility
to ensure that buffers are sufficiently large to handle the converted data, and to issue additional GET DATA
statements if data is truncated.
Pointers to Char
The database interface considers a host variable declared as a pointer to char (char *) to be 32767 bytes
long. Any host variable of type pointer to char used to retrieve information from the database must point to a
buffer large enough to hold any value that could possibly come back from the database.
This is potentially quite dangerous because someone could change the definition of the column in the database
to be larger than it was when the program was written. This could cause random memory corruption problems.
It is better to use a declared array, even as a parameter to a function, where it is passed as a pointer to char.
This technique allows the Embedded SQL statements to know the size of the array.
A standard host-variable declaration section can appear anywhere that C variables can normally be declared.
This includes the parameter declaration section of a C function. The C variables have their normal scope
(available within the block in which they are defined). However, since the Embedded SQL preprocessor does
not scan C code, it does not respect C blocks.
As far as the Embedded SQL preprocessor is concerned, host variables are global to the source file; two host
variables cannot have the same name.
Related Information
● Host variables can be used in SELECT, INSERT, UPDATE, and DELETE statements in any place where a
number or string constant is allowed.
● Host variables cannot be used in place of a table name or a column name in any statement.
● Host variables cannot be used in batches.
● Host variables cannot be used within a subquery in a SET statement.
The ISO/ANSI standard allows an Embedded SQL source file to declare the following special host variables
within an Embedded SQL declaration section:
long SQLCODE;
char SQLSTATE[6];
If used, these variables are set after any Embedded SQL statement that makes a database request (EXEC SQL
statements other than DECLARE SECTION, INCLUDE, WHENEVER SQLCODE, and so on). As a consequence,
the SQLCODE and SQLSTATE host variables must be visible in the scope of every Embedded SQL statement
that generates database requests.
The following is not valid Embedded SQL because SQLSTATE is not defined in the scope of the function sub2:
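A sketch of the problem (the function names and statements are illustrative):

```c
static void sub1( void )
{
    EXEC SQL BEGIN DECLARE SECTION;
    char SQLSTATE[6];
    EXEC SQL END DECLARE SECTION;
    EXEC SQL COMMIT;     /* valid: SQLSTATE is in scope here */
}

static void sub2( void )
{
    EXEC SQL ROLLBACK;   /* not valid: no SQLSTATE declaration is in scope */
}
```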
The Embedded SQL preprocessor -k option permits the declaration of the SQLCODE variable outside the scope
of an Embedded SQL declaration section.
Related Information
Indicator variables are C variables that hold supplementary information when you are fetching or putting data.
There are several distinct uses for indicator variables:
NULL values
To pass a NULL value to the database, or to receive a NULL value from the database.
Truncated values
To enable applications to handle cases when fetched values must be truncated to fit into host variables.
Conversion errors
To indicate which column produced a data type conversion failure when the conversion_error option is
set to Off.
An indicator variable is a host variable of type a_sql_len that is placed immediately following a regular host
variable in a SQL statement. For example, in the following INSERT statement, :ind_phone is an indicator
variable:
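A sketch of such a statement follows, using names consistent with the discussion of NULL values below (the table and column names assume the sample database's Employees table):

```c
EXEC SQL BEGIN DECLARE SECTION;
long      employee_number;
char      employee_phone[ 15 ];
a_sql_len ind_phone;
EXEC SQL END DECLARE SECTION;

void add_employee( int have_phone )
{
    employee_number = 1057;
    if( have_phone ) {
        strcpy( employee_phone, "5551234" );
        ind_phone = 0;    /* insert the value in employee_phone */
    } else {
        ind_phone = -1;   /* insert a NULL */
    }
    EXEC SQL INSERT INTO Employees ( EmployeeID, Phone )
        VALUES ( :employee_number, :employee_phone:ind_phone );
}
```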
On a fetch or execute in which no rows are received from the database server (such as when an error
occurs or the end of the result set is reached), indicator values are unchanged.
Note
To allow for the future use of 32 and 64-bit lengths and indicators, the use of short int for Embedded SQL
indicator variables is deprecated. Use a_sql_len instead.
In this section:
Do not confuse the SQL concept of NULL with the C language constant of the same name.
In the SQL language, NULL represents either an unknown attribute or inapplicable information. The C language
NULL is referred to as the null pointer constant and represents a pointer value that does not point to a memory
location.
When NULL is used in this documentation, it refers to the SQL database meaning given above.
NULL is not the same as any value of the column's defined type. So, something extra is required beyond regular
host variables to pass NULL values to the database or receive NULL results back. Indicator variables are used
for this purpose.
If the indicator variable has a value of -1, a NULL is written. If it has a value of 0, the actual value of
employee_phone is written.
Indicator variables are also used when receiving data from the database. They are used to indicate that a NULL
value was fetched (indicator is negative). If a NULL value is fetched from the database and an indicator variable
is not supplied, an error is generated (SQLE_NO_INDICATOR).
Indicator variables indicate whether any fetched values were truncated to fit into a host variable. This enables
applications to handle truncation appropriately.
If a value is truncated on fetching, the indicator variable is set to a positive value, containing the actual length of
the database value before truncation. If the actual length of the database value is greater than 32767 bytes,
then the indicator variable contains 32767.
By default, the conversion_error database option is set to On, and any data type conversion failure leads to an
error, with no row returned.
You can use indicator variables to tell which column produced a data type conversion failure. If you set the
database option conversion_error to Off, any data type conversion failure gives a CANNOT_CONVERT warning,
rather than an error. If the column that suffered the conversion error has an indicator variable, that variable is
set to a value of -2.
If you set the conversion_error option to Off when inserting data into the database, a value of NULL is inserted
when a conversion failure occurs.
Indicator values are used to convey information about retrieved column values.
The SQL Communication Area (SQLCA) is an area of memory that is used on every database request for
communicating statistics and errors from the application to the database server and back to the application.
The SQLCA is used as a handle for the application-to-database communication link. It is passed in to all
database library functions that must communicate with the database server. It is implicitly passed on all
Embedded SQL statements.
A global SQLCA variable is defined in the interface library. The Embedded SQL preprocessor generates an
external reference for the global SQLCA variable and an external reference for a pointer to it. The external
reference is named sqlca and is of type SQLCA. The pointer is named sqlcaptr. The actual global variable is
declared in the import library.
The SQLCA is defined by the sqlca.h header file, included in the SDK\Include subdirectory of your software
installation directory.
You reference the SQLCA to test for a particular error code. The sqlcode and sqlstate fields contain error codes
when a database request has an error. Some C macros are defined for referencing the sqlcode field, the
sqlstate field, and some other fields.
In this section:
sqlcaid
An 8-byte character field that contains the string SQLCA as an identification of the SQLCA structure. This
field helps in debugging when you are looking at memory contents.
sqlcabc
A 32-bit integer that contains the length of the SQLCA structure (136 bytes).
sqlcode
A 32-bit integer that specifies the error code when the database detects an error on a request. Definitions
for the error codes can be found in the header file sqlerr.h. The error code is 0 (zero) for a successful
operation, positive for a warning, and negative for an error.
sqlerrml
The length of the information in the sqlerrmc field.
sqlerrmc
Zero or more character strings to be inserted into an error message. Some error messages contain one or
more placeholder strings (%1, %2, ...) that are replaced with the strings in this field.
For example, if a Table Not Found error is generated, sqlerrmc contains the table name, which is
inserted into the error message at the appropriate place.
sqlerrp
Reserved.
sqlerrd
Reserved.
sqlstate
The SQLSTATE status value. The ANSI SQL standard defines this type of return value from a SQL
statement in addition to the SQLCODE value. The SQLSTATE value is always a five-character null-
terminated string, divided into a two-character class (the first two characters) and a three-character
subclass. Each character can be a digit from 0 through 9 or an uppercase alphabetic character A through
Z.
Any class or subclass that begins with 0 through 4 or A through H is defined by the SQL standard; other
classes and subclasses are implementation defined. The SQLSTATE value '00000' means that there has
been no error or warning.
sqlerror Array
sqlerrd[1] (SQLIOCOUNT)
The actual number of input/output operations that were required to complete a statement.
The database server does not set this number to zero for each statement. Your program can set this
variable to zero before executing a sequence of statements. After the last statement, this number is the
total number of input/output operations for the entire statement sequence.
sqlerrd[2] (SQLCOUNT)
On a cursor OPEN or RESUME, this field is filled in with either the actual number of rows in the cursor
(a value greater than or equal to 0) or an estimate thereof (a negative number whose absolute value is
the estimate). It is the actual number of rows if the database server can compute it without counting
the rows. The database can also be configured to always return the actual number of rows using the
row_counts option.
FETCH cursor statement
The SQLCOUNT field is filled if a SQLE_NOTFOUND warning is returned. It contains the number of
rows by which a FETCH RELATIVE or FETCH ABSOLUTE statement goes outside the range of possible
cursor positions (a cursor can be on a row, before the first row, or after the last row). For a wide fetch,
SQLCOUNT is the number of rows actually fetched, and is less than or equal to the number of rows
requested. During a wide fetch, SQLE_NOTFOUND is only set if no rows are returned.
The value is 0 if the row was not found, but the position is valid, for example, executing FETCH
RELATIVE 1 when positioned on the last row of a cursor. The value is positive if the attempted fetch was
beyond the end of the cursor, and negative if the attempted fetch was before the beginning of the
cursor.
GET DATA statement
SQLCOUNT contains the actual length of the value.
DESCRIBE statement
If the WITH VARIABLE RESULT clause is used to describe procedures that may have more than one
result set, SQLCOUNT is set to one of the following values:
0
The result set may change: the procedure call should be described again following each OPEN
statement.
1
The result set is fixed. No re-describing is required.
Syntax error
For the SQLE_SYNTAX_ERROR syntax error, the field contains the approximate character position
within the statement where the error was detected.
sqlerrd[3] (SQLIOESTIMATE)
The estimated number of input/output operations that are required to complete the statement. This field
is given a value on an OPEN or EXPLAIN statement.
However, if you use a single connection, you are restricted to one active request per connection. In a
multithreaded application, do not use the same connection to the database on each thread unless you use a
semaphore to control access.
There are no restrictions on using separate connections on each thread that wants to use the database. The
SQLCA is used by the runtime library to distinguish between the different thread contexts. So, each thread
wanting to use the database concurrently must have its own SQLCA. The exception is that a thread can use the
db_cancel_request function to cancel a statement executing on a different thread using that thread's SQLCA.
#include <stdio.h>
#include <string.h>
#include <malloc.h>
#include <ctype.h>
#include <stdlib.h>
#include <process.h>
#include <windows.h>
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
#define TRUE 1
#define FALSE 0
// multithreading support
typedef struct a_thread_data {
SQLCA sqlca;
int num_iters;
int thread;
int done;
} a_thread_data;
// each thread's ESQL test
EXEC SQL SET SQLCA "&thread_data->sqlca";
static void PrintSQLError( a_thread_data * thread_data )
/******************************************************/
{
char buffer[200];
printf( "%d: SQL error %d -- %s ... aborting\n",
thread_data->thread,
SQLCODE,
sqlerror_message( &thread_data->sqlca,
buffer, sizeof( buffer ) ) );
exit( 1 );
}
EXEC SQL WHENEVER SQLERROR { PrintSQLError( thread_data ); };
static void do_one_iter( void * data )
{
a_thread_data * thread_data = (a_thread_data *)data;
int i;
EXEC SQL BEGIN DECLARE SECTION;
char user[ 20 ];
EXEC SQL END DECLARE SECTION;
Related Information
Each SQLCA used in your program must be initialized with a call to db_init and cleaned up at the end with a call
to db_fini.
The Embedded SQL statement SET SQLCA is used to tell the Embedded SQL preprocessor to use a different
SQLCA for database requests. Usually, a statement such as EXEC SQL SET SQLCA 'task_data->sqlca';
appears near the top of the program, or in a header file, so that all subsequent database requests use the
new SQLCA.
Each thread must have its own SQLCA. This requirement also applies to code in a shared library (in a DLL, for
example) that uses Embedded SQL and is called by more than one thread in your application.
You can use the multiple SQLCA support in any of the supported Embedded SQL environments, but it is only
required in reentrant code.
You do not need to use multiple SQLCAs to connect to more than one database or have more than one
connection to a single database.
Each SQLCA can have one unnamed connection. Each SQLCA has an active or current connection.
All operations on a given database connection must use the same SQLCA that was used when the connection
was established.
Note
Operations on different connections are subject to the normal record locking mechanisms and may cause
each other to block and possibly to deadlock.
There are two ways to embed SQL statements into a C program: statically or dynamically.
In this section:
All standard SQL data manipulation and data definition statements can be embedded in a C program by
prefixing them with EXEC SQL and suffixing the statement with a semicolon (;). These statements are referred
to as static statements.
Static statements can contain references to host variables. Host variables can only be used in place of string or
numeric constants. They cannot be used to substitute column names or table names; dynamic statements are
required to perform those operations.
Related Information
In the C language, strings are stored in arrays of characters. Dynamic SQL statements are constructed in C
language strings. These statements can then be executed using the PREPARE and EXECUTE statements.
These SQL statements cannot reference host variables in the same manner as static statements since the C
language variables are not accessible by name when the C program is executing.
To pass information between the statements and the C language variables, a data structure called the SQL
Descriptor Area (SQLDA) is used. This structure is set up for you by the Embedded SQL preprocessor if you
specify a list of host variables on the EXECUTE statement in the USING clause. These variables correspond by
position to placeholders in the appropriate positions of the prepared statement.
A placeholder is put in the statement to indicate where host variables are to be accessed. A placeholder is
either a question mark (?) or a host variable reference as in static statements (a host variable name preceded
by a colon). In the latter case, the host variable name used in the actual text of the statement serves only as a
placeholder indicating a reference to the SQL descriptor area.
A host variable used to pass information to the database is called a bind variable.
Example
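The following sketch prepares and executes a dynamic UPDATE statement (the statement text, table, and column names are illustrative; the table name is pasted into the string because a host variable cannot stand in for it):

```c
EXEC SQL BEGIN DECLARE SECTION;
char sqlstmt[ 200 ];
char address[ 30 ];
char city[ 20 ];
EXEC SQL END DECLARE SECTION;

void update_address( char *table_name )
{
    /* Build the statement text, then prepare and execute it. The
       :address and :city placeholders are supplied by the USING clause. */
    sprintf( sqlstmt,
             "UPDATE %s SET Street = :address WHERE City = :city",
             table_name );
    EXEC SQL PREPARE upd_stmt FROM :sqlstmt;
    EXEC SQL EXECUTE upd_stmt USING :address, :city;
    EXEC SQL DROP STATEMENT upd_stmt;
}
```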
This method requires you to know how many host variables there are in the statement. Usually, this is not the
case. So, you can set up your own SQLDA structure and specify this SQLDA in the USING clause on the
EXECUTE statement.
The DESCRIBE BIND VARIABLES statement returns the host variable names of the bind variables that are
found in a prepared statement. This makes it easier for a C program to manage the host variables. The general
method is as follows:
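In outline, the general method can be sketched as follows (alloc_sqlda, fill_sqlda, and free_filled_sqlda are interface-library functions; the statement and variable names are illustrative):

```c
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;

EXEC SQL BEGIN DECLARE SECTION;
char sqlstmt[ 200 ];
EXEC SQL END DECLARE SECTION;

void execute_dynamic( void )
{
    SQLDA *sqlda;

    EXEC SQL PREPARE dyn_stmt FROM :sqlstmt;
    /* Allocate a SQLDA with a guessed number of descriptors. */
    sqlda = alloc_sqlda( 10 );
    EXEC SQL DESCRIBE BIND VARIABLES FOR dyn_stmt USING DESCRIPTOR sqlda;
    /* sqlda->sqld now holds the number of bind variables, and each
       sqlvar holds the host variable name used in the statement. */
    fill_sqlda( sqlda );      /* allocate space for the values */
    /* ... assign the values through sqlda->sqlvar[i].sqldata ... */
    EXEC SQL EXECUTE dyn_stmt USING DESCRIPTOR sqlda;
    free_filled_sqlda( sqlda );
    EXEC SQL DROP STATEMENT dyn_stmt;
}
```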
SQLDA Contents
The SQLDA consists of an array of variable descriptors. Each descriptor describes the attributes of the
corresponding C program variable or the location that the database stores data into or retrieves data from:
● data type
● length if type is a string type
● memory address
● indicator variable
The indicator variable is used to pass a NULL value to the database or retrieve a NULL value from the database.
The database server also uses the indicator variable to indicate truncation conditions encountered during a
database operation. The indicator variable is set to a positive value when not enough space was provided to
receive a database value.
A SELECT statement that returns only a single row can be prepared dynamically, followed by an EXECUTE with
an INTO clause to retrieve the one-row result. SELECT statements that return multiple rows, however, are
managed using dynamic cursors.
With dynamic cursors, results are put into a host variable list or a SQLDA that is specified on the FETCH
statement (FETCH INTO and FETCH USING DESCRIPTOR). Since the number of SELECT list items is usually
unknown, the SQLDA route is the most common. The DESCRIBE SELECT LIST statement sets up a SQLDA with
the types of the SELECT list items. Space is then allocated for the values using the fill_sqlda or fill_s_sqlda
functions, and the information is retrieved by the FETCH USING DESCRIPTOR statement.
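Combining these steps, a dynamic cursor loop can be sketched as follows (the statement, cursor, and variable names are illustrative):

```c
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;

void dump_query( char *query_text )
{
    EXEC SQL BEGIN DECLARE SECTION;
    char *stmt_text = query_text;
    EXEC SQL END DECLARE SECTION;
    SQLDA *sqlda;

    EXEC SQL PREPARE sel_stmt FROM :stmt_text;
    sqlda = alloc_sqlda( 20 );
    EXEC SQL DESCRIBE SELECT LIST FOR sel_stmt USING DESCRIPTOR sqlda;
    fill_sqlda( sqlda );   /* allocate buffers matching the described types */
    EXEC SQL DECLARE cur CURSOR FOR sel_stmt;
    EXEC SQL OPEN cur;
    for( ;; ) {
        EXEC SQL FETCH cur USING DESCRIPTOR sqlda;
        if( SQLCODE != 0 ) break;   /* SQLE_NOTFOUND at end of result set */
        /* ... use the values pointed to by sqlda->sqlvar[i].sqldata ... */
    }
    EXEC SQL CLOSE cur;
    free_filled_sqlda( sqlda );
    EXEC SQL DROP STATEMENT sel_stmt;
}
```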
Note
To avoid consuming unnecessary resources, ensure that statements are dropped after use.
Related Information
The SQLDA (SQL Descriptor Area) is an interface structure that is used for dynamic SQL statements. The
structure is used to pass information regarding host variables and SELECT statement results to and from the
database. The SQLDA is defined in the header file sqlda.h.
There are functions in the database interface shared library or DLL that you can use to manage SQLDAs.
When host variables are used with static SQL statements, the preprocessor constructs a SQLDA for those host
variables. It is this SQLDA that is actually passed to and from the database server.
In this section:
Related Information
The SQLDA (SQL Descriptor Area) data structure is described by the sqlda.h header file.
#ifndef _SQLDA_H_INCLUDED
#define _SQLDA_H_INCLUDED
#define II_SQLDA
#include "sqlca.h"
#if defined( _SQL_PACK_STRUCTURES )
#if defined( _MSC_VER ) && _MSC_VER > 800
#pragma warning(push)
#pragma warning(disable:4103)
#endif
#include "pshpk1.h"
#endif
#define SQL_MAX_NAME_LEN 30
#define _sqldafar
typedef short int a_sql_type;
struct sqlname {
short int length; /* length of char data */
char data[ SQL_MAX_NAME_LEN ]; /* data */
};
struct sqlvar { /* array of variable descriptors */
short int sqltype; /* type of host variable */
a_sql_len sqllen; /* length of host variable */
void *sqldata; /* address of variable */
a_sql_len *sqlind; /* indicator variable pointer */
struct sqlname sqlname;
};
#if defined( _SQL_PACK_STRUCTURES )
#include "poppk.h"
/* The SQLDA should be 4-byte aligned */
#include "pshpk4.h"
#endif
struct sqlda {
unsigned char sqldaid[8]; /* eye catcher "SQLDA" */
a_sql_int32 sqldabc; /* length of sqlda structure */
short int sqln; /* descriptor size in number of entries */
short int sqld; /* number of variables found by DESCRIBE */
struct sqlvar sqlvar[1]; /* array of variable descriptors */
};
#define SCALE(sqllen) ((sqllen)/256)
#define PRECISION(sqllen) ((sqllen)&0xff)
#define SET_PRECISION_SCALE(sqllen,precision,scale) \
sqllen = (scale)*256 + (precision)
#define DECIMALSTORAGE(sqllen) (PRECISION(sqllen)/2 + 1)
typedef struct sqlda SQLDA;
typedef struct sqlvar SQLVAR, SQLDA_VARIABLE;
typedef struct sqlname SQLNAME, SQLDA_NAME;
#ifndef SQLDASIZE
#define SQLDASIZE(n) ( sizeof( struct sqlda ) + \
(n-1) * sizeof( struct sqlvar) )
#endif
#if defined( _SQL_PACK_STRUCTURES )
#include "poppk.h"
#if defined( _MSC_VER ) && _MSC_VER > 800
#pragma warning(pop)
#endif
#endif
#endif
The SQLDA (SQL Descriptor Area) is a data structure consisting of a number of fields.
sqld
The number of variable descriptors that are valid (contain information describing a host variable). This
field is set by the DESCRIBE statement. As well, you can set it when supplying data to the database
server.
sqltype
The low order bit indicates whether NULL values are allowed. Valid types and constant definitions can be
found in the sqldef.h header file.
This field is filled by the DESCRIBE statement. You can set this field to any type when supplying data to the
database server or retrieving data from the database server. Any necessary type conversion is done
automatically.
sqllen
The length of the variable. A sqllen value has type a_sql_len. What the length actually means depends on
the type information and how the SQLDA is being used.
For LONG VARCHAR, LONG NVARCHAR, and LONG BINARY data types, the array_len field of the
DT_LONGVARCHAR, DT_LONGNVARCHAR, or DT_LONGBINARY data type structure is used instead of the
sqllen field.
sqldata
A pointer to the memory occupied by this variable. This memory must correspond to the sqltype and
sqllen fields.
If the DESCRIBE statement uses LONG NAMES, this field holds the long name of the result set column. If,
in addition, the DESCRIBE statement is a DESCRIBE USER TYPES statement, then this field holds the long
name of the user-defined data type, instead of the column. If the type is a base type, the field is empty.
sqlind
A pointer to the indicator value. An indicator value has type a_sql_len. A negative indicator value indicates a
NULL value. A positive indicator value indicates that this variable has been truncated by a FETCH
statement, and the indicator value contains the length of the data before truncation. A value of -2 indicates
a conversion error if the conversion_error database option is set to Off.
If the sqlind pointer is the null pointer, no indicator variable pertains to this host variable.
The sqlind field is also used by the DESCRIBE statement to indicate parameter types. If the type is a user-
defined data type, this field is set to DT_HAS_USERTYPE_INFO. In this case, perform a DESCRIBE USER
TYPES to obtain information about the user-defined data types.
sqlname
struct sqlname {
short int length;
char data[ SQL_MAX_NAME_LEN ];
};
It is filled by a DESCRIBE statement and is not otherwise used. This field has a different meaning for the
two formats of the DESCRIBE statement:
SELECT LIST
The name data buffer is filled with the column heading of the corresponding item in the SELECT list.
BIND VARIABLES
The name data buffer is filled with the name of the host variable that was used as a bind variable, or "?"
if an unnamed parameter marker is used.
On a DESCRIBE SELECT LIST statement, any indicator variables present are filled with a flag indicating
whether the SELECT list item is updatable or not. More information about this flag can be found in the
sqldef.h header file.
If the DESCRIBE statement is a DESCRIBE USER TYPES statement, then this field holds the long name of
the user-defined data type instead of the column. If the type is a base type, the field is empty.
Related Information
The DESCRIBE statement gets information about the host variables required to store data retrieved from the
database, or host variables required to pass data to the database.
The following table indicates the values of the sqllen and sqltype structure members returned by the
DESCRIBE statement for the various database types (both SELECT LIST and BIND VARIABLE DESCRIBE
statements). For a user-defined database data type, the base type is described.
Your program can use the types and lengths returned from a DESCRIBE, or you may use another type. The
database server performs type conversions between any two types. The memory pointed to by the sqldata field
must correspond to the sqltype and sqllen fields. The Embedded SQL type is obtained by a bitwise AND of
sqltype with DT_TYPES (sqltype & DT_TYPES).
Database Type    Embedded SQL Type    sqllen
BIGINT           DT_BIGINT            8
BINARY(n)        DT_BINARY            n
BIT              DT_BIT               1
DOUBLE           DT_DOUBLE            8
FLOAT            DT_FLOAT             4
INT              DT_INT               4
REAL             DT_FLOAT             4
SMALLINT         DT_SMALLINT          2
TINYINT          DT_TINYINT           1
1 The type returned for CHAR and VARCHAR may be DT_LONGVARCHAR if the maximum byte length in the
client's CHAR character set is greater than 32767 bytes.
2 The type returned for NCHAR and NVARCHAR may be DT_LONGNVARCHAR if the maximum byte length in
the client's NCHAR character set is greater than 32767 bytes. NCHAR, NVARCHAR, and LONG NVARCHAR are
described by default as either DT_FIXCHAR, DT_VARCHAR, or DT_LONGVARCHAR, respectively. If the
db_change_nchar_charset function has been called, the types are described as DT_NFIXCHAR,
DT_NVARCHAR, and DT_LONGNVARCHAR, respectively.
The manner in which the length of the data being sent to the database is determined depends on the data type.
For non-long data types, the sqllen field of the sqlvar structure in the SQLDA represents the maximum size
of the data buffer. For example, a column described as VARCHAR(300) will have a sqllen value of 300,
representing the maximum length for that column. For blank-padded data types such as DT_FIXCHAR, the
sqllen field represents the maximum size of the data buffer. For fixed-size data types such as integers and
DT_TIMESTAMP_STRUCT, the sqllen field is ignored and the length need not be specified. For long data
types, the array_len field specifies the maximum length of the data buffer. The sqllen field is never
modified when you send or retrieve data.
Only the data types displayed in the table below are allowed. The DT_DATE, DT_TIME, and DT_TIMESTAMP
data types are treated the same as DT_STRING when you send or retrieve information. The value is formatted
as a character string in the current date format.
DT_BINARY(n)
    Set the len field to the length in bytes of data in the array field of
    the BINARY structure. The maximum length is 32767.

DT_NVARCHAR(n)
    Set the len field to the length in bytes of data in the array field of
    the NVARCHAR structure.

DT_VARCHAR(n)
    Set the len field to the length in bytes of data in the array field of
    the VARCHAR structure. The maximum length is 32767.
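As a sketch of this convention, the following plain C fragment mocks the VARCHAR structure layout and sets its len field before sending. The real structure definition lives in sqlca.h; the mock below and its fixed array size are illustrative assumptions.

```c
#include <string.h>

/* Illustrative mock of the Embedded SQL VARCHAR structure; the real
   definition is in sqlca.h. Only the fields discussed above are modeled. */
typedef struct {
    unsigned short len;        /* length in bytes of data in array */
    char           array[300]; /* data to send; not null-terminated */
} mock_VARCHAR;

/* Copy a C string into the structure and set len, as required before
   sending a DT_VARCHAR value to the database. */
static void set_varchar( mock_VARCHAR *v, const char *data )
{
    size_t n = strlen( data );   /* byte length, excluding '\0'       */
    memcpy( v->array, data, n ); /* the null terminator is not sent   */
    v->len = (unsigned short)n;  /* must not exceed 32767             */
}
```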
Related Information
The manner in which the length of the data being retrieved from the database is determined depends on the
data type.
For non-long data types, the sqllen field of the sqlvar structure in the SQLDA represents the maximum size
of the data buffer. For example, a column described as VARCHAR(300) will have a sqllen value of 300,
representing the maximum length for that column. For blank-padded data types such as DT_FIXCHAR, the
sqllen field represents the maximum size of the data buffer. For fixed-size data types such as integers and
DT_TIMESTAMP_STRUCT, the sqllen field is ignored and the length need not be specified. For long data
types, the array_len field specifies the maximum length of the data buffer. The sqllen field is never
modified when you send or retrieve data.
The following list shows, for each Embedded SQL data type, how the length of
the data area is specified before fetching a value, and how the length of the
value is determined after fetching a value.

DT_BINARY(n)
    Before fetching: the sqllen field is set to the maximum length in bytes
    of the array field of the BINARY structure plus the size of the len
    field (n+2). The maximum length is 32767.
    After fetching: the len field of the BINARY structure is updated to the
    actual length in bytes of data in the array field of the BINARY
    structure. The length will not exceed 32765.

DT_DATE
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: a null character is placed at the end of the string.

DT_FIXCHAR(n)
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: the value is blank-padded to the sqllen value.

DT_NFIXCHAR
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: the value is blank-padded to the sqllen value.

DT_NSTRING
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: the string is at most 32766 bytes long. A null character
    is placed at the end of the string.

DT_NVARCHAR(n)
    Before fetching: the sqllen field is set to the maximum length in bytes
    of the array field of the NVARCHAR structure plus the size of the len
    field (n+2). The maximum length is 32767.
    After fetching: the len field of the NVARCHAR structure is updated to
    the actual length in bytes of data in the array field of the NVARCHAR
    structure. The length will not exceed 32765.

DT_STRING
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: the string is at most 32766 bytes long. A null character
    is placed at the end of the string.

DT_TIME
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: a null character is placed at the end of the string.

DT_TIMESTAMP
    Before fetching: the sqllen field is set to the length in bytes of the
    sqldata area. The maximum length is 32767.
    After fetching: a null character is placed at the end of the string.

DT_TIMESTAMP_STRUCT
    Fixed size. No action is required before or after fetching.

DT_VARCHAR(n)
    Before fetching: the sqllen field is set to the maximum length in bytes
    of the array field of the VARCHAR structure plus the size of the len
    field (n+2). The maximum length is 32767.
    After fetching: the len field of the VARCHAR structure is updated to the
    actual length in bytes of data in the array field. The length will not
    exceed 32765.
Related Information
Use an INTO clause to assign the returned values directly to host variables.
The SELECT statement may return multiple rows.
In this section:
A single row query retrieves at most one row from the database.
A single-row query SELECT statement has an INTO clause following the SELECT list and before the FROM
clause. The INTO clause contains a list of host variables to receive the value for each SELECT list item. There
must be the same number of host variables as there are SELECT list items. The host variables may be
accompanied by indicator variables to indicate NULL results.
When the SELECT statement is executed, the database server retrieves the results and places them in the host
variables. If the query results contain more than one row, the database server returns an error.
If the query results in no rows being selected, a warning is returned indicating that no rows can be found
(SQLCODE 100, SQLE_NOTFOUND). Errors and warnings are returned in the SQLCA structure.
Example
The following code fragment returns 1 if a row from the Employees table is fetched successfully, 0 if the row
doesn't exist, and -1 if an error occurs.
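A minimal sketch of such a fragment follows. The Employees table, Surname column, and host variable names are assumptions based on typical sample schemas, and the fragment requires the sqlpp preprocessor and an open connection.

```c
#include <string.h>
EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
long  employee_id;
char  surname[ 41 ];
short surname_ind;               /* indicator variable for NULL results */
EXEC SQL END DECLARE SECTION;

int find_employee( long id, char *name, unsigned max_len )
{
    employee_id = id;
    EXEC SQL SELECT Surname
             INTO :surname:surname_ind
             FROM Employees
             WHERE EmployeeID = :employee_id;
    if( SQLCODE == SQLE_NOTFOUND ) return 0;  /* no row (SQLCODE 100) */
    if( SQLCODE < 0 ) return -1;              /* error                */
    strncpy( name, surname, max_len );
    return 1;                                 /* one row fetched      */
}
```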
Related Information
A cursor is used to retrieve rows from a query that has multiple rows in its result set.
A cursor is a handle or an identifier for the SQL query and a position within the result set. Cursor management
in Embedded SQL involves the following steps:
1. Declare a cursor for a particular SELECT statement, using the DECLARE CURSOR statement.
2. Open the cursor using the OPEN statement.
3. Retrieve results one row at a time from the cursor using the FETCH statement.
4. Fetch rows until the Row Not Found warning is returned.
Errors and warnings are returned in the SQLCA structure.
5. Close the cursor, using the CLOSE statement.
By default, cursors are automatically closed at the end of a transaction (on COMMIT or ROLLBACK). Cursors
that are opened with a WITH HOLD clause are kept open for subsequent transactions until they are explicitly
closed.
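The five steps above can be sketched as follows. The query and host variable names are illustrative, and the fragment requires the sqlpp preprocessor and an open connection.

```c
#include <stdio.h>
EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
char name[ 41 ];
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE emp_cursor CURSOR FOR          /* 1. declare         */
    SELECT Surname FROM Employees;
EXEC SQL OPEN emp_cursor;                       /* 2. open            */
for( ;; )
{
    EXEC SQL FETCH NEXT emp_cursor INTO :name;  /* 3. fetch one row   */
    if( SQLCODE == SQLE_NOTFOUND ) break;       /* 4. Row Not Found   */
    if( SQLCODE < 0 ) break;                    /* error              */
    printf( "%s\n", name );
}
EXEC SQL CLOSE emp_cursor;                      /* 5. close           */
```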
Cursor Positioning

A cursor can be positioned in one of three places:

● On a row
● Before the first row
● After the last row
When a cursor is opened, it is positioned before the first row. The cursor position can be moved using the
FETCH statement. It can be positioned to an absolute position either from the start or from the end of the
query results. It can also be moved relative to the current cursor position.
There are special positioned versions of the UPDATE and DELETE statements that can be used to update or
delete the row at the current position of the cursor. If the cursor is positioned before the first row or after the
last row, an error is returned indicating that there is no corresponding row in the cursor.
Inserts and some updates to DYNAMIC SCROLL cursors can cause problems with cursor positioning. The
database server does not put inserted rows at a predictable position within a cursor unless there is an ORDER
BY clause on the SELECT statement. Sometimes the inserted row does not appear until the cursor is closed
and opened again.
The UPDATE statement can cause a row to move in the cursor. This happens if the cursor has an ORDER BY
clause that uses an existing index (a temporary table is not created).
Related Information
The FETCH statement can be modified to fetch more than one row at a time, which may improve performance.
This is called a wide fetch or an array fetch.
To use wide fetches in Embedded SQL, include the FETCH statement in your code as follows:
where ARRAY nnn is the last item of the FETCH statement. The fetch count nnn can be a host variable. The
number of variables in the SQLDA must be the product of nnn and the number of columns per row. The first
row is placed in SQLDA variables 0 to (columns per row) - 1, and so on.
Each column must be of the same type in each row of the SQLDA, or a SQLDA_INCONSISTENT error is
returned.
The server returns in SQLCOUNT the number of records that were fetched, which is always greater than zero
unless there is an error or warning. On a wide fetch, a SQLCOUNT of 1 with no error condition indicates that one
valid row has been fetched.
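The row-major, zero-based variable layout described above can be expressed as a small helper. The name sqlda_index is hypothetical and not part of the interface library; it only encodes the arithmetic stated in the text.

```c
/* Hypothetical helper: index of the SQLDA variable that receives column
   `col` of fetched row `row` in a wide fetch, using the layout above:
   row 0 occupies variables 0 .. cols_per_row-1, row 1 the next block,
   and so on. */
static unsigned sqlda_index( unsigned row, unsigned col,
                             unsigned cols_per_row )
{
    return row * cols_per_row + col;
}
```

For a wide fetch of 5 rows with 3 columns per row, the SQLDA must therefore contain 15 variables.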
Example
The following example code illustrates the use of wide fetches. The complete example is found in
%SQLANYSAMP17%\SQLAnywhere\esqlwidefetch\widefetch.sqc.
● In the function PrepareSQLDA, the SQLDA memory is allocated using the alloc_sqlda function. This allows
space for indicator variables, rather than using the alloc_sqlda_noind function.
● If the number of rows fetched is fewer than the number requested, but is not zero (at the end of the cursor
for example), the SQLDA items corresponding to the rows that were not fetched are returned as NULL by
setting the indicator value. If no indicator variables are present, an error is generated
(SQLE_NO_INDICATOR: no indicator variable for NULL result).
● If a row being fetched has been updated, generating a SQLE_ROW_UPDATED_WARNING warning, the fetch
stops on the row that caused the warning. The values for all rows processed to that point (including the row
that caused the warning) are returned. SQLCOUNT contains the number of rows that were fetched,
including the row that caused the warning. All remaining SQLDA items are marked as NULL.
● If a row being fetched has been deleted or is locked, generating a SQLE_NO_CURRENT_ROW or
SQLE_LOCKED error, SQLCOUNT contains the number of rows that were read before the error. This does
not include the row that caused the error. The SQLDA does not contain values for any of the rows since
SQLDA values are not returned on errors. The SQLCOUNT value can be used to reposition the cursor, if
necessary, to read the rows.
The INSERT statement can be used to insert more than one row at a time, which may improve performance.
This is called a wide insert or an array insert.
To use wide inserts in Embedded SQL, prepare and then execute an INSERT statement in your code as follows:
where ARRAY nnn is the last part of the EXECUTE statement. The batch size nnn can be a host variable. The
number of variables in the SQLDA must be the product of the batch size and the number of placeholders in the
statement to be executed.
Each variable must be of the same type in each row of the SQLDA, or a SQLDA_INCONSISTENT error is
returned.
Example
The following complete code example illustrates the use of wide inserts.
// [wideinsert.sqc]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sqldef.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL WHENEVER SQLERROR { PrintSQLError();
goto err; };
static void PrintSQLError()
{
char buffer[200];
printf( "SQL error %d -- %s\n",
SQLCODE,
sqlerror_message( &sqlca,
buffer,
sizeof( buffer ) ) );
}
unsigned RowsToInsert = 20;
unsigned short BatchSize = 5;
char * ConnectStr = "";
static void Usage()
{
fprintf( stderr, "Usage: wideinsert [options] \n" );
fprintf( stderr, "Options:\n" );
fprintf( stderr, " -n nnn : number of rows to insert (default: 20)\n" );
fprintf( stderr, " -b nnn : insert nnn rows at a time (default: 5)\n" );
// Prepare the static parts of the SQLDA object for the insert
sqlda->sqld = batch_size * NUM_PARAMS;
for( unsigned short current_row = 0; current_row < batch_size; current_row++ )
{
var = &sqlda->sqlvar[ current_row * NUM_PARAMS + 0 ];
var->sqltype = DT_INT;
var->sqllen = sizeof( int );
var = &sqlda->sqlvar[ current_row * NUM_PARAMS + 1 ];
var->sqltype = DT_STRING;
var->sqllen = 30;
var = &sqlda->sqlvar[ current_row * NUM_PARAMS + 2 ];
var->sqltype = DT_INT;
var->sqllen = sizeof( int );
}
fill_sqlda( sqlda );
printf( "Insert %u rows into table \"WideInsertSample\" with batch size %u.\n",
        rows_to_insert,
        batch_size );
● The size of the SQLDA is based on the size of the batch (the maximum number of rows that you want to
insert at a time) multiplied by the number of parameters or columns that you want to insert
(NUM_PARAMS in the following examples). Memory for the SQLDA is allocated using the
alloc_sqlda_noind function. This function does not allocate space for indicator variables.
● The entire SQLDA is initialized one row and column at a time. In general, the position in the SQLDA for each
row and column is calculated using zero-based offsets as in the following example:
● Once this has been done, the fill_sqlda routine can be called to allocate buffers for the column values in all
rows of the batch. Values are stuffed into the buffers using zero-based offsets prior to executing the
prepared INSERT statement. The following is an example for storing an integer value.
● The number of rows that were inserted is returned in SQLCOUNT, which is always greater than zero unless
there is an error or warning.
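The per-row buffer stuffing mentioned in the bullets above can be sketched with mock structures. The real sqlvar and SQLDA definitions are in the interface headers, and in a real program fill_sqlda allocates the buffers; everything below is illustrative.

```c
#define NUM_PARAMS 3   /* columns per inserted row, as in the sample */

/* Minimal mocks of the interface structures; only the fields used
   here are modeled. */
struct mock_sqlvar { short sqltype; short sqllen; void *sqldata; };
struct mock_sqlda  { int sqld; struct mock_sqlvar sqlvar[ 20 ]; };

/* Store an integer value for (row, param) using the zero-based,
   row-major layout described above. In a real program, sqldata points
   at a buffer allocated by fill_sqlda(). */
static void put_int( struct mock_sqlda *sqlda, int row, int param, int value )
{
    struct mock_sqlvar *var = &sqlda->sqlvar[ row * NUM_PARAMS + param ];
    *(int *)var->sqldata = value;
}
```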
The DELETE statement can be used to delete an arbitrary set of rows, which may improve performance. This is
called a wide delete or an array delete.
To use wide deletes in Embedded SQL, prepare and then execute a DELETE statement in your code as follows:
where ARRAY nnn is the last part of the EXECUTE statement. The batch size nnn can be a host variable. The
number of variables in the SQLDA must be the product of the batch size and the number of placeholders in the
statement to be executed.
Each variable must be of the same type in each row of the SQLDA, or a SQLDA_INCONSISTENT error is
returned.
Example
The following complete code example illustrates the use of wide deletes.
// [widedelete.sqc]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sqldef.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL WHENEVER SQLERROR { PrintSQLError();
goto err; };
static void PrintSQLError()
{
char buffer[200];
printf( "SQL error %d -- %s\n",
SQLCODE,
sqlerror_message( &sqlca,
buffer,
sizeof( buffer ) ) );
}
unsigned RowsToDelete[100];
unsigned short BatchSize = 0;
char * ConnectStr = "";
static void Usage()
{
fprintf( stderr, "Usage: widedelete [options] \n" );
fprintf( stderr, "Options:\n" );
// Prepare the static parts of the SQLDA object for the delete
sqlda->sqld = batch_size;
for( unsigned short current_row = 0; current_row < batch_size; current_row++ )
{
var = &sqlda->sqlvar[ current_row ];
var->sqltype = DT_INT;
var->sqllen = sizeof( int );
}
fill_sqlda( sqlda );
● To try this example, use the example in the wide inserts topic to populate the WideInsertSample table.
● The size of the SQLDA is based on the size of the batch (the maximum number of rows that you want to
delete at a time) multiplied by the number of parameters in the DELETE statement. In this example, there
is only 1 parameter but you could have more. Memory for the SQLDA is allocated using the
alloc_sqlda_noind function. This function does not allocate space for indicator variables.
● The entire SQLDA is initialized one row and parameter at a time. In general, the position in the SQLDA for
each row and parameter is calculated using zero-based offsets as in the following example (for this
DELETE example, num_params is 1):
● Once this has been done, the fill_sqlda routine can be called to allocate buffers for the parameter values in
all rows of the batch. Values are stuffed into the buffers using zero-based offsets prior to executing the
prepared DELETE statement. The following is an example for storing the integer row number.
● The prepared DELETE statement is executed using the EXEC SQL EXECUTE statement and the number of
rows to delete is specified by the ARRAY clause. The following is an example.
● The number of rows that were deleted is returned in SQLCOUNT, which is always greater than zero unless
no row matched any of the specified rows (for example, the rows were already deleted) or there is an error
or warning.
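A minimal sketch of the prepare-and-execute sequence follows. It assumes a rec_id column in the WideInsertSample table; the statement and variable names are illustrative, and the fragment requires the sqlpp preprocessor and an open connection.

```c
EXEC SQL BEGIN DECLARE SECTION;
char *stmt_text = "DELETE FROM WideInsertSample WHERE rec_id = ?";
unsigned short batch_size;
EXEC SQL END DECLARE SECTION;

EXEC SQL PREPARE delete_stmt FROM :stmt_text;
/* ... initialize the SQLDA, call fill_sqlda(), and stuff one rec_id
   value per row of the batch ... */
EXEC SQL EXECUTE delete_stmt USING DESCRIPTOR sqlda ARRAY :batch_size;
/* SQLCOUNT now holds the number of rows deleted */
```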
The MERGE statement can be used to merge multiple sets of rows into a table, which may improve
performance. This is called a wide merge or an array merge.
To use wide merges in Embedded SQL, prepare and then execute a MERGE statement in your code as follows:
where ARRAY nnn is the last part of the EXECUTE statement. The batch size nnn can be a host variable. The
number of variables in the SQLDA must be the product of the batch size and the number of placeholders in the
statement to be executed.
Each variable must be of the same type in each row of the SQLDA, or a SQLDA_INCONSISTENT error is
returned.
// [widemerge.sqc]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sqldef.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL WHENEVER SQLERROR { PrintSQLError();
goto err; };
static void PrintSQLError()
{
char buffer[200];
printf( "SQL error %d -- %s\n",
SQLCODE,
sqlerror_message( &sqlca,
buffer,
sizeof( buffer ) ) );
}
char * ConnectStr = "";
static void Usage()
{
fprintf( stderr, "Usage: widemerge [options] \n" );
fprintf( stderr, "Options:\n" );
fprintf( stderr, " -c conn_str : database connection string (required)\n" );
}
static int ArgumentIsASwitch( char * arg )
{
#if defined( UNIX )
return ( arg[0] == '-' );
#else
return ( arg[0] == '-' ) || ( arg[0] == '/' );
#endif
}
static int ProcessOptions( char * argv[] )
{
int argc;
char * arg;
char opt;
#define _get_arg_param() \
arg += 2; \
if( !arg[0] ) arg = argv[++argc]; \
if( arg == NULL ) \
{ \
fprintf( stderr, "Missing argument parameter\n" ); \
return( -1 ); \
}
for( argc = 1; (arg = argv[argc]) != NULL; ++ argc )
{
if( !ArgumentIsASwitch( arg ) ) break;
opt = arg[1];
switch( opt )
{
case 'c':
_get_arg_param();
ConnectStr = arg;
break;
default:
fprintf( stderr, "**** Unknown option: -%c\n", opt );
Usage();
return( -1 );
}
}
batch_size = 4;
fill_sqlda( sqlda );
● The entire SQLDA is initialized one row and parameter at a time. In general, the position in the SQLDA for
each row and parameter is calculated using zero-based offsets as in the following example:
● Once this has been done, the fill_sqlda routine can be called to allocate buffers for the parameter values in
all rows of the batch. Values are stuffed into the buffers using zero-based offsets prior to executing the
prepared MERGE statement. The following is an example for storing the region string and the
representative number for a given row.
● The prepared MERGE statement is executed using the EXEC SQL EXECUTE statement and the size of the
batch is specified by the ARRAY clause. The following is an example.
● The number of rows that were affected is returned in SQLCOUNT, which is always greater than zero unless
no row matched any of the specified rows (for example, the rows were already present) or there is an error
or warning.
The method for sending and retrieving LONG VARCHAR, LONG NVARCHAR, and LONG BINARY values in
Embedded SQL applications is different from that for other data types.
The standard SQLDA fields are limited to 32767 bytes of data as the fields holding the length information
(sqllen, *sqlind) are 16-bit values. Changing these values to 32-bit values would break existing applications.
The method of describing LONG VARCHAR, LONG NVARCHAR, and LONG BINARY values is the same as for
other data types.
Separate fields are used to hold the allocated, stored, and untruncated lengths of LONG BINARY, LONG
VARCHAR, and LONG NVARCHAR data types. The static SQL data types are defined in sqlca.h as follows:
The size+1 allocation does not indicate that the LONGVARCHAR/LONGNVARCHAR array is null-terminated
by the client library. The extra byte is included for those applications that wish to null-terminate the chunk that
has been fetched from the database. Use the stored_len field to determine the amount of data fetched.
When any of these macros are used in an Embedded SQL DECLARE SECTION, the array_len field is
initialized to size. Otherwise, the array_len field is not initialized.
For dynamic SQL, set the sqltype field to DT_LONGVARCHAR, DT_LONGNVARCHAR, or DT_LONGBINARY as
appropriate. The associated LONGVARCHAR, LONGNVARCHAR, and LONGBINARY structure is as follows:
For both static and dynamic SQL structures, the structure members are defined as follows:
array_len
(Sending and retrieving.) The number of bytes allocated for the array part of the structure.
stored_len

(Sending and retrieving.) The number of bytes stored in the array. Always less than or equal to array_len.

untrunc_len

(Retrieving only.) The number of bytes that would be stored in the array if the value was not truncated.
Always greater than or equal to stored_len. If truncation occurs, this value is larger than array_len.
In this section:
Retrieve a LONG VARCHAR, LONG NVARCHAR, or LONG BINARY value using static SQL.
Procedure
sqlind
The sqlind field points to an indicator. The indicator value is negative if the value is NULL, 0 if there is
no truncation, and is the positive untruncated length in bytes up to a maximum of 32767. If the
indicator value is positive, use the untrunc_len field instead.
stored_len
The number of bytes stored in the array. Always less than or equal to array_len and untrunc_len.
untrunc_len
The number of bytes that would be stored in the array if the value was not truncated. Always greater
than or equal to stored_len. If truncation occurs, this value is larger than array_len.
array
This area contains the data fetched. The data is not null-terminated.
Example
The following code fragment illustrates the mechanics of retrieving a LONG VARCHAR using static Embedded
SQL. It is not intended to be a practical application.
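A minimal sketch of such a fragment follows, assuming a hypothetical Documents table with a LONG VARCHAR Notes column. The DECL_LONGVARCHAR macro comes from sqlca.h, and the fragment requires the sqlpp preprocessor and an open connection.

```c
EXEC SQL BEGIN DECLARE SECTION;
DECL_LONGVARCHAR( 4096 ) note;     /* array_len is initialized to 4096 */
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT Notes INTO :note FROM Documents WHERE doc_id = 1;
if( SQLCODE == SQLE_NOERROR )
{
    /* stored_len bytes were fetched; the data is not null-terminated,
       so use the extra allocated byte to terminate it if a C string
       is needed. */
    note.array[ note.stored_len ] = '\0';
    if( note.untrunc_len > note.stored_len )
    {
        /* the value was truncated to fit array_len */
    }
}
```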
Related Information
Retrieve a LONG VARCHAR, LONG NVARCHAR, or LONG BINARY value using dynamic SQL.
Procedure
sqlind

The sqlind field points to an indicator. The indicator value is negative if the value is NULL, 0 if there is
no truncation, and is the positive untruncated length in bytes up to a maximum of 32767. If the
indicator value is positive, use the untrunc_len field instead.

stored_len

The number of bytes stored in the array. Always less than or equal to array_len and untrunc_len.
untrunc_len
The number of bytes that would be stored in the array if the value was not truncated. Always greater
than or equal to stored_len. If truncation occurs, this value is larger than array_len.
array
This area contains the data fetched. The data is not null-terminated.
Results
Example
The following code fragment illustrates the mechanics of retrieving LONG VARCHAR data using dynamic
Embedded SQL. It is not intended to be a practical application:
Related Information
Send LONG values to the database using static SQL from an Embedded SQL application.
Procedure
Results
The Embedded SQL application is ready to send LONG values to the database.
Example
The following code fragment illustrates the mechanics of sending a LONG VARCHAR using static Embedded
SQL. It is not intended to be a practical application.
Related Information
Send LONG values to the database using dynamic SQL from an Embedded SQL application.
Procedure
Results
The Embedded SQL application is ready to send LONG values to the database.
Related Information
You can create and call stored procedures using Embedded SQL.
You can embed a CREATE PROCEDURE just like any other data definition statement, such as CREATE TABLE.
You can also embed a CALL statement to execute a stored procedure. The following code fragment illustrates
both creating and executing a stored procedure in Embedded SQL:
To pass host variable values to a stored procedure or to retrieve the output variables, you prepare and execute
a CALL statement. The following code fragment illustrates the use of host variables. Both the USING and INTO
clauses are used on the EXECUTE statement.
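A minimal sketch combining both fragments follows, assuming hypothetical pettycash and showbalance procedures; it requires the sqlpp preprocessor and an open connection.

```c
EXEC SQL BEGIN DECLARE SECTION;
double hv_expense;
double hv_balance;
EXEC SQL END DECLARE SECTION;

/* Embed a CREATE PROCEDURE like any other data definition statement. */
EXEC SQL CREATE PROCEDURE pettycash( IN amount DECIMAL(10,2) )
BEGIN
    UPDATE account SET balance = balance - amount WHERE name = 'bank';
    UPDATE account SET balance = balance + amount WHERE name = 'cash';
END;

/* Pass a host variable value in, and retrieve an output value. */
hv_expense = 10.72;
EXEC SQL PREPARE S1 FROM 'CALL pettycash( ? )';
EXEC SQL EXECUTE S1 USING :hv_expense;         /* pass an IN parameter  */

EXEC SQL PREPARE S2 FROM 'CALL showbalance( ? )';
EXEC SQL EXECUTE S2 INTO :hv_balance;          /* retrieve an OUT value */
```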
In this section:
Database procedures can also contain SELECT statements. The procedure is declared using a RESULT clause
to specify the number, name, and types of the columns in the result set.
Result set columns are different from output parameters. For procedures with result sets, the CALL statement
can be used in place of a SELECT statement in the cursor declaration.
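A minimal sketch of such a cursor declaration follows, assuming a hypothetical procedure with a RESULT clause; it requires the sqlpp preprocessor and an open connection.

```c
EXEC SQL BEGIN DECLARE SECTION;
char emp_name[ 41 ];
EXEC SQL END DECLARE SECTION;

EXEC SQL CREATE PROCEDURE female_employees()
    RESULT( name CHAR(40) )
BEGIN
    SELECT GivenName FROM Employees WHERE Sex = 'F';
END;

EXEC SQL PREPARE S1 FROM 'CALL female_employees()';
EXEC SQL DECLARE C1 CURSOR FOR S1;
EXEC SQL OPEN C1;                  /* runs the procedure to its SELECT */
for( ;; )
{
    EXEC SQL FETCH C1 INTO :emp_name;
    if( SQLCODE != SQLE_NOERROR ) break;
}
EXEC SQL CLOSE C1;                 /* stops execution of the procedure */
```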
In this example, the procedure has been invoked with an OPEN statement rather than an EXECUTE statement.
The OPEN statement causes the procedure to execute until it reaches a SELECT statement. At this point, C1 is
a cursor for the SELECT statement within the database procedure. You can use all forms of the FETCH
statement (backward and forward scrolling) until you are finished with it. The CLOSE statement stops
execution of the procedure.
If there had been another statement following the SELECT in the procedure, it would not have been executed.
To execute statements following a SELECT, use the RESUME cursor-name statement. The RESUME statement
either returns the warning SQLE_PROCEDURE_COMPLETE or it returns SQLE_NOERROR, indicating that there
is another result set. The following example illustrates a two-select procedure:
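A minimal sketch of the RESUME loop follows, assuming a hypothetical two_selects procedure and a host variable :val declared elsewhere; it requires the sqlpp preprocessor and an open connection.

```c
EXEC SQL PREPARE S1 FROM 'CALL two_selects()';
EXEC SQL DECLARE C1 CURSOR FOR S1;
EXEC SQL OPEN C1;                       /* runs up to the first SELECT */
for( ;; )
{
    for( ;; )
    {
        EXEC SQL FETCH C1 INTO :val;    /* read the current result set */
        if( SQLCODE != SQLE_NOERROR ) break;
    }
    EXEC SQL RESUME C1;                 /* continue past the SELECT    */
    if( SQLCODE == SQLE_PROCEDURE_COMPLETE ) break;
    /* SQLE_NOERROR: another result set is available on C1 */
}
EXEC SQL CLOSE C1;
```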
These examples have used static cursors. Full dynamic cursors can also be used for the CALL statement.
The DESCRIBE statement works fully for procedure calls. A DESCRIBE OUTPUT produces a SQLDA that has a
description for each of the result set columns.
If the procedure does not have a result set, the SQLDA has a description for each INOUT or OUT parameter for
the procedure. A DESCRIBE INPUT statement produces a SQLDA having a description for each IN or INOUT
parameter for the procedure.
DESCRIBE ALL
DESCRIBE ALL describes IN, INOUT, OUT, and RESULT set parameters. DESCRIBE ALL uses the indicator
variables in the SQLDA to provide additional information.
The DT_PROCEDURE_IN and DT_PROCEDURE_OUT bits are set in the indicator variable when a CALL
statement is described. DT_PROCEDURE_IN indicates an IN or INOUT parameter and DT_PROCEDURE_OUT
indicates an INOUT or OUT parameter. Procedure RESULT columns have both bits clear.
After a DESCRIBE OUTPUT, these bits can be used to distinguish between statements that have result sets
(need to use OPEN, FETCH, RESUME, CLOSE) and statements that do not (need to use EXECUTE).
If you have a procedure that returns multiple result sets, you must re-describe after each RESUME statement if
the result sets change shapes.
Describe the cursor, not the statement, to re-describe the current position of the cursor.
Related Information
A typical Embedded SQL application must wait for the completion of each database request before carrying
out the next step. An application that uses multiple execution threads, however, can carry on with other tasks
while a request is in progress.
If you must use a single execution thread, then some degree of multitasking can be accomplished by
registering a callback function using the db_register_a_callback function with the DB_CALLBACK_WAIT option.
Your callback function is called repeatedly by the interface library while the database server or client library is
busy processing your database request.
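A minimal sketch of such a registration follows. The handler body is illustrative, and the exact handler signature and the SQL_CALLBACK_PARM cast shown here are assumptions about the header's typedefs.

```c
/* Called repeatedly by the interface library while the database server
   or client library is busy processing a request; do idle-time work
   here (for example, service a UI message loop). */
void my_wait_proc( SQLCA *sqlca )
{
    /* illustrative body */
}

db_register_a_callback( &sqlca,
                        DB_CALLBACK_WAIT,
                        (SQL_CALLBACK_PARM)&my_wait_proc );
```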
Related Information
The db_backup function provides another way to perform an online backup in Embedded SQL applications.
The Backup utility also makes use of this function.
You can also interface directly to the Backup utility using the Database Tools DBBackup function.
Write a program that uses the db_backup function only if your backup requirements are not satisfied by any
of the other backup methods.
Related Information
The Embedded SQL preprocessor generates calls to functions in the interface library or DLL. In addition to the
calls generated by the Embedded SQL preprocessor, a set of library functions is provided to make database
operations easier to perform.
Prototypes for these functions are included by the EXEC SQL INCLUDE SQLCA statement.
The DLL entry points are the same except that the prototypes have a modifier appropriate for DLLs.
You can declare the entry points in a portable manner using _esqlentry_, which is defined in sqlca.h. On
Windows platforms, it resolves to the value __stdcall.
Allocate a SQLDA.
Syntax
Parameters

numvar

The number of variable descriptors to allocate.
Returns
Pointer to a SQLDA if successful and returns the null pointer if there is not enough memory available.
Remarks
Allocates a SQLDA with descriptors for numvar variables. The sqln field of the SQLDA is initialized to numvar.
Space is allocated for the indicator variables, the indicator pointers are set to point to this space, and the
indicator value is initialized to zero. A null pointer is returned if memory cannot be allocated. Use this function
instead of the alloc_sqlda_noind function.
Related Information
Allocate a SQLDA without indicator variables.
Syntax
Parameters
numvar

The number of variable descriptors to allocate.
Returns
Pointer to a SQLDA if successful and returns the null pointer if there is not enough memory available.
Remarks
Allocates a SQLDA with descriptors for numvar variables. The sqln field of the SQLDA is initialized to numvar.
Space is not allocated for indicator variables; the indicator pointers are set to the null pointer. A null pointer is
returned if memory cannot be allocated.
Related Information
Back up a database.
Syntax
void db_backup(
SQLCA * sqlca,
int op,
int file_num,
unsigned long page_num,
struct sqlda * sqlda);
Parameters
sqlca

page_num

The page number of the database. A value in the range 0 to the maximum number of pages less 1.
sqlda
Authorization
Must be connected as a user with BACKUP DATABASE system privilege, or have the
SYS_RUN_REPLICATION_ROLE system role.
Remarks
Although this function provides one way to add backup features to an application, the recommended way to do
this task is to use the BACKUP statement.
DB_BACKUP_START
Must be called before a backup can start. Only one backup can be running per database at one time
against any given database server. Database checkpoints are disabled until the backup is complete
(db_backup is called with an op value of DB_BACKUP_END). If the backup cannot start, the SQLCODE is
SQLE_BACKUP_NOT_STARTED. Otherwise, the SQLCOUNT field of the sqlca is set to the database page
size. Backups are processed one page at a time.
DB_BACKUP_OPEN_FILE
Open the database file specified by file_num, which allows pages of the specified file to be backed up
using DB_BACKUP_READ_PAGE. Valid file numbers are 0 through DB_BACKUP_MAX_FILE for the root
database files, and 0 through DB_BACKUP_TRANS_LOG_FILE for the transaction log file. If the specified
file does not exist, the SQLCODE is SQLE_NOTFOUND. Otherwise, SQLCOUNT contains the number of
pages in the file, SQLIOESTIMATE contains a 32-bit value (POSIX time_t) that identifies the time that the
database file was created, and the operating system file name is in the sqlerrmc field of the SQLCA.
DB_BACKUP_READ_PAGE
Read one page of the database file specified by file_num. The page_num should be a value from 0 to one
less than the number of pages returned in SQLCOUNT by a successful call to db_backup with the
DB_BACKUP_OPEN_FILE operation. Otherwise, SQLCODE is set to SQLE_NOTFOUND. The sqlda
descriptor should be set up with one variable of type DT_BINARY or DT_LONGBINARY pointing to a buffer.
DT_BINARY data contains a two-byte length followed by the actual binary data, so the buffer must be two
bytes longer than the page size.
Note
This call makes a copy of the specified database page into the buffer, but it is up to the application to
save the buffer on some backup media.
DB_BACKUP_READ_RENAME_LOG
This action is the same as DB_BACKUP_READ_PAGE, except that after the last page of the transaction log
has been returned, the database server renames the transaction log and starts a new one.
If the database server is unable to rename the log at the current time (for example, in version 7.0.x or
earlier databases there may be incomplete transactions), the SQLE_BACKUP_CANNOT_RENAME_LOG_YET error
is set. In this case, do not use the page returned, but instead reissue the request until you receive
SQLE_NOERROR and then write the page. Continue reading the pages until you receive the
SQLE_NOTFOUND condition.
When you receive the SQLE_NOTFOUND condition, the transaction log has been backed up successfully
and the file has been renamed. The name for the old transaction file is returned in the sqlerrmc field of
the SQLCA.
Check the sqlda->sqlvar[0].sqlind value after a db_backup call. If this value is greater than zero, the last log
page has been written and the transaction log file has been renamed. The new name is still in
sqlca.sqlerrmc, but the SQLCODE value is SQLE_NOERROR.
Do not call db_backup again after this, except to close files and finish the backup. If you do, you get a
second copy of your backed up log file and you receive SQLE_NOTFOUND.
DB_BACKUP_CLOSE_FILE
Must be called when processing of one file is complete to close the database file specified by file_num.
DB_BACKUP_END
Must be called at the end of the backup. No other backup can start until this backup has ended.
Checkpoints are enabled again.
DB_BACKUP_PARALLEL_START
Starts a parallel backup. Like DB_BACKUP_START, only one backup can be running against a database at
one time on any given database server. Database checkpoints are disabled until the backup is complete
(until db_backup is called with an op value of DB_BACKUP_END). If the backup cannot start, you receive
SQLE_BACKUP_NOT_STARTED. Otherwise, the SQLCOUNT field of the sqlca is set to the database page
size.
The file_num parameter instructs the database server to rename the transaction log and start a new one
after the last page of the transaction log has been returned. If the value is non-zero, the transaction log is
renamed and restarted; if it is zero, the transaction log is left in place.
The page_num parameter informs the database server of the maximum size of the client's buffer, in
database pages. On the server side, the parallel backup readers try to read sequential blocks of pages. This
value lets the server know how large to allocate these blocks: passing a value of nnn lets the server know
that the client is willing to accept at most nnn database pages at a time from the server. The server may
return blocks of fewer than nnn pages if it is unable to allocate enough memory for blocks of nnn
pages. If the client does not know the size of database pages until after the call to
DB_BACKUP_PARALLEL_START, this value can be provided to the server with the DB_BACKUP_INFO
operation. This value must be provided before the first call to retrieve backup pages
(DB_BACKUP_PARALLEL_READ).
Note
If you are using db_backup to start a parallel backup, db_backup does not create writer threads. The
caller of db_backup must receive the data and act as the writer.
DB_BACKUP_INFO
This operation provides additional information to the database server about the parallel backup. The
file_num parameter indicates the type of information being provided, and the page_num parameter
provides the value. You can specify the following additional information with DB_BACKUP_INFO:
DB_BACKUP_INFO_CHKPT_LOG
This is the client-side equivalent to the WITH CHECKPOINT LOG option of the BACKUP statement. A
page_num value of DB_BACKUP_CHKPT_COPY indicates COPY, while the value
DB_BACKUP_CHKPT_NOCOPY indicates NO COPY. If this value is not provided it defaults to COPY.
DB_BACKUP_INFO_PAGES_IN_BLOCK
The page_num argument contains the maximum number of pages that should be sent back in one
block.
DB_BACKUP_PARALLEL_READ
This operation reads a block of pages from the database server. Before invoking this operation, use the
DB_BACKUP_OPEN_FILE operation to open all the files that you want to back up.
DB_BACKUP_PARALLEL_READ ignores the file_num and page_num arguments.
The sqlda descriptor should be set up with one variable of type DT_LONGBINARY pointing to a buffer. The
buffer should be large enough to hold binary data of the size of nnn pages (specified in the
DB_BACKUP_PARALLEL_START operation, or in a DB_BACKUP_INFO operation).
The server returns a sequential block of database pages for a particular database file. The page number of
the first page in the block is returned in the SQLCOUNT field. The file number that the pages belong to is
returned in the SQLIOESTIMATE field, and this value matches one of the file numbers used in the
DB_BACKUP_OPEN_FILE calls. The size of the data returned is available in the stored_len field of the
DT_LONGBINARY variable, and is always a multiple of the database page size. While the data returned by
this call contains a block of sequential pages for a given file, it is not safe to assume that separate blocks of
data are returned in sequential order, or that all of one database file's pages are returned before those of
another file.
An application should make repeated calls to this operation until the size of the read data is 0, or the value
of sqlda->sqlvar[0].sqlind is greater than 0. If the backup is started with transaction log renaming/
restarting, SQLERROR could be set to SQLE_BACKUP_CANNOT_RENAME_LOG_YET. In this case, do not
use the pages returned, but instead reissue the request until you receive SQLE_NOERROR, and then write
the data. The SQLE_BACKUP_CANNOT_RENAME_LOG_YET error may be returned multiple times and on
multiple pages. In your retry loop, add a delay so the database server is not slowed down by too many
requests. Continue reading the pages until either of the first two conditions are met.
The dbbackup utility uses the following algorithm. This is not C code, and does not include error checking.
sqlda->sqld = 1;
sqlda->sqlvar[0].sqltype = DT_LONGBINARY;
/* Allocate a LONGBINARY value for the page buffer. It MUST have    */
/* enough room to hold the requested number (128) of database pages */
sqlda->sqlvar[0].sqldata = allocated buffer

/* Open the server files needing backup */
for file_num = 0 to DB_BACKUP_MAX_FILE
    db_backup( ... DB_BACKUP_OPEN_FILE, file_num ... )
    if SQLCODE == SQLE_NOERROR
        /* The file exists */
        num_pages = SQLCOUNT
        file_time = SQLIOESTIMATE
        open backup file with name from sqlca.sqlerrmc
end for

/* Read pages from the server and write them locally */
while TRUE
    /* file_no and page_no are ignored */
    db_backup( &sqlca, DB_BACKUP_PARALLEL_READ, 0, 0, &sqlda );
    if SQLCODE != SQLE_NOERROR
        break;
    if buffer->stored_len == 0 || sqlda->sqlvar[0].sqlind > 0
        break;
    /* SQLCOUNT contains the starting page number of the block    */
    /* SQLIOESTIMATE contains the file number the pages belong to */
    write block of pages to appropriate backup file
end while

/* Close the server backup files */
for file_num = 0 to DB_BACKUP_MAX_FILE
    db_backup( ... DB_BACKUP_CLOSE_FILE, file_num ... )
end for

/* Shut down the backup */
db_backup( ... DB_BACKUP_END ... )

/* Clean up */
free page buffer
Related Information
Syntax
Parameters
sqlca
Returns
Remarks
Cancels the currently active database server request. This function checks to make sure a database server
request is active before sending the cancel request.
A non-zero return value does not mean that the request was canceled. There are a few critical timing cases
where the cancel request and the response from the database server cross. In these cases, the cancel simply
has no effect, even though the function still returns TRUE.
The db_cancel_request function can be called asynchronously. This function and db_is_working are the only
functions in the database interface library that can be called asynchronously using a SQLCA that might be in
use by another request.
If you cancel a request that is carrying out a cursor operation, the position of the cursor is indeterminate. You
must locate the cursor by its absolute position or close it, following the cancel.
Related Information
Syntax
Parameters
sqlca
Returns
Remarks
Data sent and fetched using the DT_FIXCHAR, DT_VARCHAR, DT_LONGVARCHAR, and DT_STRING types is in
the CHAR character set.
Related Information
Syntax
Parameters
sqlca
Returns
Remarks
Data sent and fetched using the DT_NFIXCHAR, DT_NVARCHAR, DT_LONGNVARCHAR, and DT_NSTRING host
variable types is in the NCHAR character set.
If the db_change_nchar_charset function is not called, all data is sent and fetched using the CHAR character
set. Typically, an application that wants to send and fetch Unicode data should set the NCHAR character set to
UTF-8.
If this function is called, the charset parameter is usually "UTF-8". The NCHAR character set cannot be set to
UTF-16.
In Embedded SQL, NCHAR, NVARCHAR and LONG NVARCHAR are described as DT_FIXCHAR, DT_VARCHAR,
and DT_LONGVARCHAR, respectively, by default. If the db_change_nchar_charset function has been called,
these types are described as DT_NFIXCHAR, DT_NVARCHAR, and DT_LONGNVARCHAR, respectively.
Related Information
Syntax
Parameters
sqlca
Returns
Server status as an unsigned short value, or 0 if no server can be found over shared memory.
Remarks
Returns an unsigned short value, which indicates status information about the local database server whose
name is name. If no server can be found over shared memory with the specified name, the return value is 0. A
non-zero value indicates that the local server is currently running.
If a null pointer is specified for name, information is returned about the default database server.
Each bit in the return value conveys some information. Constants that represent the bits for the various pieces
of information are defined in the sqldef.h header file. Their meaning is described below.
DB_ENGINE
Related Information
Syntax
Parameters
sqlca
Returns
Remarks
This function frees resources used by the database interface or DLL. You must not make any other library calls
or execute any Embedded SQL statements after db_fini is called. If an error occurs during processing, the error
code is set in SQLCA and the function returns 0. If there are no errors, a non-zero value is returned.
The db_fini function should not be called directly or indirectly from the DllMain function in a Windows Dynamic
Link Library. The DllMain entry point function is intended to perform only simple initialization and termination
tasks. Calling db_fini can create deadlocks and circular dependencies.
Related Information
Obtain information about the database interface or the server to which you are connected.
Syntax
Parameters
sqlca
The maximum length of the string value_buffer, including room for the terminating null character.
Returns
1 if successful; 0 otherwise.
This function is used to obtain information about the database interface or the server to which you are
connected.
DB_PROP_CLIENT_CHARSET
This property value gets the client character set (for example, "windows-1252").
DB_PROP_SERVER_ADDRESS
This property value gets the current connection's server network address as a printable string. The shared
memory protocol always returns the empty string for the address. The TCP/IP protocol returns non-empty
string addresses.
DB_PROP_DBLIB_VERSION
This property value gets the database interface library's version (for example, "17.0.10.1293").
Related Information
Syntax
Parameters
sqlca
Returns
This function initializes the database interface library. It must be called before any other library call is made
and before any Embedded SQL statement is executed. The resources the interface library requires for your
program are allocated and initialized by this call.
Use db_fini to free the resources at the end of your program. If there are any errors during processing, they are
returned in the SQLCA and 0 is returned. If there are no errors, a non-zero value is returned and you can begin
using Embedded SQL statements and functions.
Usually, this function should be called only once (passing the address of the global sqlca variable defined in the
sqlca.h header file). If you are writing a DLL or an application that has multiple threads using Embedded SQL,
call db_init once for each SQLCA that is being used.
Related Information
Syntax
Parameters
sqlca
Returns
1 if your application has a database request in progress that uses the given sqlca and 0 if there is no request in
progress that uses the given sqlca.
This function can be called asynchronously. This function and db_cancel_request are the only functions in the
database interface library that can be called asynchronously using a SQLCA that might be in use by another
request.
Related Information
Obtain information about all database servers on the local network that are listening on TCP/IP.
Syntax
Parameters
sqlca
Returns
1 if successful; 0 otherwise.
Provides programmatic access to the information displayed by the dblocate utility, listing all the database
servers on the local network that are listening on TCP/IP.
The callback function is called for each server found. If the callback function returns 0, db_locate_servers stops
iterating through servers.
The sqlca and callback_user_data passed to the callback function are those passed into db_locate_servers.
The second parameter is a pointer to an a_server_address structure. a_server_address is defined in sqlca.h,
with the following definition:
port_type
Related Information
Obtain information about all database servers on the local network that are listening on TCP/IP.
Syntax
Parameters
sqlca
Returns
1 if successful; 0 otherwise.
Remarks
Provides programmatic access to the information displayed by the dblocate utility, listing all the database
servers on the local network that are listening on TCP/IP, and provides a mask parameter used to select
addresses passed to the callback function.
The callback function is called for each server found. If the callback function returns 0, db_locate_servers_ex
stops iterating through servers.
The sqlca and callback_user_data passed to the callback function are those passed into db_locate_servers_ex.
The second parameter is a pointer to an a_server_address structure. a_server_address is defined in sqlca.h,
with the following definition:
port_type
● DB_LOOKUP_FLAG_NUMERIC
● DB_LOOKUP_FLAG_ADDRESS_INCLUDES_PORT
● DB_LOOKUP_FLAG_DATABASES
DB_LOOKUP_FLAG_NUMERIC ensures that addresses passed to the callback function are IP addresses,
instead of host names.
DB_LOOKUP_FLAG_ADDRESS_INCLUDES_PORT specifies that the address includes the TCP/IP port number
in the a_server_address structure passed to the callback function.
DB_LOOKUP_FLAG_DATABASES specifies that the callback function is called once for each database found, or
once for each database server found if the database server doesn't support sending database information
(version 9.0.2 and earlier database servers).
Related Information
Syntax
void db_register_a_callback(
    SQLCA * sqlca,
    a_db_callback_index index,
    ( SQL_CALLBACK_PARM ) callback );
sqlca
Remarks
If you do not register a DB_CALLBACK_WAIT callback, the default action is to do nothing. Your application
blocks, waiting for the database response. You must register a callback for the MESSAGE TO CLIENT
statement.
DB_CALLBACK_DEBUG_MESSAGE
The supplied function is called once for each debug message and is passed a null-terminated string
containing the text of the debug message. A debug message is a message that is logged to the LogFile file.
In order for a debug message to be passed to this callback, the LogFile connection parameter must be
used. The string normally has a newline character (\n) immediately before the terminating null character.
The prototype of the callback function is as follows:
DB_CALLBACK_START
This function is called just before a database request is sent to the server. DB_CALLBACK_START is used
only on Windows.
DB_CALLBACK_FINISH
This function is called after the response to a database request has been received by the DBLIB interface
DLL. DB_CALLBACK_FINISH is used only on Windows operating systems.
DB_CALLBACK_CONN_DROPPED
This function is called when the database server is about to drop a connection because of a liveness
timeout, through a DROP CONNECTION statement, or because the database server is being shut down.
The connection name conn_name is passed in to allow you to distinguish between connections. If the
connection was not named, it has a value of NULL.
DB_CALLBACK_WAIT
This function is called repeatedly by the interface library while the database server or client library is busy
processing your database request.
DB_CALLBACK_MESSAGE
This is used to enable the application to handle messages received from the server during the processing
of a request. Messages can be sent to the client application from the database server using the SQL
MESSAGE statement. Messages can also be generated by long running database server statements.
The msg_type parameter states how important the message is. You can handle different message types in
different ways. The following possible values for msg_type are defined in sqldef.h.
MESSAGE_TYPE_PROGRESS
The message type was PROGRESS. This type of message is generated by long-running database server
statements such as BACKUP DATABASE and LOAD TABLE.
The code parameter may provide a SQLCODE associated with the message; otherwise, the value is 0.
The length parameter tells you how long the message is.
DBLIB, ODBC, and C API clients can use the DB_CALLBACK_MESSAGE callback to receive progress
messages. For example, the Interactive SQL callback displays STATUS and INFO messages on the History
tab, while messages of type ACTION and WARNING go to a window. If an application does not register this
callback, there is a default callback, which causes all messages to be written to the server logfile (if
debugging is on and a logfile is specified). In addition, messages of type MESSAGE_TYPE_WARNING and
MESSAGE_TYPE_ACTION are more prominently displayed, in an operating system-dependent manner.
When a message callback is not registered by the application, messages sent to the client are saved to the
message log file when the LogFile connection parameter is specified. Also, ACTION or STATUS messages
sent to the client appear in a window on Windows operating systems and are logged to stderr on UNIX and
Linux operating systems.
DB_CALLBACK_VALIDATE_FILE_TRANSFER
This is used to register a file transfer validation callback function. Before allowing any transfer to take
place, the client library invokes the validation callback, if it exists. If the client data transfer is being
requested during the execution of indirect statements such as from within a stored procedure, the client
library will not allow a transfer unless the client application has registered a validation callback. The
conditions under which a validation call is made are described more fully below.
The file_name parameter is the name of the file to be read or written. The is_write parameter is 0 if a
read is requested (transfer from the client to the server), and non-zero for a write. The callback function
should return 0 if the file transfer is not allowed, non-zero otherwise.
For data security, the server tracks the origin of statements requesting a file transfer. The server
determines if the statement was received directly from the client application. When initiating the transfer
of data from the client, the server sends the information about the origin of the statement to the client
software. On its part, the Embedded SQL client library allows unconditional transfer of data only if the data
transfer is being requested due to the execution of a statement sent directly by the client application.
Otherwise, the application must have registered the validation callback described above, in the absence of
which the transfer is denied and the statement fails with an error. If the client statement invokes a stored
procedure that already exists in the database, the execution of that stored procedure is not considered a
client-initiated statement. However, if the client application explicitly creates a temporary stored
procedure, then the server treats the procedure's execution as client initiated. Similarly, if the client
application executes a batch statement, the batch statement is considered to be executed directly by the
client application.
Example
#include <stdio.h>
#include <stdlib.h>
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of parameter settings, each of the form
KEYWORD=value. For example:
"UID=DBA;PWD=password;DBF=c:\\db\\mydatabase.db"
Returns
Remarks
The database is started on an existing server, if possible. Otherwise, a new server is started.
If the database was already running or was successfully started, the return value is true (non-zero) and
SQLCODE is set to 0. Error information is returned in the SQLCA.
If a user ID and password are supplied in the parameters, they are ignored.
The privilege required to start and stop a database is set on the server command line using the -gd option.
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of parameter settings, each of the form
KEYWORD=value. For example:
"UID=DBA;PWD=password;DBF=c:\\db\\mydatabase.db"
Returns
Remarks
If the database server was already running or was successfully started, the return value is TRUE (non-zero) and
SQLCODE is set to 0. Error information is returned in the SQLCA.
The following call to db_start_engine starts a database server named server_2 and loads the specified
database:
db_start_engine( &sqlca,
    "START=dbsrv17 -n server_2 mydb.db;ForceStart=YES" );
Unless the ForceStart (FORCE) connection parameter is set to YES, the db_start_engine function attempts to
connect to a server before starting one, to avoid attempting to start a server that is already running.
If neither ForceStart (FORCE) nor the ServerName (Server) parameter is used, db_start_engine attempts to
connect to the default server; it does not pick up the server name from the -n option of the StartLine
(START) parameter.
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of parameter settings, each of the form
KEYWORD=value. For example:
"UID=DBA;PWD=password;DBF=c:\\db\\mydatabase.db"
Returns
Stop the database identified by DatabaseName (DBN) on the server identified by ServerName (Server). If
ServerName is not specified, the default server is used.
By default, this function does not stop a database that has existing connections. If Unconditional (UNC) is set
to yes, the database is stopped regardless of existing connections.
The privilege required to start and stop a database is set on the server command line using the -gd option.
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of parameter settings, each of the form
KEYWORD=value. For example:
"UID=DBA;PWD=password;DBF=c:\\db\\mydatabase.db"
Returns
Stops execution of the database server. The steps carried out by this function are:
● Look for a local database server that has a name that matches the ServerName (Server) parameter. If no
ServerName is specified, look for the default local database server.
● If no matching server is found, this function returns with success.
● Send a request to the server to tell it to checkpoint and shut down all databases.
● Unload the database server.
By default, this function does not stop a database server that has existing connections. If the
Unconditional=yes connection parameter is specified, the database server is stopped regardless of existing
connections.
A C program can use this function instead of spawning dbstop. A return value of TRUE indicates that there were
no errors.
The use of db_stop_engine is subject to the privileges set with the -gk server option.
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of parameter settings, each of the form
KEYWORD=value. For example:
"UID=DBA;PWD=password;DBF=c:\\db\\mydatabase.db"
Remarks
The algorithm used by this function is described in the troubleshooting connections topic.
The return value is TRUE (non-zero) if a connection was successfully established and FALSE (zero) otherwise.
Error information for starting the server, starting the database, or connecting is returned in the SQLCA.
Related Information
Syntax
Parameters
sqlca
A null-terminated string containing a semicolon-delimited list of connection parameters, each of the form
keyword=value. For example:
Remarks
This function disconnects the connection identified by the ConnectionName (CON) connection parameter, if
one is specified. All other parameters are ignored.
If no ConnectionName parameter is specified in the string, the unnamed connection is disconnected. This is
equivalent to the Embedded SQL DISCONNECT statement. The return value is TRUE if a connection was
successfully ended. Error information is returned in the SQLCA.
This function shuts down the database if it was started with the AutoStop=yes connection parameter and there
are no other connections to the database. It also stops the server if it was started with the AutoStop=yes
parameter and there are no other databases running.
Related Information
Determine if a server can be located, and optionally, if a successful connection to a database can be made.
Syntax
Parameters
sqlca
The connect_string is a normal connection string that may or may not contain server and database
information.
If connect_to_db is non-zero (TRUE), then the function attempts to connect to a database on a server. It
returns TRUE only if the connection string is sufficient to connect to the named database on the named
server.
If connect_to_db is zero, then the function only attempts to locate a server. It returns TRUE only if the
connection string is sufficient to locate a server. It makes no attempt to connect to the database.
Returns
TRUE (non-zero) if the server or database was successfully located; FALSE (zero) otherwise. Error information
for locating the server or database is returned in the SQLCA.
Remarks
This function can be used to determine if a server can be located, and optionally, if a successful connection to
a database can be made.
Related Information
Notify the server that the time has changed on the client.
Syntax
Parameters
sqlca
Remarks
This function permits clients to notify the server that the time has changed on the client. It recalculates the
time zone adjustment and sends it to the server. On Windows platforms, applications must call this function
when they receive the WM_TIMECHANGE message. This ensures that UTC timestamps remain consistent
across time changes, time zone changes, and daylight saving time changeovers.
Related Information
Syntax
Parameters
size
Returns
DBAlloc returns a void pointer to the allocated space, or NULL if there is insufficient memory available.
Use this function to allocate memory for Embedded SQL data areas.
Example
In this example, assume that the third column has sqltype DT_LONGVARCHAR. Since fill_sqlda allocates at
most 32767 bytes of memory for the data area, the data area is released and a new area is allocated for a
maximum size string of 100K bytes. The LONGVARCHAR data structure is initialized. A subsequent call to the
free_filled_sqlda function will release this new data area.
Syntax
Parameters
ptr
Remarks
Use this function to free memory for Embedded SQL data areas.
In this example, assume that the third column has sqltype DT_LONGVARCHAR. Since fill_sqlda allocates at
most 32767 bytes of memory for the data area, the data area is released and reallocated for a maximum size
string of 100K bytes. The LONGVARCHAR data structure is initialized. A subsequent call to the free_filled_sqlda
function will release this new data area.
Syntax
Parameters
ptr
Pointer to the memory allocated by a previous call to the DBAlloc or DBRealloc function.
size
Returns
DBRealloc returns a void pointer to the allocated space, or NULL if there is insufficient memory available.
Remarks
Use this function to reallocate memory for Embedded SQL data areas.
In this example, assume that the third column has sqltype DT_LONGVARCHAR. Since fill_sqlda allocates at
most 32767 bytes of memory for the data area, the data area is reallocated for a maximum size string of 100K
bytes. The LONGVARCHAR data structure is initialized. A subsequent call to the free_filled_sqlda function will
release this new data area.
Allocate space for each variable described in each descriptor of a SQLDA, changing all data types to
DT_STRING.
Syntax
Parameters
sqlda
Returns
sqlda if successful; NULL if there is not enough memory available.
This function is the same as fill_sqlda, except that it changes all the data types in sqlda to type DT_STRING.
Enough space is allocated to hold the string representation of the type originally specified by the SQLDA, up to
a maximum of maxlen - 1 bytes. The address of this memory is assigned to the sqldata field of the
corresponding descriptor. The maximum value for maxlen is 32767.
For DT_STRING variable types, the sqllen is updated to include the null-terminator.
Related Information
Syntax
Parameters
sqlda
Returns
sqlda if successful; NULL if there is not enough memory available.
Remarks
Allocates space for each variable described in each descriptor of sqlda, and assigns the address of this
memory to the sqldata field of the corresponding descriptor. Enough space is allocated for the database type
and length indicated in the descriptor.
For the DT_STRING variable type, the sqllen is updated to include the null-terminator. DT_STRING variables
are always null-terminated. Other variable types are not null-terminated.
Related Information
Allocate space for each variable described in each descriptor of a SQLDA, with special processing for LONG
data types.
Syntax
Parameters
sqlda
0 or FILL_SQLDA_FLAG_RETURN_DT_LONG
Returns
sqlda if successful; NULL if there is not enough memory available.
Allocates space for each variable described in each descriptor of sqlda, and assigns the address of this
memory to the sqldata field of the corresponding descriptor. Enough space is allocated for the database type
and length indicated in the descriptor.
For the DT_STRING variable type, the sqllen is updated to include the null-terminator. DT_STRING variables
are always null-terminated. Other variable types are not null-terminated.
Related Information
Free memory allocated to each sqldata pointer and the space allocated for the SQLDA itself.
Syntax
Parameters
sqlda
Free the memory allocated to each sqldata pointer and the space allocated for the SQLDA itself. Any null
pointer is not freed.
This should only be called if fill_sqlda, fill_sqlda_ex, or fill_s_sqlda was used to allocate the sqldata fields of the
SQLDA.
Calling this function causes free_sqlda to be called automatically, and so any descriptors allocated by
alloc_sqlda are freed.
Related Information
Syntax
Parameters
sqlda
Remarks
Free space allocated to this sqlda and free the indicator variable space, as allocated in fill_sqlda. Do not free
the memory referenced by each sqldata pointer.
Syntax
Parameters
sqlda
Remarks
Free space allocated to this sqlda. Do not free the memory referenced by each sqldata pointer. The indicator
variable pointers are ignored.
Related Information
Syntax
Parameters
sqlca
Returns
TRUE or FALSE indicating whether the string requires double quotes around it when it is used as a SQL
identifier.
Remarks
This function formulates a request to the database server to determine if quotes are needed. Relevant
information is stored in the sqlcode field.
Related Information
Syntax
Parameters
sqlda
Returns
An unsigned 32-bit integer value representing the amount of storage required to store any value for the variable
described in sqlda->sqlvar[varno]. For example, this includes the null-termination characters for DT_STRING
and DT_NSTRING, as well as the overhead for types such as DT_VARCHAR, DT_BINARY, DT_LONGVARCHAR,
DT_LONGNVARCHAR, and so on.
Remarks
For some types, there are also macros defined in the sqlca.h and sqlda.h header files that return the
amount of storage required to store a variable of the indicated size.
_BINARYSIZE( n )
Returns the amount of storage required to store a BINARY of the specified length.
_VARCHARSIZE( n )
Returns the amount of storage required to store a VARCHAR of the specified length.
DECIMALSTORAGE( n )
Returns the amount of storage required to store a DECIMAL of the specified length.
LONGBINARYSIZE( n )
Returns the amount of storage required to store a LONGBINARY of the specified length.
LONGNVARCHARSIZE( n )
Returns the amount of storage required to store a LONGNVARCHAR of the specified length.
LONGVARCHARSIZE( n )
Returns the amount of storage required to store a LONGVARCHAR of the specified length.
These macros can be used to determine how much storage to allocate for a variable.
Related Information
Syntax
Parameters
sqlda
Returns
An unsigned 32-bit integer value representing the length of the C string (type DT_STRING) that would be
required to hold the variable sqlda->sqlvar[varno] (no matter what its type is). It includes the null termination
character.
Example
In this example, all columns will be fetched as C strings (DT_STRING). Since sqlda_string_length includes the
null termination character, the sqllen field is set to exclude this character. The new sqltype must be set after
the call to sqlda_string_length.
Related Information
Syntax
Parameters
sqlca
Returns
A pointer to a string that contains an error message or NULL if no error was indicated.
Remarks
Returns a pointer to a string that contains an error message. The error message contains text for the error
code in the SQLCA. If no error was indicated, a null pointer is returned. The error message is placed in the
buffer supplied, truncated to length max if necessary.
All Embedded SQL statements must be preceded with EXEC SQL and end with a semicolon (;).
There are two groups of Embedded SQL statements. Standard SQL statements are used by simply placing
them in a C program enclosed with EXEC SQL and a semicolon (;). CONNECT, DELETE, SELECT, SET, and
UPDATE have additional formats only available in Embedded SQL. The additional formats fall into the second
category of Embedded SQL specific statements.
Several SQL statements are specific to Embedded SQL and can only be used in a C program.
Standard data manipulation and data definition statements can be used from Embedded SQL applications. In
addition, the following statements are specifically for Embedded SQL programming:
CLOSE statement [ESQL]
Close a cursor.
CONNECT statement [ESQL] [Interactive SQL]
Connect to a database.
DECLARE CURSOR statement [ESQL]
Declare a cursor.
DELETE statement (positioned) [ESQL] [SP]
Delete the row at the current position of a cursor.
DESCRIBE statement [ESQL]
Describe the variables in a SQLDA and place data into the SQLDA.
OPEN statement [ESQL]
Open a cursor.
PREPARE statement [ESQL]
Prepare a statement for execution.
SET SQLCA statement [ESQL]
Use a SQLCA other than the default global one.
The SQL Anywhere C application programming interface (API) is a data access API for the C / C++ languages.
The C API specification defines a set of functions, variables and conventions that provide a consistent database
interface independent of the actual database being used. Using the C API, your C / C++ applications have
direct access to SQL Anywhere database servers.
In this section:
The SQL Anywhere C Application Programming Interface (API) is a data access API for the C / C++ languages.
The C API specification defines a set of functions, variables and conventions that provide a consistent database
interface independent of the actual database being used.
Using the C API, your C / C++ applications have direct access to databases running on SQL Anywhere
database servers.
The C API is layered on top of the DBLIB package and it is implemented with Embedded SQL. Although it is not
a replacement for DBLIB, the C API simplifies the creation of applications using C and C++. You do not need an
advanced knowledge of Embedded SQL to use the C API.
The C API also simplifies the creation of C and C++ wrapper drivers for several interpreted programming
languages including PHP, Perl, Python, and Ruby.
API Distribution
The API is built as a dynamic link library (DLL) (dbcapi.dll) on Microsoft Windows systems and as a shared
object (libdbcapi.so) on UNIX and Linux systems. The DLL is statically linked to the DBLIB package of the
software version on which it is built. When the dbcapi.dll file is loaded, the corresponding dblibX.dll file is
loaded by the operating system. Applications using dbcapi.dll can either link directly to it or load it
dynamically.
Descriptions of the C API data types and entry points are provided in the sacapi.h header file which is located
in the sdk\dbcapi directory of your software installation.
Threading Support
The C API library is thread-unaware; the library does not perform any tasks that require mutual exclusion. To
allow the library to work in threaded applications, only one request is allowed on a single connection. With this
rule, the application is responsible for doing mutual exclusion when accessing any connection-specific
resource. This includes connection handles, prepared statements, and result set objects.
C API Examples
Examples that show how to use the C API can be found in the sdk\dbcapi\examples subdirectory of your
database software installation.
callback.cpp
This is an example of how to create a connection object and use it to connect to a database.
dbcapi_isql.cpp
This example shows how to fetch multiple result sets from a stored procedure.
preparing_statements.cpp
This example shows how to insert and retrieve a blob in one chunk.
send_retrieve_part_blob.cpp
This example shows how to insert a blob in chunks and how to retrieve it in chunks as well.
In this section:
The code to dynamically load the DLL is contained in the sacapidll.c source file which is located in the
sdk\dbcapi subdirectory of your software installation. Applications must use the sacapidll.h header file and
include the source code in sacapidll.c. You can use the sqlany_initialize_interface method to dynamically
load the DLL and look up the entry points. Examples are provided with the software installation.
Header Files
● sacapi.h
Remarks
The sacapi.h header file defines the SQL Anywhere C API entry points.
The sacapidll.h header file defines the C API library initialization and finalization functions. You must
include sacapidll.h in your source files and include the source code from sacapidll.c.
The SQL Anywhere C API reference is available in the SQL Anywhere - C API Reference at
https://help.sap.com/viewer/b86b137c54474b11b955a4d16358b208/LATEST/en-US.
You can call a function in an external library from a stored procedure or function.
You can call functions in a DLL under Windows operating systems and in a shared object on Unix.
The material that follows describes how to use the external function call interface. Sample external stored
procedures, plus the files required to build a DLL containing them, are located in the following folder:
%SQLANYSAMP17%\SQLAnywhere\ExternalProcedures.
Caution
External libraries called from procedures share the memory of the server. If you call an external library from
a procedure and the external library contains memory-handling errors, you can crash the server or corrupt
your database. Ensure that you thoroughly test your libraries before deploying them on production
databases.
The interface described here replaces an older interface, which has been deprecated. Libraries written to the
older interface (used before version 7.0.x) are still supported, but the new interface is recommended for all
new development. The new interface must be used on all UNIX and Linux platforms and on all 64-bit
platforms, including 64-bit Windows.
The database server includes a set of system procedures that make use of this capability, for example to send
MAPI email messages.
In this section:
You can create a SQL stored procedure that calls a C/C++ function in a library (a Dynamic Link Library (DLL) or
shared object) as follows:
You must have the CREATE EXTERNAL REFERENCE system privilege to create procedures or functions that
reference external libraries.
When you define a stored procedure or function in this way, you are creating a bridge to the function in the
external DLL. The stored procedure or function cannot perform any other tasks.
Similarly, you can create a SQL stored function that calls a C/C++ function in a library as follows:
In these statements, the EXTERNAL NAME clause indicates the function name and library in which it resides. In
the example, myFunction is the exported name of a function in the library, and myLibrary is the name of the
library (for example, myLibrary.dll or myLibrary.so).
The LANGUAGE clause indicates that the function is to be called in an external environment. The LANGUAGE
clause can specify one of C_ESQL32, C_ESQL64, C_ODBC32, or C_ODBC64. The 32 or 64 suffix indicates that
the function is compiled as a 32-bit or 64-bit application. The ODBC designation indicates that the application
uses the ODBC API. The ESQL designation indicates that the application could use the Embedded SQL API, the
C API, any other non-ODBC API, or no API at all.
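Putting the clauses together, declarations along the following lines could be used. The object names and parameter lists here are illustrative, reusing the myFunction and myLibrary names from the example above:

```sql
-- Sketch only: parameter lists and names are illustrative.
CREATE PROCEDURE MyProcedure( IN arg1 LONG VARCHAR )
EXTERNAL NAME 'myFunction@myLibrary'
LANGUAGE C_ESQL32;

CREATE FUNCTION MyFunction( IN arg1 LONG VARCHAR )
RETURNS LONG VARCHAR
EXTERNAL NAME 'myFunction@myLibrary'
LANGUAGE C_ESQL32;
```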
If the LANGUAGE clause is omitted, then the library containing the function is loaded into the address space of
the database server. When called, the external function will execute as part of the server. In this case, if the
function causes a fault, then the database server is terminated. For this reason, loading and executing
functions in an external environment is recommended. If a function causes a fault in an external environment,
the database server will continue to run.
The arguments in parameter-list must correspond in type and order to the arguments expected by the
library function. The library function accesses the procedure arguments using a special interface.
Any value or result set returned by the external function can be returned by the stored procedure or function to
the calling environment.
You can specify operating-system dependent calls, so that a procedure calls one function when run on one
operating system, and another function (presumably analogous) on another operating system. The syntax for
such calls involves prefixing the function name with the operating system name. The operating system
identifier must be unix for UNIX and Linux systems. An example follows.
If the list of functions does not contain an entry for the operating system on which the server is running, but the
list does contain an entry without an operating system specified, the database server calls the function in that
entry.
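As a sketch of such a list: the unix-prefixed entry would be called on UNIX and Linux systems, and the unprefixed entry on other platforms such as Windows. The function names, and the use of a semicolon to separate the entries, are assumptions for illustration only:

```sql
-- Sketch: 'unix:' prefixes the function used on UNIX/Linux;
-- the unprefixed entry is the fallback for other platforms.
CREATE FUNCTION MyFunction( IN arg1 LONG VARCHAR )
RETURNS LONG VARCHAR
EXTERNAL NAME 'unix:myFunction_unix@myLibrary;myFunction@myLibrary'
LANGUAGE C_ESQL32;
```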
Related Information
The interface that you use for functions written in C or C++ is defined by two header files named dllapi.h
and extfnapi.h, in the SDK\Include subdirectory of your software installation directory. These header files
handle the platform-dependent features of external function prototypes.
Function Prototypes
The name of the function must match that referenced in the CREATE PROCEDURE or CREATE FUNCTION
statement. Suppose the following CREATE FUNCTION statement had been executed.
#include "dllapi.h"
#include "extfnapi.h"
extern "C"
{
The function must return void, and must take as arguments a pointer to a structure used to call a set of
callback functions and a handle to the arguments provided by the SQL procedure.
Example
The following example implements a function called upstring that converts a string to uppercase letters and
returns the result.
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "dllapi.h"
#include "extfnapi.h"
extern "C"
{
a_sql_uint32 _entry extfn_use_new_api(void)
{
return(EXTFN_API_VERSION);
}
void _callback extfn_cancel(void *cancel_handle)
{
*(short *)cancel_handle = 1;
}
void _entry upstring(an_extfn_api *api, void *arg_handle)
{
short result;
short canceled;
an_extfn_value arg;
an_extfn_value retval;
a_sql_data_type data_type;
unsigned offset;
char *string;
canceled = 0;
api->set_cancel(arg_handle, &canceled);
result = api->get_value(arg_handle, 1, &arg);
if (canceled || result == 0 || arg.data == NULL)
{
return; // no parameter or parameter is NULL
}
data_type = arg.type & DT_TYPES;
string = (char *)malloc(arg.len.total_len + 1);
offset = 0;
for (; result != 0; )
{
if (arg.data == NULL) break;
memcpy(&string[offset], arg.data, arg.piece_len);
offset += arg.piece_len;
string[offset] = '\0';
if (arg.piece_len == 0) break;
if (canceled) break;
result = api->get_piece(arg_handle, 1, &arg, offset);
}
if (!canceled)
{
switch (data_type)
{
case DT_NSTRING:
case DT_NFIXCHAR:
case DT_NVARCHAR:
case DT_LONGNVARCHAR:
// leave UTF-8 encoded types unchanged in this example
break;
default:
{
// convert single-byte character data to uppercase in place
unsigned i;
for (i = 0; i < offset; i++)
{
string[i] = (char)toupper((unsigned char)string[i]);
}
break;
}
}
retval.type = arg.type;
retval.data = string;
retval.piece_len = retval.len.total_len = offset;
api->set_value(arg_handle, 0, &retval, 0);
}
free(string);
}
}
On Windows platforms, this C code must be compiled and linked such that the entry points are exported from
the Dynamic Link Library (DLL) with undecorated names. This is usually accomplished with a linker EXPORTS
file such as the following example:
EXPORTS
extfn_use_new_api
extfn_cancel
upstring
The following is an example of a SQL statement that calls the upstring function.
In this section:
To notify the database server that the external library is written using the external function call interface, your
external library must export the extfn_use_new_api function.
Syntax
Returns
The function returns an unsigned 32-bit integer. The returned value must be the interface version number,
EXTFN_API_VERSION, defined in extfnapi.h. A return value of 0 means that the old, deprecated interface is
being used.
Remarks
If the function is not exported by the library, the database server assumes that the old interface is in use. The
new interface must be used for all UNIX and Linux platforms and for all 64-bit platforms, including 64-bit
Microsoft Windows.
To notify the database server that the external library supports cancel processing, your external library must
export the extfn_cancel function.
Syntax
Parameters
cancel_handle
A handle used to set a flag indicating that the SQL statement has been canceled.
Remarks
This function is called asynchronously by the database server whenever the currently executing SQL statement
is canceled.
The function uses the cancel_handle to set a flag indicating to the external library functions that the SQL
statement has been canceled.
If the function is not exported by the library, the database server assumes that cancel processing is not
supported.
#include "dllapi.h"
extern "C"
{
void _callback extfn_cancel(void *cancel_handle)
{
*(short *)cancel_handle = 1;
}
}
If the extfn_post_load_library function is implemented and exposed in the external library, it is executed by the
database server after the external library has been loaded and the version check has been performed, and
before any other function defined in the external library is called.
Syntax
Remarks
This function is required only if there is a library-specific requirement to do library-wide setup before any
function within the library is called.
This function is called asynchronously by the database server after the external library has been loaded and the
version check has been performed, and before any other function defined in the external library is called.
Example
#include "dllapi.h"
extern "C"
{
void _entry extfn_post_load_library( void )
{
MessageBox(NULL, L"Library loaded", L"Application Notification", MB_OK |
MB_TASKMODAL);
}
}
Related Information
If the extfn_pre_unload_library function is implemented and exposed in the external library, it is executed by
the database server immediately before unloading the external library.
Syntax
Remarks
This function is required only if there is a library-specific requirement to do library-wide cleanup before the
library is unloaded.
This function is called asynchronously by the database server immediately before unloading the external
library.
Example
#include "dllapi.h"
extern "C"
{
void _entry extfn_pre_unload_library( void )
{
MessageBox(NULL, L"Library unloading", L"Application Notification",
MB_OK | MB_TASKMODAL);
}
}
Related Information
The an_extfn_api structure is used to communicate with the calling SQL environment. This structure is defined
by the header file named extfnapi.h, in the SDK\Include subdirectory of your software installation
directory.
Syntax
Properties
get_value
Use this callback function to get the specified parameter's value. The following example gets the value for
parameter 1.
get_piece
Use this callback function to get the next chunk of the specified parameter's value (if there are any). The
following example gets the remaining pieces for parameter 1.
set_value
Use this callback function to set the specified parameter's value. The following example sets the return
value (parameter 0) for a RETURNS clause of a FUNCTION.
an_extfn_value retval;
int ret = -1;
set_cancel
Use this callback function to establish a pointer to a variable that can be set by the extfn_cancel method.
The following is an example.
short canceled = 0;
api->set_cancel( arg_handle, &canceled );
Remarks
A pointer to the an_extfn_api structure is passed by the caller to your external function. Here is an example.
#include "dllapi.h"
#include "extfnapi.h"
extern "C"
{
void _entry upstring(an_extfn_api *api, void *arg_handle)
{
short result;
short canceled;
an_extfn_value arg;
canceled = 0;
api->set_cancel( arg_handle, &canceled );
Whenever you use any of the callback functions, you must pass back the argument handle that was passed to
your external function as the second parameter.
Related Information
The an_extfn_value structure is used to access parameter data from the calling SQL environment.
Syntax
Properties
data
A pointer to the data for this segment of the parameter.
piece_len
The length of this segment of the parameter. This is less than or equal to total_len.
total_len
The total length of the parameter. For strings, this represents the length of the string and does not include
a null terminator. This property is set after a call to the get_value callback function. This property is no
longer valid after a call to the get_piece callback function.
remain_len
When the parameter is obtained in segments, this is the length of the part that has not yet been obtained.
This property is set after each call to the get_piece callback function.
type
Indicates the type of the parameter. This is one of the Embedded SQL data types such as DT_INT,
DT_FIXCHAR, or DT_BINARY.
Remarks
Suppose that your external function interface was described using the following SQL statement.
an_extfn_value arg;
result = api->get_value( arg_handle, 1, &arg );
if( result == 0 || arg.data == NULL )
{
return; // no parameter or parameter is NULL
}
if( arg.type != DT_LONGVARCHAR )
{
return; // unexpected type of parameter
}
cmd = (char *)malloc( arg.len.total_len + 1 );
offset = 0;
for( ; result != 0; )
{
if( arg.data == NULL ) break;
memcpy( &cmd[offset], arg.data, arg.piece_len );
offset += arg.piece_len;
cmd[offset] = '\0';
if( arg.piece_len == 0 ) break;
result = api->get_piece( arg_handle, 1, &arg, offset );
}
Related Information
The an_extfn_result_set_info structure facilitates the return of result sets to the calling SQL environment.
Syntax
Properties
number_of_columns
The number of columns in each row of the result set.
Remarks
The following code fragment shows how to set the properties for objects of this type.
int columns = 2;
an_extfn_result_set_info rs_info;
rs_info.number_of_columns = columns;
rs_info.column_infos = col_info;
rs_info.column_data_values = col_data;
Related Information
Syntax
Properties
column_name
The name of the column.
column_type
Indicates the type of the column. This is one of the Embedded SQL data types such as DT_INT,
DT_FIXCHAR, or DT_BINARY.
column_width
Defines the maximum width for char(n), varchar(n) and binary(n) declarations and is set to 0 for all other
types.
column_index
Remarks
The following code fragment shows how to set the properties for objects of this type and how to describe the
result set to the calling SQL environment.
Related Information
The an_extfn_result_set_column_data structure is used to return the data values for columns.
Syntax
Properties
column_index
The ordinal number of the column.
append
Used to return the column value in chunks. Set to 1 when returning a partial column value; 0 otherwise.
Remarks
The following code fragment shows how to set the properties for objects of this type and how to return the
result set row to the calling SQL environment.
The external function call interface methods are used to send and retrieve data to the database server.
get_value Callback
The get_value callback function can be used to obtain the value of a parameter that was passed to the stored
procedure or function that acts as the interface to the external function. It returns 0 if not successful;
otherwise it returns a non-zero result. After calling get_value, the total_len field of the an_extfn_value structure
contains the length of the entire value. The piece_len field contains the length of the portion that was obtained
as a result of calling get_value. The piece_len field will always be less than or equal to total_len. When it is less
than, a second function get_piece can be called to obtain the remaining pieces. The total_len field is only valid
after the initial call to get_value. This field is overlaid by the remain_len field which is altered by calls to
get_piece. It is important to preserve the value of the total_len field immediately after calling get_value if you
plan to use it later on.
get_piece Callback
If the entire parameter value cannot be returned in one piece, then the get_piece function can be called
iteratively to obtain the remaining pieces of the parameter value.
The sum of all the piece_len values returned by both calls to get_value and get_piece will add up to the initial
value that was returned in the total_len field after calling get_value. After calling get_piece, the remain_len field,
which overlays total_len, represents the amount not yet obtained.
The following example shows the use of get_value and get_piece to obtain the value of a string parameter such
as a long varchar parameter.
To call the external function from SQL, use a statement like the following.
A sample implementation for the Windows operating system of the mystring function, written in C, follows:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>
#include "dllapi.h"
#include "extfnapi.h"
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved )
{
return TRUE;
}
extern "C"
{
a_sql_uint32 _entry extfn_use_new_api( void )
{
return( EXTFN_API_VERSION );
}
void _entry mystring( an_extfn_api *api, void *arg_handle )
{
short result;
an_extfn_value arg;
unsigned offset;
char *string;
result = api->get_value( arg_handle, 1, &arg );
if( result == 0 || arg.data == NULL )
{
return; // no parameter or parameter is NULL
}
string = (char *)malloc( arg.len.total_len + 1 );
offset = 0;
for( ; result != 0; ) {
if( arg.data == NULL ) break;
memcpy( &string[offset], arg.data, arg.piece_len );
offset += arg.piece_len;
string[offset] = '\0';
if( arg.piece_len == 0 ) break;
result = api->get_piece( arg_handle, 1, &arg, offset );
}
MessageBoxA( NULL, string,
"Application Notification",
MB_OK | MB_TASKMODAL );
free( string );
return;
}
}
The set_value callback function can be used to set the values of OUT parameters and the RETURNS result of a
stored function. Use an arg_num value of 0 to set the RETURNS value. The following is an example.
an_extfn_value retval;
retval.type = DT_LONGVARCHAR;
retval.data = result;
retval.piece_len = retval.len.total_len = (a_sql_uint32) strlen( result );
api->set_value( arg_handle, 0, &retval, 0 );
The append argument of set_value determines whether the supplied data replaces (false) or appends to (true)
the existing data. You must call set_value with append=FALSE before calling it with append=TRUE for the same
argument. The append argument is ignored for fixed-length data types.
To return NULL, set the data field of the an_extfn_value structure to NULL.
set_cancel Callback
External functions can get the values of IN or INOUT parameters and set the values of OUT parameters and the
RETURNS result of a stored function. There is a case, however, where the parameter values obtained may no
longer be valid or the setting of values is no longer necessary. This occurs when an executing SQL statement is
canceled. This may occur as the result of an application abruptly disconnecting from the database server. To
handle this situation, you can define a special entry point in the library called extfn_cancel. When this function
is defined, the server will call it whenever a running SQL statement is canceled.
The extfn_cancel function is called with a handle that can be used in any way you consider suitable. A typical
use of the handle is to indirectly set a flag to indicate that the calling SQL statement has been canceled.
The value of the handle that is passed can be set by functions in the external library using the set_cancel
callback function. For example:
Setting a static global "canceled" variable is inappropriate because it would be misinterpreted as meaning
that all SQL statements on all connections had been canceled, which is usually not the case. This is why the
set_cancel callback function is provided. Make sure to initialize the "canceled" variable before calling
set_cancel.
It is important to check the setting of the "canceled" variable at strategic points in your external function.
Strategic points would include before and after calling any of the external library call interface functions like
get_value and set_value. When the variable is set (as a result of extfn_cancel having been called), then the
external function can take appropriate termination action. A code fragment based on the earlier example
follows:
if( canceled )
{
free( string );
return;
}
Notes
The get_piece function for any given argument can only be called immediately after the get_value function for
the same argument.
Calling get_value on an OUT parameter returns the type field of the an_extfn_value structure set to the data
type of the argument, and returns the data field of the an_extfn_value structure set to NULL.
The header file extfnapi.h in the database server software SDK\Include folder contains some additional
notes.
The following table shows the conditions under which the functions defined in an_extfn_api return false:
Data types
You cannot use any of the date or time data types, and you cannot use the DECIMAL or NUMERIC data types
(including the money types).
To provide values for INOUT or OUT parameters, use the set_value function. To read IN and INOUT parameters,
use the get_value function.
After a call to get_value, the type field of the an_extfn_value structure can be used to obtain data type
information for the parameter. The following sample code fragment shows how to identify the type of the
parameter.
an_extfn_value arg;
a_sql_data_type data_type;
api->get_value( arg_handle, 1, &arg );
data_type = arg.type & DT_TYPES;
switch( data_type )
{
case DT_FIXCHAR:
case DT_VARCHAR:
case DT_LONGVARCHAR:
break;
default:
return;
}
UTF-8 Types
The UTF-8 data types such as NCHAR, NVARCHAR, LONG NVARCHAR, and NTEXT are passed as UTF-8
encoded strings. A function such as the Windows MultiByteToWideChar function can be used to convert a
UTF-8 string to a wide-character (Unicode) string.
Passing NULL
You can pass NULL as a valid value for all arguments. Functions in external libraries can supply NULL as a
return value for any data type.
To set a return value in an external function, call the set_value function with an arg_num parameter value of 0. If
set_value is not called with arg_num set to 0, the function result is NULL.
It is also important to set the data type of a return value for a stored function call. The following code fragment
shows how to set the return data type.
an_extfn_value retval;
retval.type = DT_LONGVARCHAR;
retval.data = result;
retval.piece_len = retval.len.total_len = (a_sql_uint32) strlen( result );
api->set_value( arg_handle, 0, &retval, 0 );
Related Information
The system procedure sa_external_library_unload can be used to unload an external library when the library is
not in use.
The procedure takes one optional parameter, a long varchar. The parameter specifies the library to be
unloaded and must match the file path string used to load the library. If no parameter is specified, all external
libraries not in use are unloaded.
This function is useful when developing a set of external functions because you do not have to shut down the
database server to install a newer version of the library.
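For example, assuming the library was loaded from a file named myLibrary.dll (as in the earlier examples), calls like the following could be used; the file name is illustrative:

```sql
-- Unload one library; the path must match the one used to load it.
CALL sa_external_library_unload( 'myLibrary.dll' );

-- Unload all external libraries that are not in use.
CALL sa_external_library_unload( );
```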
Seven external runtime environments are supported. These include Embedded SQL and ODBC applications
written in C/C++, and applications written in Java, JavaScript, Perl, PHP, or Microsoft .NET languages such as
Microsoft C# and Microsoft Visual Basic.
While it is possible to call compiled native functions in a dynamic link library or shared object that is loaded by
the database server into its own address space, the risk is that a fault in the native function crashes the
database server. Running compiled native functions outside the database server, in an external environment,
eliminates this risk to the server.
● The START EXTERNAL ENVIRONMENT and STOP EXTERNAL ENVIRONMENT statements are used to
start or stop an external environment on demand. These statements are optional since external
environments are automatically started and stopped when needed.
● The ALTER EXTERNAL ENVIRONMENT statement is used to set or modify the location of an external
environment.
● The COMMENT ON EXTERNAL ENVIRONMENT statement is used to add a comment for an external
environment.
● Once an external environment is set up to be used on the database server, you can then install objects into
the database and create stored procedures and functions that make use of these objects within the
external environment.
● The INSTALL EXTERNAL OBJECT statement is used to install a JavaScript, Perl or PHP external object (for
example, a Perl script) from a file or an expression into the database. Once the external objects are
installed in the database, they can be used within external stored procedure and function definitions.
● The COMMENT ON EXTERNAL ENVIRONMENT OBJECT statement is used to add a comment for an
external environment object.
● To remove an installed JavaScript, Perl or PHP external object from the database, you use the REMOVE
EXTERNAL OBJECT statement.
● The CREATE PROCEDURE and CREATE FUNCTION statements are used to create external stored
procedure and function definitions. They can be used like any other stored procedure or function in the
database. The database server, when encountering an external environment stored procedure or function,
automatically launches the external environment (if it has not already been started), and sends over
whatever information is needed to get the external environment to fetch the external object from the
database and execute it. Any result sets or return values resulting from the execution are returned as
needed.
● All external environments connect back to the database server over shared memory. If the database server
is configured to accept only encrypted connections, then the external environment will fail to connect. In
this case, make sure to include the -es option with the other database server start options to enable shared
memory connections.
In this section:
CLR stored procedures and functions can be called from within the database in the same manner as SQL
stored procedures.
A CLR stored procedure or function behaves the same as a SQL stored procedure or function except that the
code for the procedure or function is written in a Microsoft .NET language such as Microsoft C# or Microsoft
Visual Basic, and the execution of the procedure or function takes place outside the database server (that is,
within a separate Microsoft .NET executable).
There is only one instance of this Microsoft .NET executable per database. All connections executing CLR
functions and stored procedures use the same Microsoft .NET executable instance, but the namespaces for
each connection are separate. Statics persist for the duration of the connection, but are not shareable across
connections.
By default, the database server uses one CLR external environment for each database running on the database
server. The -sclr command-line option or the SingleCLRInstanceVersion database server option can be used to
request that one CLR external environment be used for all databases running on the database server. This
option must be specified before starting the CLR external environment on any database.
All external environments connect back to the database server over shared memory. If the database server is
configured to accept only encrypted connections, then the external environment will fail to connect. In this
case, make sure to include the -es option with the other database server start options to enable shared
memory connections.
Note that the ADO.NET data provider is used in the CLR external environment and it operates in automatic
commit mode by default using the connection of the caller. This will affect uncommitted transactions in the
client application. If this is an issue, then consider using a second connection for CLR external environment
calls.
To call an external CLR function or procedure, you define a corresponding stored procedure or function with an
EXTERNAL NAME string defining which DLL to load and which function within the assembly to call. You must
also specify LANGUAGE CLR when defining the stored procedure or function. An example declaration follows:
The following table lists the various CLR argument types and the corresponding suggested SQL data type:
CLR data type    Suggested SQL data type
bool             bit
byte             tinyint
short            smallint
int              int
long             bigint
decimal          numeric
float            real
double           double
DateTime         timestamp
If you see the error message Object reference not set to an instance of an object, check that you have specified
the correct path to the DLL.
The declaration of the DLL can be either a relative or absolute path. If the specified path is relative, then the
external Microsoft .NET executable searches the path, and other locations, for the DLL. The executable does
not search the Global Assembly Cache (GAC) for the DLL.
Like the existing Java stored procedures and functions, CLR stored procedures and functions can make server-
side requests back to the database, and they can return result sets. Also, like Java, any information output to
Console.Out and Console.Error is automatically redirected to the database server messages window.
To use CLR in the database, make sure the database server is able to locate and start the CLR external
environment executable. There are several versions of this executable.
.NET version    Executable name
3.5             dbextclr[VER_MAJOR]
4.x             dbextclr[VER_MAJOR]_v4.5
For portability to newer versions of the software, it is recommended that you use [VER_MAJOR] in the
LOCATION string rather than the current software release version number. The database server will replace
[VER_MAJOR] with the appropriate version number.
Make sure that the file corresponding to the .NET version you are using is present in the %SQLANY17%\Bin32
or %SQLANY17%\Bin64 folder. By default, the 64-bit database server will use the 64-bit version of the module
and the 32-bit database server will use the 32-bit version of the module. Ideally, your .NET application should
be targeted for the Any CPU platform.
You can verify if the database server is able to locate and start the selected CLR external environment
executable by executing the following statement:
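Based on the discussion that follows, the statement in question is START EXTERNAL ENVIRONMENT CLR:

```sql
-- Starts the CLR external environment, verifying that the database
-- server can locate and launch the executable
START EXTERNAL ENVIRONMENT CLR;
```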
If the database server fails to start CLR, then the database server is likely not able to locate the CLR external
environment executable. If you see a message that SAClrClassLoader has stopped working, then run
SetupVSPackage to install the current version of the .NET data provider.
The START EXTERNAL ENVIRONMENT CLR statement is not necessary other than to verify that the database
server can launch CLR executables. In general, making a CLR stored procedure or function call starts CLR
automatically.
Similarly, the STOP EXTERNAL ENVIRONMENT CLR statement is not necessary to stop an instance of CLR
since the instance automatically goes away when the connection terminates. The STOP EXTERNAL
ENVIRONMENT CLR statement releases the CLR instance for your connection. There are a few reasons why
you might do this.
● You are completely done with CLR and you want to free up some resources.
● You change CLR versions using the ALTER EXTERNAL ENVIRONMENT statement and the CLR instance is
currently running.
● You want to reload/restart the assemblies that have been loaded by the CLR.
Unlike the Perl, PHP, and Java external environments, the CLR environment does not require the installation of
anything in the database. As a result, you do not need to execute any INSTALL statements before using the CLR
external environment.
Here is an example of a function written in C# that can be run within an external environment.
When compiled into a dynamic link library, this function can be called from an external environment. An
executable image called dbextclr17.exe is started by the database server and it loads the dynamic link
library for you. Different versions of this executable are provided. For example, on Windows you may have both 32-bit and 64-bit versions.
To build this application into a dynamic link library using the Microsoft C# compiler, use a command like the
following. The source code for the above example is assumed to reside in a file called StaticTest.cs.
This command places the compiled code in a DLL called clrtest.dll. To call the compiled Microsoft C#
function, GetValue, a wrapper is defined as follows using Interactive SQL:
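A sketch of the wrapper, assuming the C# class in StaticTest.cs is named StaticTest (the class name is an assumption):

```sql
CREATE FUNCTION stc_get_value()
RETURNS INT
EXTERNAL NAME 'clrtest.dll::StaticTest.GetValue()' LANGUAGE CLR;
```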
For CLR, the EXTERNAL NAME string is specified in a single line of SQL. You may be required to include the
path to the DLL as part of the EXTERNAL NAME string so that it can be located. For dependent assemblies (for example, if myLib.dll calls functions in, or otherwise depends on, myOtherLib.dll), it is up to the .NET Framework to load the dependencies. The CLR external environment loads the specified assembly, but extra steps might be required to ensure that dependent assemblies are loaded.
One solution is to register all dependencies in the Global Assembly Cache (GAC) by using the Microsoft gacutil
utility installed with the Microsoft .NET Framework. For custom-developed libraries, gacutil requires that these
be signed with a strong name key before they can be registered in the GAC.
SELECT stc_get_value();
Each time the Microsoft C# function is called, a new integer result is produced. The sequence of values
returned is 1, 2, 3, and so on.
The following example illustrates how to make a server-side connection using C#. You create a connection
object in the usual way and then assign it a value using the SAServerSideConnection.Connection object.
using Sap.Data.SQLAnywhere;
using Sap.SQLAnywhere.Server;
class test
{
private static SAConnection _conn;
private static void GetConnection()
{
if( _conn == null )
{
_conn = SAServerSideConnection.Connection;
}
}
}
For more information and examples on using the CLR in the database support, how to make server-side
requests, and how to return result sets from a CLR function or stored procedure, refer to the samples located in
the %SQLANYSAMP17%\SQLAnywhere\ExternalEnvironments\CLR directory.
The database server software has long supported the loading of dynamic link libraries or shared objects
containing compiled C or C++ code into its address space, and the subsequent calling of functions in these
libraries. While being able to call native functions is very efficient, there can be serious consequences if the
native function misbehaves. In particular, if the native function enters an infinite loop, then the database server
can hang, and if the native function causes a fault, then the database server crashes.
Because of these consequences, it is much better to run compiled native functions outside of the database
server's address space, in an external environment. There are some key benefits to running a compiled native
function in an external environment:
● The database server does not hang or crash if the compiled native function misbehaves.
● The native function can be written to use ODBC, Embedded SQL (ESQL), or the C API and can make
server-side calls back into the database server without having to make a connection.
● The native function can return a result set to the database server.
● In the external environment, a 32-bit database server can communicate with a 64-bit compiled native
function and vice versa. This is not possible when the compiled native functions are loaded directly into the
address space of the database server. A 32-bit library can only be loaded by a 32-bit server and a 64-bit
library can only be loaded by a 64-bit server.
Running a compiled native function in an external environment instead of within the database server results in
a small performance penalty.
Also, the native function must use the external call interface to exchange argument and return values with the database server.
To run a compiled native C function in an external environment instead of within the database server, the stored
procedure or function is defined with the EXTERNAL NAME clause followed by the LANGUAGE attribute
specifying one of C_ESQL32, C_ESQL64, C_ODBC32, or C_ODBC64.
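For example, a 32-bit wrapper for the sample SimpleCDemo function shown later might look like the following (the DLL name is a placeholder):

```sql
CREATE FUNCTION SimpleCDemo( IN a INT, IN b INT, IN c INT, IN d INT )
RETURNS INT
EXTERNAL NAME 'SimpleCDemo@mycdemo.dll' LANGUAGE C_ESQL32;
```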
Unlike the Perl, PHP, JavaScript, and Java external environments, you do not install any source code or
compiled objects in the database. As a result, you do not need to execute any INSTALL statements before using
the ESQL and ODBC external environments.
All external environments connect back to the database server over shared memory. If the database server is
configured to accept only encrypted connections, then the external environment will fail to connect. In this
case, make sure to include the -es option with the other database server start options to enable shared
memory connections.
Here is an example of a function written in C++ that can be run within the database server or in an external
environment.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "extfnapi.h"
extern "C" __declspec( dllexport )
void SimpleCDemo( an_extfn_api *api, void *arg_handle )
{
short result;
an_extfn_value arg;
an_extfn_value retval;
int i, j, k;
int *intptr;
j = 1000;
k = 0;
for( i = 1; i <= 4; i++ )
{
result = api->get_value( arg_handle, i, &arg );
if( result == 0 || arg.data == NULL ) break;
if( (arg.type & DT_TYPES) != DT_INT ) break;
intptr = (int *) arg.data;
k += *intptr * j;
j = j / 10;
}
retval.type = DT_INT;
retval.data = (void*)&k;
retval.piece_len = retval.len.total_len =
(a_sql_uint32) sizeof( int );
api->set_value( arg_handle, 0, &retval, 0 );
return;
}
When compiled into a dynamic link library or shared object, this function can be called from an external
environment. An executable image called dbexternc17 is started by the database server and this executable
image loads the dynamic link library or shared object for you. Different versions of this executable are provided.
For example, on Windows you may have both 32-bit and 64-bit executables.
Either the 32-bit or 64-bit version of the database server can be used and either version can start 32-bit or 64-
bit versions of dbexternc17. This is one of the advantages of using the external environment. Once
dbexternc17 is started by the database server, it does not terminate until the connection has been terminated
or a STOP EXTERNAL ENVIRONMENT statement (with the correct environment name) is executed. Each
connection that does an external environment call will get its own copy of dbexternc17.
When the native function does not use ODBC, ESQL, or C API calls to make server-side requests, as is the case in the external C function shown above, then either C_ODBC32 or C_ESQL32 can be used for 32-bit applications, and either C_ODBC64 or C_ESQL64 for 64-bit applications.
To execute the sample compiled native function, execute the following statement.
SELECT SimpleCDemo(1,2,3,4);
To use server-side ODBC, the C/C++ code must use the default database connection. To get a handle to the
database connection, call get_value with an EXTFN_CONNECTION_HANDLE_ARG_NUM argument. The
argument tells the database server to return the current external environment connection rather than opening
a new one.
#include <windows.h>
#include <stdio.h>
#include "odbc.h"
#include "extfnapi.h"
If the above ODBC code is stored in the file extodbc.cpp, it can be built for Windows using the following
commands (assuming that the database server software is installed in the folder c:\sa17 and that Microsoft
Visual C++ is installed).
The following example creates a table, defines the stored procedure wrapper to call the compiled native
function, and then calls the native function to populate the table.
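A sketch of those statements; the table, procedure, entry-point, and DLL names are all placeholders:

```sql
-- Hypothetical names throughout; adjust to match your ODBC library
CREATE TABLE odbcTab( table_id INT, table_name CHAR(128) );
CREATE PROCEDURE OdbcDemo()
EXTERNAL NAME 'ServerSideFunction@extodbc.dll' LANGUAGE C_ODBC32;
CALL OdbcDemo();
```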
Similarly, to use server-side ESQL, the C/C++ code must use the default database connection. To get a handle
to the database connection, call get_value with an EXTFN_CONNECTION_HANDLE_ARG_NUM argument. The
argument tells the database server to return the current external environment connection rather than opening
a new one.
#include <windows.h>
#include <stdio.h>
#include "sqlca.h"
#include "sqlda.h"
#include "extfnapi.h"
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
return TRUE;
}
EXEC SQL INCLUDE SQLCA;
static SQLCA *_sqlc;
EXEC SQL SET SQLCA "_sqlc";
EXEC SQL WHENEVER SQLERROR { ret = _sqlc->sqlcode; };
extern "C" __declspec( dllexport )
void ServerSideFunction( an_extfn_api *api, void *arg_handle )
{
short result;
an_extfn_value arg;
an_extfn_value retval;
EXEC SQL BEGIN DECLARE SECTION;
char *stmt_text =
"INSERT INTO esqlTab "
"SELECT table_id, table_name "
"FROM SYS.SYSTAB";
If the above Embedded SQL code is stored in the file extesql.sqc, it can be built for Windows using the following commands (assuming that the database server software is installed in the folder c:\sa17 and that Microsoft Visual C++ is installed).
The following example creates a table, defines the stored procedure wrapper to call the compiled native
function, and then calls the native function to populate the table.
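Given the INSERT statement in the code above, the SQL might look like the following. The column types and the DLL and procedure names are assumptions:

```sql
-- esqlTab receives table_id and table_name from SYS.SYSTAB
CREATE TABLE esqlTab( table_id UNSIGNED BIGINT, table_name CHAR(128) );
CREATE PROCEDURE EsqlDemo()
EXTERNAL NAME 'ServerSideFunction@extesql.dll' LANGUAGE C_ESQL32;
CALL EsqlDemo();
```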
As in the previous examples, to use server-side C API calls, the C/C++ code must use the default database
connection. To get a handle to the database connection, call get_value with an
EXTFN_CONNECTION_HANDLE_ARG_NUM argument. The argument tells the database server to return the
current external environment connection rather than opening a new one. The following example shows the
framework for obtaining the connection handle, initializing the C API environment, and transforming the
connection handle into a connection object (a_sqlany_connection) that can be used with the C API.
#include <windows.h>
#include "sacapidll.h"
#include "extfnapi.h"
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
return TRUE;
}
extern "C" __declspec( dllexport )
void ServerSideFunction( an_extfn_api *extapi, void *arg_handle )
{
short result;
an_extfn_value arg;
SQLAnywhereInterface capi;
a_sqlany_connection * sqlany_conn;
unsigned int max_api_ver;
result = extapi->get_value( arg_handle,
EXTFN_CONNECTION_HANDLE_ARG_NUM,
&arg );
if( result == 0 || arg.data == NULL )
{
return;
}
if( !sqlany_initialize_interface( &capi, NULL ) )
{
return;
}
if( !capi.sqlany_init( "MyApp",
SQLANY_CURRENT_API_VERSION,
&max_api_ver ) )
{
sqlany_finalize_interface( &capi );
return;
}
sqlany_conn = capi.sqlany_make_connection( arg.data );
// processing code goes here
capi.sqlany_fini();
sqlany_finalize_interface( &capi );
return;
}
If the above C code is stored in the file extcapi.c, it can be built for Windows using the following commands
(assuming that the database server software is installed in the folder c:\sa17 and that Microsoft Visual C++ is
installed).
The following example defines the stored procedure wrapper to call the compiled native function, and then
calls the native function.
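A sketch of the wrapper and call; the procedure and DLL names are placeholders:

```sql
CREATE PROCEDURE CapiDemo()
EXTERNAL NAME 'ServerSideFunction@extcapi.dll' LANGUAGE C_ESQL32;
CALL CapiDemo();
```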
The LANGUAGE attribute in the above example specifies C_ESQL32. For 64-bit applications, you would use
C_ESQL64. You must use the Embedded SQL language attribute since the C API is built on the same layer
(library) as ESQL.
As mentioned earlier, each connection that does an external environment call will start its own copy of
dbexternc17. This executable application is loaded automatically by the server the first time an external
environment call is made. However, you can use the START EXTERNAL ENVIRONMENT statement to preload dbexternc17 before the first call is made.
Another case where preloading dbexternc17 is useful is when you want to debug your external function. You
can use the debugger to attach to the running dbexternc17 process and set breakpoints in your external
function.
The STOP EXTERNAL ENVIRONMENT statement is useful when updating a dynamic link library or shared
object. It will terminate the native library loader, dbexternc17, for the current connection thereby releasing
access to the dynamic link library or shared object. If multiple connections are using the same dynamic link
library or shared object then each of their copies of dbexternc17 must be terminated. The appropriate
external environment name must be specified in the STOP EXTERNAL ENVIRONMENT statement. Here is an
example of the statement.
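For example, for a 32-bit ESQL external environment:

```sql
STOP EXTERNAL ENVIRONMENT C_ESQL32;
```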
To return a result set from an external function, the compiled native function must use the external call
interface.
The following code fragment shows how to set up a result set information structure. It contains a column
count, a pointer to an array of column information structures, and a pointer to an array of column data value
structures. The example also uses the C API.
an_extfn_result_set_info rs_info;
int columns = capi.sqlany_num_cols( sqlany_stmt );
an_extfn_result_set_column_info *col_info =
(an_extfn_result_set_column_info *)
malloc( columns * sizeof(an_extfn_result_set_column_info) );
an_extfn_result_set_column_data *col_data =
(an_extfn_result_set_column_data *)
malloc( columns * sizeof(an_extfn_result_set_column_data) );
rs_info.number_of_columns = columns;
rs_info.column_infos = col_info;
rs_info.column_data_values = col_data;
The following code fragment shows how to describe the result set. It uses the C API to obtain column
information for a SQL query that was executed previously by the C API. The information that is obtained from
the C API for each column is transformed into a column name, type, width, index, and null value indicator that
are used to describe the result set.
a_sqlany_column_info info;
for( int i = 0; i < columns; i++ )
{
if( sqlany_get_column_info( sqlany_stmt, i, &info ) )
{
// set up a column description
col_info[i].column_name = info.name;
col_info[i].column_type = info.native_type;
switch( info.native_type )
{
case DT_DATE: // DATE is converted to string by C API
case DT_TIME: // TIME is converted to string by C API
case DT_TIMESTAMP: // TIMESTAMP is converted to string by C API
case DT_DECIMAL: // DECIMAL is converted to string by C API
col_info[i].column_type = DT_FIXCHAR;
break;
case DT_FLOAT: // FLOAT is converted to double by C API
col_info[i].column_type = DT_DOUBLE;
break;
default:
break;
}
// the remaining column description fields (width, index,
// null indicator) are derived from info and filled in here
}
}
Once the result set has been described, the result set rows can be returned. The following code fragment
shows how to return the rows of the result set. It uses the C API to fetch the rows for a SQL query that was
executed previously by the C API. The rows returned by the C API are sent back, one at a time, to the calling
environment. The array of column data value structures must be filled in before returning each row. The
column data value structure consists of a column index, a pointer to a data value, a data length, and an append
flag.
Java methods can be called from the database in the same manner as SQL stored procedures.
A Java method behaves the same as a SQL stored procedure or function except that the code for the
procedure or function is written in Java and the execution of the Java method takes place outside the database
server (that is, within a Java VM environment).
There can be one instance of the Java VM for each database or there can be one instance of the Java VM for
each database server (that is, all databases use the same instance).
● A copy of the Java Runtime Environment must be installed on the database server computer.
● The database server must be able to locate the Java executable (the Java VM).
● All external environments connect back to the database server over shared memory. If the database server
is configured to accept only encrypted connections, then the external environment will fail to connect. In
this case, make sure to include the -es option with the other database server start options to enable shared
memory connections.
To use Java in the database, make sure that the database server is able to locate and start the Java executable.
Verify that this can be done by executing:
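The statement, by analogy with the other external environments, is START EXTERNAL ENVIRONMENT JAVA:

```sql
-- Starts the Java VM, verifying that the database server can locate it
START EXTERNAL ENVIRONMENT JAVA;
```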
If the database server fails to start Java then the problem probably occurs because the database server is not
able to locate the Java executable. In this case, execute an ALTER EXTERNAL ENVIRONMENT statement to
explicitly set the location of the Java executable. Make sure to include the executable file name.
For example:
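A sketch of the statement; the path shown is a placeholder for your own Java installation:

```sql
ALTER EXTERNAL ENVIRONMENT JAVA
LOCATION 'c:\\jdk1.8.0\\bin\\java.exe';
```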
You can query the location of the Java VM that the database server will use by executing the following SQL
query:
SELECT db_property('JAVAVM');
Similarly, the STOP EXTERNAL ENVIRONMENT JAVA statement is not necessary to stop an instance of Java, since the instance goes away automatically when all connections to the database have terminated. However, if you are completely done with Java and want to free up some resources, the STOP EXTERNAL ENVIRONMENT JAVA statement decrements the usage count for the Java VM.
Once you have verified that the database server can start the Java VM executable, the next thing to do is to
install the necessary Java class code into the database. Do this by using the INSTALL JAVA statement. For
example, you can execute the following statement to install a