Attunity Manual
March 2008
AIS User Guide and Reference, Version 5.1
AIS5100
Copyright March 2008, Attunity Ltd. All rights reserved.
Primary Authors: David Goldman, Andre Liss, Jeanne Wiegelmann
Contributors: Yishai Hadas, Dror Harari, Tzachi Nissim, Adeeb Massad, Costi Zaboura, Sami Zeitoun, Gadi Farhat, Arie Kremer

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited. The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose. If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation and technical data, shall be subject to the licensing restrictions set forth in the applicable Attunity license agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial Computer Software - Restricted Rights (June 1987).
Attunity Ltd., 70 Blanchard Road, Burlington, MA 01803

The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim liability for any damages caused by such use of the Programs. Attunity is a registered trademark of Attunity Ltd and/or its affiliates. Other names may be trademarks of their respective owners. The Programs may provide links to Web sites and access to content, products, and services from third parties. Attunity is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated with the use of such content. If you choose to purchase any products or services from a third party, the relationship is directly between you and the third party. Attunity is not responsible for: (a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and warranty obligations related to purchased products or services. Attunity is not responsible for any loss or damage of any sort that you may incur from dealing with any third party.
Contents
Send Us Your Comments ................................ li
Preface ............................................... liii
    Audience .......................................... liii
    Organization ...................................... liv
    Related Documentation ............................. liv
    Conventions ....................................... liv
Part I ................................................ 1
Binding Configuration
Binding Configuration Overview ........ 3-1
    Server Binding ........ 3-1
    Client Binding ........ 3-2
Setting up Bindings in Attunity Studio ........ 3-2
    Adding Bindings ........ 3-2
    Editing Bindings ........ 3-3
        Setting the Binding Environment in Attunity Studio ........ 3-3
        Defining Remote Machines in a Binding ........ 3-4
Binding Syntax ........ 3-6
    <remoteMachines> Statement ........ 3-7
    <remoteMachine> Statement ........ 3-7
    <adapters> Statement ........ 3-8
    <adapter> Statement ........ 3-8
        <config> Statement ........ 3-8
    <datasources> Statement ........ 3-9
    <datasource> Statement ........ 3-9
        <config> Statement ........ 3-11
Sample Binding ........ 3-11
Environment Properties ........ 3-12
    Debug ........ 3-13
    General ........ 3-14
    Language ........ 3-15
    Modeling ........ 3-16
    ODBC ........ 3-17
    OLE DB ........ 3-17
    Optimizer ........ 3-18
    Parallel Processing ........ 3-20
    Query Processor ........ 3-20
    Temp Features ........ 3-22
    Transaction ........ 3-23
    Tuning ........ 3-23
    XML ........ 3-24
    Languages ........ 3-25
        ARA (Arabic) ........ 3-25
        ENG (English) ........ 3-26
        FR (French) ........ 3-26
        GER (German) ........ 3-26
        GREEK (Greek) ........ 3-26
        HEB (Hebrew) ........ 3-26
        JPN (Japanese) ........ 3-26
        KOR (Korean) ........ 3-26
        SCHI (Simple Chinese) ........ 3-26
        SPA (Spanish) ........ 3-27
        TCHI (Traditional Chinese) ........ 3-27
        TUR (Turkish) ........ 3-27
    Sample Environment Properties ........ 3-27
Setting up Daemons
Daemons ........ 4-1
Defining Daemons at Design Time ........ 4-1
    Adding a Daemon ........ 4-2
    Editing a Daemon ........ 4-3
        Control ........ 4-3
        Logging ........ 4-5
        Security ........ 4-8
        Administering Selected User Only Lists ........ 4-9
Reloading Daemon Configurations at Runtime ........ 4-10
    Editing Daemon Configurations ........ 4-10
Checking the Daemon Status ........ 4-10
    Checking the Daemon Status with Attunity Studio ........ 4-10
Starting and Stopping Daemons ........ 4-11
    Starting a Daemon in Attunity Studio ........ 4-11
    Shutting Down a Daemon in Attunity Studio ........ 4-11
Sample Daemon Configuration ........ 4-11
Adding and Editing Workspaces ........ 4-12
    Adding a Workspace ........ 4-12
    Editing a Workspace ........ 4-15
        General ........ 4-16
        Server Mode ........ 4-19
        Security ........ 4-22
    Selecting a Binding Configuration ........ 4-24
    Disabling a Workspace ........ 4-25
    Setting Workspace Authorization ........ 4-25
Managing Metadata
Data Source Metadata Overview ........ 5-1
Importing Metadata ........ 5-2
    Importing Metadata Using an Attunity Studio Import Wizard ........ 5-2
    Importing Metadata Using a Standalone Utility ........ 5-3
Managing Metadata ........ 5-3
Using Attunity Metadata with AIS Supported Data Sources ........ 5-4
    Extended Native Data Source Metadata ........ 5-4
    Native Metadata Caching ........ 5-4
Procedure Metadata Overview ........ 5-5
Importing Procedure Metadata Using the Import Wizard ........ 5-5
Procedure Metadata Statements ........ 5-6
    The <procedure> Statement ........ 5-6
        Syntax ........ 5-6
        <procedure> Attributes ........ 5-7
    The <parameters> Statement ........ 5-8
        Syntax ........ 5-8
    The <dbCommand> Statement ........ 5-8
        Syntax ........ 5-8
    The <fields> Statement ........ 5-9
        Syntax ........ 5-9
    The <field> Statement ........ 5-9
        Syntax ........ 5-9
        <field> Attributes ........ 5-10
    The <group> Statement ........ 5-10
        Syntax ........ 5-10
        <group> Attributes ........ 5-11
    The <variant> Statement ........ 5-11
        Variant without selector ........ 5-11
        Variant with selector ........ 5-12
        ADD Syntax ........ 5-13
        Usage Notes ........ 5-13
        Resolving Variants in Attunity Studio ........ 5-13
    The <case> Statement ........ 5-14
        Syntax ........ 5-14
        <case> Attributes ........ 5-14
ADD Supported Data Types ........ 5-15
ADD Syntax ........ 5-23
    The <table> Statement ........ 5-24
        Syntax ........ 5-24
        Table Attributes ........ 5-25
    The <dbCommand> Statement ........ 5-29
        Syntax ........ 5-29
        Examples ........ 5-29
    The <fields> Statement ........ 5-29
        Syntax ........ 5-30
    The <field> Statement ........ 5-30
        Syntax ........ 5-30
        Example ........ 5-30
        Field Attributes ........ 5-30
    The <group> Statement ........ 5-35
        Syntax ........ 5-35
        Example ........ 5-35
        Group Attributes ........ 5-35
    The <variant> Statement ........ 5-38
        Variant without selector ........ 5-38
        Variant with selector ........ 5-39
        Usage Notes ........ 5-40
        Resolving Variants in Attunity Studio ........ 5-40
        Variant Attributes ........ 5-40
    The <case> Statement ........ 5-41
        Syntax ........ 5-41
        Case Attributes ........ 5-41
    The <keys> Statement ........ 5-42
        Syntax ........ 5-42
        Example ........ 5-43
    The <key> Statement ........ 5-43
        Syntax ........ 5-43
        Key Attributes ........ 5-43
    The <segments> Statement ........ 5-46
        Syntax ........ 5-46
    The <segment> Statement ........ 5-46
        Syntax ........ 5-46
        Segment Attributes ........ 5-46
    The <foreignKeys> Statement ........ 5-47
        Syntax ........ 5-47
    The <foreignKey> Statement ........ 5-47
        Syntax ........ 5-48
        Example ........ 5-48
        foreignKey Attributes ........ 5-48
    The <primaryKey> Statement ........ 5-49
        Syntax ........ 5-49
    The <pKeySegments> Statement ........ 5-49
        Syntax ........ 5-49
        Example ........ 5-49
        pKeySegment Attributes ........ 5-50
        Update Button ........ 6-11
    Modelling Tab ........ 6-12
Importing Data Source Metadata with the Attunity Import Wizard ........ 6-14
    Starting the Import Process ........ 6-14
    Selecting the Input Files ........ 6-15
    Applying Filters ........ 6-17
    Selecting Tables ........ 6-19
    Import Manipulation ........ 6-19
        Import Manipulation Screen ........ 6-20
        Field Manipulation Screen ........ 6-21
    Metadata Model Selection ........ 6-27
    Import the Metadata ........ 6-29
Working with Application Adapter Metadata ........ 6-30
    Adapter Metadata General Properties ........ 6-31
    Adapter Metadata Schema Records ........ 6-32
        Editing an Existing Schema Definition ........ 6-34
    Adapter Metadata Interactions ........ 6-34
        Editing an Existing Interaction ........ 6-35
        Interaction Advanced Tab ........ 6-36
Working with Procedure Metadata ........ 6-36
    Manually Creating Procedure Metadata ........ 6-37
    Managing Procedure Metadata ........ 6-37
    Importing Procedure Metadata ........ 6-37
Handling Arrays
Overview of Handling Arrays ........ 7-1
Representing Metadata ........ 7-1
Methods of Handling Arrays ........ 7-5
    Columnwise Normalization ........ 7-5
    Virtual Tables ........ 7-7
    Virtual Views ........ 7-10
    Sequential Flattening (Bulk Load of Array Data) ........ 7-11
    ADO/OLE DB Chapters ........ 7-14
        Chapter Handling in Query and Database Adapters ........ 7-16
    XML ........ 7-17
Using SQL
Overview of Using SQL ........ 8-1
Batching SQL Statements ........ 8-1
Hierarchical Queries ........ 8-2
    Generating Hierarchical Results Using SQL ........ 8-3
    Accessing Hierarchical Data Using SQL ........ 8-4
        Examples ........ 8-4
    Flattening Hierarchical Data Using SQL ........ 8-5
        Using an Alias ........ 8-6
        Examples ........ 8-6
    Using Virtual Tables to Represent Hierarchical Data ........ 8-9
Creating Virtual Tables ........................................................................................................... Hierarchical Queries From an Application ................................................................................. Drill-down Operations in an ADO Application.................................................................. Drill-down Operations in an ODBC Application................................................................ ODBC Drill-down Operations Using RDO .......................................................................... ODBC Drill-down Operations Using C ................................................................................ Drill-down Operations in a Java Application...................................................................... Copying Data From One Table to Another ...................................................................................... Passthru SQL.......................................................................................................................................... For a Specific SQL Statement......................................................................................................... Via ADO .................................................................................................................................... Via RDO and DAO .................................................................................................................. For all SQL During a Session......................................................................................................... Via ADO/OLE DB ................................................................................................................... Via ODBC .................................................................................................................................. 
Passthru Queries as Part of an SQL Statement ........................................................................... Writing Queries Using SQL ................................................................................................................ Writing Efficient SQL ..................................................................................................................... Locking Considerations ....................................................................................................................... Locking Modes ................................................................................................................................ Optimistic Locking .................................................................................................................. Pessimistic Locking.................................................................................................................. No Locking................................................................................................................................ ODBC Locking Considerations ..................................................................................................... ADO Locking Considerations ....................................................................................................... Managing the Execution of Queries over Large Tables ................................................................. Optimizing Outer Joins........................................................................................................................ Limitations ....................................................................................................................................... Query Optimization........................................................................................................................ 
Property ............................................................................................................................................ Changing the Property............................................................................................................
8-10 8-10 8-11 8-11 8-12 8-14 8-16 8-17 8-17 8-18 8-18 8-19 8-20 8-20 8-20 8-21 8-22 8-22 8-23 8-23 8-23 8-23 8-23 8-23 8-24 8-24 8-25 8-26 8-26 8-26 8-26
Logging Web Service Activities
    Changing the Log File Location
    Changing the Error Message Level
    Changing the Error Message Format
Part II Attunity Connect
10
11
12
13
14
15
16
Setting Up Adapters
Setting up Adapters Overview
Working with Adapters
    Adding Application Adapters
    Configuring Application Adapters
    Testing Application Adapters
17
Part III Attunity Stream
18
19
20
Production Metadata Change Procedure
Migration from XML-based CDC
21
Part IV Attunity Federate
22
23
Metadata Considerations
Defining Tables
Creating Synonyms
Defining Stored Procedures
Creating Views
Using a Virtual Database
24
Part V Attunity Studio
25
Part VI
26
27
28
Managing Security
Overview of Attunity Security
Managing Design Time Security
    Local Access to AIS Design-Time Resources
    Remote Access to AIS Design-Time Resources
        Design Roles
        Assigning Design Roles
    Password Handling in Attunity Studio
        Setting Up the Password Caching Policy in Attunity Studio
        Assigning Authorization Rights to a Workspace
        Setting Up a Master Password for a User
Managing Runtime Security
    User Profiles
        Setting a Master Password for a User Profile
        Using a Client User Password
        Using a Server User Profile
    Managing a User Profile in Attunity Studio
        Adding a User
        Add Authenticators
        Add Encryption Keys
        Editing a User Profile
        Remove an Authenticator or Encryption Key
    Client Authentication
        Client Authentication for Thin Clients
        Client Authentication for Fat Clients
    Client Authorization and Access Restriction
        Restricting Access to a User by Login User Name
        Restricting Access to Data with a Virtual Database
    Transport Encryption
    Encrypting Network Communications
        Setting a Client Encryption Protocol
        Configuring Encrypted Communication
        Configuring the Encryption Key on the Server Machine
    Firewall Support
    Accessing a Server through a Firewall
        Selecting a Port Range for Workspace Servers
        Accessing a Server Using Fixed NAT
    Dynamic Credentials
        Providing Credentials in the Connection String
        Interactively Prompting for Credentials
    Setting Up Impersonation
        Setting Up Impersonation for DB2
    Granting Daemon Administration Rights to Users
    Granting Workspace Administration Rights to Users
29
Backing Up AIS
Overview of the AIS Backup Process
Backing Up and Restoring AIS Server Installation
Backing Up and Restoring AIS Server Metadata
Backing Up and Restoring AIS Server Scripts
Backing Up and Restoring AIS Server Data
Backing Up and Restoring AIS Studio Metadata
30
Transaction Support
Overview
Using Attunity Connect as a Stand-alone Transaction Coordinator
Attunity Connect Data Source Driver Capabilities
    Data Sources That Do Not Support Transactions
    Data Sources with One-Phase Commit Capability
    Data Sources with Two-Phase Commit Capability
    Relational Database Procedures
Distributed Transactions
    Transaction Log File
    CommitConfirm Table
Recovery
    Recovery Utility Toolbar
Platform Specific Information
31
Troubleshooting in AIS
Troubleshooting Overview
Product Flow Maps
    Local Data Access Scenario
    Remote Data Access Scenario
Using the Product Flow Maps for Troubleshooting
    SQL Application/SQL API Issues (A1, B1)
        ADO/OLEDB
        JDBC
    SQL API Issues/Query Processor Issues (A2, B2F)
Troubleshooting Methods
    Using the NAV_UTIL CHECK SERVER Utility
    Using the NAV_UTIL CHECK DATASOURCE Utility
    Using Trace Log Files
        Log Traces
    Using Extended Logging Options
        New Log Entries
    Identifying Nodes
    Configuring the Optimizer Trace File
    Reading the Log
    Configuring the Extended Logging Option
    Configuring Advanced Environment Parameters
Common Errors and Solutions
Part VII
32
Utilities
33
34
36
37
    Executing SQL Statements
    NavSQL Commands
EXPORT
GEN_ARRAY_TABLES
IMPORT
IRPCDCMD
LOCAL_COPY
PASSWORD
PROTOGEN
REGISTER
SERVICE
SVC
TEST
UPDATE
    Removing Metadata Statistics
UPD_DS
UPD_SEC
VERSION
VERSION_HISTORY
VIEW
XML
    XML Samples
38
Part VIII
39
Transaction Support
Security
Data Types
Configuration Properties
Platform-specific Information
    UNIX Platforms
        Verifying Environment Variables
        Relinking to the Adabas Driver on UNIX Platforms
        Accessing 64 Bit Adabas
    z/OS Platforms
        Specifying the Adabas SVC
        Configuring AIS to Run in multiClient Mode
Defining an Adabas Data Source
    Defining the Adabas Data Source Connection
    Configuring the Adabas Data Source Properties
Setting Up Adabas Data Source Metadata (Using the Import Manager)
    Selecting the DDM Declaration Files
    Applying Filters
    Selecting Tables
    Import Manipulation
        Import Manipulation Screen
        Field Manipulation Screen
    Metadata Model Selection
    Import the Metadata
Setting Up Adabas Data Source Metadata (Traditional Method)
    Importing Attunity Metadata from DDM Files
    Exporting Predict Metadata into Adabas ADD
Testing the Adabas Data Source
40
z/OS
    Defining an ODBCINI File
    Defining the Data Source Connection
    Configuring the Data Source
OS/400
    Defining the Data Source Connection
    Configuring the Data Source
UNIX and Windows
    Defining the Data Source Connection
    Configuring the Data Source
41
42
43
44
45
        Selecting a PCB
        DLI Samples
Configuration Properties
    IMS/DB DLI Configuration Properties
    IMS/DB DBCTL Configuration Properties
    IMS/DB DBDC Configuration Properties
    Configuring Advanced Data Source Properties
Transaction Support
    Using Attunity Connect with One-phase Commit
Hospital Database Example
Defining the IMS/DB DLI Data Source
    Defining the IMS/DB DLI Data Source Connection
    Configuring the IMS/DB DLI Data Source
    Setting Up the Daemon Workspace
Defining the IMS/DB DBCTL Data Source
    Defining the IMS/DB DBCTL Data Source Connection
    Configuring the IMS/DB DBCTL Data Source
    Accessing IMS/DB Data under CICS
Defining the IMS/DB DBDC Data Source
    Defining the IMS/DB DBDC Data Source Connection
    Configuring the IMS/DB DBDC Data Source
    Accessing IMS/DB Data under IMS/TM
Setting Up IMS/DB Metadata
    Selecting the Input Files
    Applying Filters
    Selecting Tables
    Matching DBD to COBOL
    Import Manipulation
        Import Manipulation Screen
        Field Manipulation Screen
    Metadata Model Selection
    Import the Metadata
46
    Locking Levels
    Isolation Levels
Security
Data Types
Defining an Informix Data Source
    Defining the Informix Data Source Connection
    Configuring the Informix Data Source Properties
Testing the Informix Data Source
    Sample Log File Explained
47
48
49
50
51
Security
Data Types
Platform-Specific Information
    UNIX Platforms
        Verifying Environment Variables on UNIX Platforms
        Linking to Oracle Libraries on UNIX Platforms
    OpenVMS Platform
        Verifying Environment Variables on OpenVMS Platforms
        Linking to Oracle Libraries on OpenVMS Platforms
Defining the Oracle Data Source
    Defining the Oracle Data Source Connection
    Configuring the Oracle Data Source Properties
    Configuring Table and Column Names to be Case Sensitive
    Checking Oracle Environment Variables
Testing the Oracle Data Source
    Sample Log File
52
53
Data Types .........................................................................................................    53-3
Defining the RMS Data Source .......................................................................................    53-3
    Defining the RMS Data Source Connection ........................................................................    53-3
    Configuring the RMS Data Source ................................................................................    53-4
Setting Up the RMS Data Source Metadata with the Import Manager ....................................................    53-5
    Selecting the Input Files ......................................................................................    53-6
    Applying Filters ...............................................................................................    53-8
    Selecting Tables ...............................................................................................    53-9
    Import Manipulation ............................................................................................   53-10
        Import Manipulation Screen .................................................................................   53-10
        Field Manipulation Screen ..................................................................................   53-12
    Metadata Model Selection .......................................................................................   53-17
    Import the Metadata ............................................................................................   53-19
Importing Attunity Metadata Using the RMS_CDD Import Utility .......................................................   53-20
54
55
56
Configuration Properties ...........................................................................................
Transaction Support ................................................................................................
Data Types .........................................................................................................
Platform-Specific Information ......................................................................................
    Verifying Environment Variables on UNIX Platforms ..............................................................
Defining the Sybase Data Source ....................................................................................
    Defining the Sybase Data Source Connection .....................................................................
    Configuring the Sybase Data Source Properties ..................................................................
    Checking Sybase Environment Variables ..........................................................................
57
58
59
Defining a VSAM Data Source ........................................................................................    59-6
    Defining the VSAM Data Source Connection .......................................................................    59-6
    Defining the VSAM (CICS) Data Source Connection ................................................................    59-7
    Configuring the VSAM Data Source Properties ....................................................................    59-9
    Configuring the VSAM (CICS) Data Source Properties .............................................................   59-10
Setting Up the VSAM Data Source Metadata ...........................................................................   59-11
    Selecting the COBOL Files ......................................................................................   59-12
    Applying Filters ...............................................................................................   59-14
    Selecting Tables ...............................................................................................   59-16
    Import Manipulation ............................................................................................   59-16
        Import Manipulation Screen .................................................................................   59-17
        Field Manipulation Screen ..................................................................................   59-18
    Create VSAM Indexes ............................................................................................   59-24
    Assigning File Names ...........................................................................................   59-25
    Assigning Index File Names .....................................................................................   59-26
    Metadata Model Selection .......................................................................................   59-27
    Importing the Metadata .........................................................................................   59-29
Part IX

60
61
    Parameter Descriptions .........................................................................................    61-3
Transaction Support ................................................................................................    61-7
Security ...........................................................................................................    61-7
Data Types .........................................................................................................    61-7
Platform-specific Information ......................................................................................    61-7
    Windows Platforms and AIS Procedures (ADO Considerations) ......................................................    61-7
    HP NonStop Platforms and Attunity Connect Procedures ...........................................................    61-8
    Load Modules and DLLs on MVS ...................................................................................    61-9
    Descriptors on OpenVMS .........................................................................................   61-10
    OS/400 Issues ..................................................................................................   61-10
Defining the Procedure Data Source .................................................................................   61-10
    Defining the Procedure Data Source Connection ..................................................................   61-10
    Configuring the Procedure Data Source ..........................................................................   61-10
Setting Up Procedure Data Source Metadata ..........................................................................   61-11
    Defining Return Values .........................................................................................   61-12
    Defining Input and Output Arguments ............................................................................   61-13
Testing the Procedure Data Source ..................................................................................   61-16
Executing a Procedure ..............................................................................................   61-16
62
Part X
Adapters Reference
63
64
65
66
Setting Up Legacy Application Metadata .............................................................................
    Importing Attunity Metadata from PCML Files ....................................................................
    Defining Interactions and Records ..............................................................................
        Defining Interaction Properties ............................................................................
        Defining Schema Records ....................................................................................
Configuring a Trigger for the Legacy Plug Adapter ..................................................................
67
68
Part XI

69 Database Adapter
Overview ...........................................................................................................    69-1
    Supported Versions and Platforms ...............................................................................    69-1
    Supported Features .............................................................................................    69-1
Configuration Properties ...........................................................................................    69-2
Metadata ...........................................................................................................    69-2
Security ...........................................................................................................    69-2
SQL Interaction Types ..............................................................................................    69-2
    Database Query Interaction .....................................................................................    69-2
    Database Modification Interaction ..............................................................................    69-3
    Stored Procedure Call Interaction ..............................................................................    69-3
Transaction Support ................................................................................................    69-4
Interaction Parameters .............................................................................................    69-4
Defining the Database Adapter ......................................................................................    69-4
    Defining the Database Adapter Connection .......................................................................    69-5
    Configuring the Database Adapter ...............................................................................    69-5
Configuring Database Adapter Interactions ..........................................................................    69-5
    Automatically Creating Interactions ............................................................................    69-6
    Manually Creating Interactions .................................................................................    69-8
        Specifying Parameters ......................................................................................   69-20
Testing Database Adapter Interactions ..............................................................................   69-21
Creating SQL Queries ...............................................................................................   69-21
70 Query Adapter
Overview ...........................................................................................................    70-1
    Supported Versions and Platforms ...............................................................................    70-1
    Features Highlights ............................................................................................    70-1
Metadata ...........................................................................................................    70-1
Security ...........................................................................................................    70-2
Transaction Support ................................................................................................    70-2
Predefined Interactions ............................................................................................    70-2
    callProcedure Interaction ......................................................................................    70-2
        Input Record ...............................................................................................    70-2
        Output Record ..............................................................................................    70-3
    ddl Interaction ................................................................................................    70-4
        Input Record ...............................................................................................    70-4
        Output Record ..............................................................................................    70-5
    getSchema Interaction ..........................................................................................    70-6
        Input Record ...............................................................................................    70-6
    query Interaction ..............................................................................................    70-8
        Input Record ...............................................................................................    70-8
        Output Record ..............................................................................................    70-9
        Output Data Formats ........................................................................................    70-9
    setErrorAction Interaction .....................................................................................   70-13
        Input Record ...............................................................................................   70-13
    update Interaction .............................................................................................   70-13
        Input Record ...............................................................................................   70-13
        Output Record ..............................................................................................   70-14
    Interactions for Internal Use ..................................................................................   70-15
Using the Query Adapter ............................................................................................   70-16
71
Part XII

72
73
74
Transaction Support ................................................................................................
Data Types .........................................................................................................
Security ...........................................................................................................
Platform Specific Information ......................................................................................
Configuring the Adabas CDC .........................................................................................
    Identifying the Adabas CDC in the Adabas System ................................................................
Defining Adabas CDC in Attunity Studio .............................................................................
    Configuring the Data Source ....................................................................................
    Configuring the CDC Service ....................................................................................
75
76
77
Change Metadata ....................................................................................................
Transaction Support ................................................................................................
Data Types .........................................................................................................
Security ...........................................................................................................
Platform-specific Information ......................................................................................
Setting Up Enscribe to Use the Attunity Enscribe CDC Agent .........................................................
Adding the Enscribe Agent to Attunity Studio .......................................................................
    Configuring the Data Source ....................................................................................
    Configuring the CDC Service ....................................................................................
78
79
    User Defined Data Types (UDT) ..................................................................................    79-7
Security ...........................................................................................................    79-8
Platform Specific Information ......................................................................................    79-8
Setting up the SQL Server CDC in Attunity Studio ...................................................................    79-8
Enabling MS SQL Replication ........................................................................................   79-11
    MS SQL Server 2000 Replication .................................................................................   79-11
    MS SQL Server 2005 Replication .................................................................................   79-11
Configuring Security Properties ....................................................................................   79-12
Setting up Log On Information ......................................................................................   79-12
Setting up the Database ............................................................................................   79-13
    MS SQL Server 2000 Settings ....................................................................................   79-13
    MS SQL Server 2005 Settings ....................................................................................   79-14
Setting Up the TLOG Miner (LGR) ....................................................................................   79-14
    Call the LGR Service Interface .................................................................................   79-14
    Configuring the Template Input File ............................................................................   79-15
    Registering the TLOG Miner (LGR) Service .......................................................................   79-18
    Setting the Recovery Policy ....................................................................................   79-18
Testing Attunity's Microsoft SQL Server CDC Solution ...............................................................   79-19
Handling Metadata Changes ..........................................................................................   79-19
Environment Verification ...........................................................................................   79-20
    Verify the MS SQL Server Version ...............................................................................   79-20
    Ensure that the Service is Registered ..........................................................................   79-21
    Verify that the LGR Service is Running .........................................................................   79-22
    Viewing the Service Greetings ..................................................................................   79-23
    Check the Output Files .........................................................................................   79-23
80
81
82
83
84
Configuring the Logger .............................................................................................    84-5
    Creating the Logstream .........................................................................................    84-5
        Managing the MVS Logstream .................................................................................    84-5
    Creating the CDC$PARM Data Set .................................................................................    84-6
    Updating Jobs and Scripts ......................................................................................    84-6
        Updating Jobs for Activating CDC JRNAD .....................................................................    84-6
        Updating Jobs for Using the Logical Transaction Manager ....................................................    84-6
        Update the REXX Scripts ....................................................................................    84-7
Setting up the VSAM Batch Agent in Attunity Studio .................................................................    84-7
    Configuring the Data Source ....................................................................................    84-7
    Configuring the CDC Service ....................................................................................    84-7
Part XIII
Interface Reference

85
        Calling the Transaction ....................................................................................   85-25
        Transaction Output .........................................................................................   85-25
    CICS Connection Pooling under CICS .............................................................................   85-26
        Using Connection Pooling under CICS ........................................................................   85-26
        CICS Connection Pool Flow ..................................................................................   85-26
            Control Operations Flow ................................................................................   85-27
            3GL Operations Flow ....................................................................................   85-28
        ATTCALL Program Interface ..................................................................................   85-28
            COMMAREA ...............................................................................................   85-28
            Control Protocol .......................................................................................   85-29
            3GL Protocol ...........................................................................................   85-30
        ATTCNTRL Program ...........................................................................................   85-31
            ATTCNTRL Parameters ....................................................................................   85-32
        Setting up 3GL under CICS ..................................................................................   85-33
            Create a Log File ......................................................................................   85-33
            CICS Definitions .......................................................................................   85-33
    IMS/TM as a Client Invoking an Application Adapter (z/OS Only) .................................................   85-33
        Setting Up the IBM z/OS Machine ............................................................................   85-34
        Setting Up a Call to the Transaction .......................................................................   85-34
        Calling the Transaction ....................................................................................   85-35
            C Call .................................................................................................   85-36
            COBOL Call .............................................................................................   85-36
        The Transaction Output .....................................................................................   85-37
86
87
JDBC API Conformance ...............................................................................................
    Supported Interfaces ...........................................................................................
    Supported Classes ..............................................................................................
    DataSource Properties ..........................................................................................
    ConnectionPool Data Source and XADataSource Interface Properties ...............................................
    Connection Pooling Properties ..................................................................................
JDBC Client Interface ..............................................................................................
JDBC Sample Program ................................................................................................
88
89
OLE DB Data Types ..................................................................................................   89-10
    Mapping SQL Data Types to OLE DB Data Types ....................................................................   89-12
ADO Conformance Level ..............................................................................................   89-12
    OLE DB Interfaces and Methods ..................................................................................   89-13
    OLE DB Properties ..............................................................................................   89-15
        Initialization Properties ..................................................................................   89-15
        Data Source Properties .....................................................................................   89-15
        Data Source Information Properties .........................................................................   89-16
        Session Properties .........................................................................................   89-17
        Rowset Properties ..........................................................................................   89-17
        Specific Properties ........................................................................................   89-19
90
Part XIV
Appendixes
    CREATE INDEX Statement .........................................................................................
        Keywords and Options .......................................................................................
VIEW Statements ....................................................................................................
    CREATE VIEW Statement ..........................................................................................
        Keywords and Options .......................................................................................
    DROP VIEW Statement ............................................................................................
        Keywords and Options .......................................................................................
Stored Procedure Statements ........................................................................................
    CREATE PROCEDURE Statement .....................................................................................
        Keywords and Options .......................................................................................
    DROP PROCEDURE Statement .......................................................................................
        Keywords and Options .......................................................................................
    CALL Statement .................................................................................................
        Keywords and Options .......................................................................................
Synonym Statements .................................................................................................
    CREATE SYNONYM Statement .......................................................................................
        Keywords and Options .......................................................................................
    DROP SYNONYM Statement .........................................................................................
        Keywords and Options .......................................................................................
GRANT Statement ....................................................................................................
    Keywords and Options ...........................................................................................
Transaction Statements .............................................................................................
    BEGIN Statement ................................................................................................
    COMMIT Statement ...............................................................................................
    ROLLBACK Statement .............................................................................................
    Constant Formats ...............................................................................................
    Expressions ....................................................................................................
        Operator Precedence ........................................................................................
    Single Quotation Marks in String Expressions
Functions
    Aggregate Functions
        Additional Information
    Conditional Functions
    Data Type Conversion Functions
    Date and Time Functions
        Date Format
        Time Format
        Timestamp Format
        Date Comparison Semantics
    Numeric Functions and Arithmetic Operators
    String Functions
Parameters
Search Conditions and Comparison Operators
    Keywords and Options
Passthru Query Statements (Bypassing Query Processing)
    Keywords and Options
Reserved Keywords
D  COBOL Data Types to Attunity Data Types
E  Editing XML Files in Attunity Studio
Preparing to Edit XML Files in Attunity Studio
Making Changes to the XML File
    Remove Objects
    Add DTD Information
    Edit Namespaces
    Add Elements and Attributes
    Replace an Element
Index
Attunity welcomes your comments and suggestions on the quality and usefulness of this publication. Your input is an important part of the information used for revision.
Did you find any errors? Is the information clearly presented? Do you need more information? If so, where? Are the examples correct? Do you need more examples? What features did you like most about this manual?
If you find any errors or have any other suggestions for improvement, please indicate the title and part number of the documentation and the chapter, section, and page number (if available). You can send comments to us in the following ways:
Electronic mail: support@[Link]
FAX: (781) 213-5240. Attn: Documentation and Training Manager
Postal service:
Attunity Incorporated
Documentation and Training Manager
70 Blanchard Road
Burlington, MA 01803 USA

If you would like a reply, please give your name, address, telephone number, and electronic mail address (optional). If you have problems with the software, please contact your local Attunity Support Services.
Preface
This guide is the primary source of user and reference information on AIS (Attunity Integration Suite), which enables integration of data across platforms and formats. This document applies to the IBM z/OS Series, OS/400, OpenVMS, UNIX, and Windows platforms. This preface covers the following topics:
Audience
This manual is intended for Attunity integration administrators who perform the following tasks:
Installing and configuring the Attunity Integration Suite
Diagnosing errors
Using AIS to access data
Note:
You should understand the fundamentals of database use and the Microsoft Windows operating system before using this guide to install or administer the Attunity Integration Suite (AIS).
Organization
This document contains:

Part I, "Getting Started with AIS"
Part II, "Attunity Connect"
Part III, "Attunity Stream"
Part IV, "Attunity Federate"
Part V, "Attunity Studio"
Part VI, "Operation and Maintenance"
Part VII, "Utilities"
Part VIII, "Data Source Reference"
Part IX, "Procedure Data Source Reference"
Part X, "Adapters Reference"
Part XI, "Non-Application Adapters Reference"
Part XII, "CDC Agents Reference"
Part XIII, "Interface Reference"
Part XIV, "Appendixes"
Related Documentation
Printed documentation is available at the Attunity Web site:
[Link]
You can download release notes, installation documentation, white papers, or other types of documentation. You must register online before downloading any documents.
Conventions
This section describes the conventions used in the text and code examples of this documentation set. It describes:
Conventions in Text
Conventions in Code Examples
Conventions for Windows Operating Systems
Conventions in Text
We use various conventions in text to help you more quickly identify special terms. The following table describes those conventions and provides examples of their use.
Convention: Bold
Meaning: Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both.
Example: When you specify this clause, you create an index-organized table.

Convention: Italics
Meaning: Italic typeface indicates book titles or emphasis.
Example: Oracle Database Concepts. Ensure that the recovery catalog and target database do not reside on the same disk.

Convention: UPPERCASE monospace (fixed-width) font
Meaning: Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, RMAN keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles.
Example: You can specify this clause only for a NUMBER column. You can back up the database by using the BACKUP command. Query the TABLE_NAME column in the USER_TABLES data dictionary view. Use the DBMS_STATS.GENERATE_STATS procedure.

Convention: lowercase monospace (fixed-width) font
Meaning: Lowercase monospace typeface indicates executables, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names, and connect identifiers, as well as user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Example: Enter sqlplus to open SQL*Plus. The password is specified in the orapwd file. The department_id, department_name, and location_id columns are in the [Link] table. Set the QUERY_REWRITE_ENABLED initialization parameter to true. Connect as oe user. The JRepUtil class implements these methods.

Convention: lowercase italic monospace (fixed-width) font
Meaning: Lowercase italic monospace font represents placeholders or variables.
Example: You can specify the parallel_clause. Run Uold_release.SQL where old_release refers to the release you installed prior to upgrading.
The following table describes typographic conventions used in code examples and provides examples of their use.
Convention: [ ]
Meaning: Brackets enclose one or more optional items. Do not enter the brackets.
Example: DECIMAL (digits [ , precision ])

Convention: { }
Meaning: Braces enclose two or more items, one of which is required. Do not enter the braces.
Example: {ENABLE | DISABLE}
lv
Convention: |
Meaning: A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar.
Example: {ENABLE | DISABLE}  [COMPRESS | NOCOMPRESS]

Convention: ... (horizontal ellipsis)
Meaning: Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code.
Example: CREATE TABLE ... AS subquery;  SELECT col1, col2, ... , coln FROM employees;

Convention: . . . (vertical ellipsis)
Meaning: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.
Example:
SQL> SELECT NAME FROM V$DATAFILE;
NAME
------------------------------------
/fsl/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
. . .
/fsl/dbs/tbs_09.dbf
9 rows selected.
Convention: Other notation
Meaning: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.
Example: acctbal NUMBER(11,2);  acct CONSTANT NUMBER(4) := 3;

Convention: Italics
Meaning: Italicized text indicates placeholders or variables for which you must supply particular values.
Example: CONNECT SYSTEM/system_password  DB_NAME = database_name

Convention: UPPERCASE
Meaning: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.
Example: SELECT last_name, employee_id FROM employees;  SELECT * FROM USER_TABLES;  DROP TABLE [Link];

Convention: lowercase
Meaning: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Example: SELECT last_name, employee_id FROM employees;  sqlplus hr/hr  CREATE USER mjones IDENTIFIED BY ty3MU9;
Convention: File and directory names
Meaning: File and directory names are not case sensitive. The following special characters are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the file name begins with \\, then Windows assumes it uses the Universal Naming Convention.
Example: c:\winnt"\"system32 is the same as C:\WINNT\SYSTEM32

Convention: C:\>
Meaning: Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this manual.
Example: C:\attunity\NACV_UTIL>

Convention: Special characters
Meaning: The backslash (\) special character is sometimes required as an escape character for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters.
Example:
C:\>exp scott/tiger TABLES=emp QUERY=\"WHERE job='SALESMAN' and sal<1600\"
C:\>imp SYSTEM/password FROMUSER=scott TABLES=(emp, dept)
What's New
AIS, the Attunity Integration Suite, version 5.1 introduces significant enhancements, support commitments, bug fixes, and continued improvements for ease of use.
Continuous CDC
Beginning with version 5.1, Attunity Stream enables ETL tools and programs to use standard SQL to query for data changes, continuously feeding change records for processing and effectively working in real time. This provides an alternative to traditional ETL processing, which executes jobs periodically (for example, every 15 minutes) and therefore adds latency between each pass. Continuous CDC provides a solution for data integration scenarios that require very low latency (near real time). This capability is invoked with a simple SQL statement. See Reading Change Tables Continuously.
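As a rough illustration, such a statement is an ordinary query against a change table; the table and column names below (employees_ct, context, operation) are hypothetical stand-ins for an actual change table and its control columns produced by a CDC agent, not a reference to a specific AIS schema:

```sql
-- Hedged sketch: query a hypothetical change table exposed by Attunity Stream.
-- In continuous mode, the statement keeps returning change records as the
-- CDC agent captures them, rather than terminating when the currently
-- available rows are exhausted.
SELECT context, operation, employee_id, salary
FROM   employees_ct;   -- hypothetical change table name
```

An ETL tool can treat this result set as an unbounded stream, processing each change record as it arrives instead of scheduling repeated batch extractions.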
Generic Attunity Connect features, including the Procedure Driver, LegacyPlug, Database Adapter, and Query Adapter
SQL Server 2005 Driver
Oracle Driver
Oracle CDC Agent
Note: The installation of the new 64-bit Attunity Server kits for Windows include the 32-bit Server components. This addresses the various options on the Microsoft
platform, where some applications still require 32-bit functionality although running on 64-bit systems. See the installation guide for the operating system you are working with for more information.
Note: Windows 64-bit clients have an installation utility that also installs a new ODBC setup wizard. All other thin clients are distributed as Zip files.
Support for [Link] 2.0 APIs, including the extended API set, providing richer capabilities to developers and supporting applications that use these APIs.
The [Link] client is now merged with NETACX capabilities. This allows the [Link] client to interact with Application Adapters and exchange hierarchical XML documents with back-end systems.
The [Link] client now supports design-time integration with Visual Studio 2005. Users can work with the Visual Studio Server Explorer component and use the standard Data Connections option to add a connection to an AIS data source using an integrated Add Connection dialog box. In this way, users can define datasets in the integrated Visual Studio environment and define the Attunity Connect metadata definitions.
Version 5.1 improves the accuracy and performance of the Attunity Connect Update Statistics utility. Using a new algorithm, the Update Statistics utility can now estimate statistics on very large tables more accurately, and much faster.
Part I
Getting Started with AIS
This part contains the following topics:
Introducing the Attunity Integration Suite
Setting up Attunity Connect, Stream, and Federate
Binding Configuration
Setting up Daemons
Managing Metadata
Working with Metadata in Attunity Studio
Handling Arrays
Using SQL
Working with Web Services
1  Introducing the Attunity Integration Suite
This section describes the Attunity Integration Suite (AIS) and its benefits. It contains the following topics:
AIS Overview
The Attunity Integration Suite (AIS) is a comprehensive integration platform for on-demand access and integration of enterprise data sources and legacy applications. AIS runs on many platforms, such as Windows, UNIX, OS/400, and mainframes, and provides many integration possibilities. The suite's integration services are provided by the following products:
Attunity Connect: Universal, Standard Data Access to enterprise data sources. Attunity Federate: Virtual Data Federation (EII), integrating data on the fly from heterogeneous sources. Attunity Stream: Change data capture, allowing efficient and real-time data movement and processing.
The following diagram provides an overview of AIS and shows how its components can be used to solve many integration needs on each enterprise platform. It also includes Attunity Studio, a GUI-based tool that lets you configure the Attunity servers in your system.
Figure 1-1 AIS Overview
The Attunity Integration Suite (AIS) provides a modular solution that allows organizations to address different tactical requirements quickly, while relying on a comprehensive platform that allows reusability and addresses many needs. The following section provides some examples of uses for AIS and its benefits.
AIS Use
The Attunity Integration Suite provides many integration possibilities. This section describes some uses and related applications or projects, and how AIS simplifies access and integration. The following are some of the common uses for AIS:
SQL Connectivity
SQL is a well-known skill set for application developers and a common interface that applications use to retrieve data. While relational databases provide SQL connectivity out of the box, legacy data sources and file systems do not. Attunity Connect helps in this area by making older, non-relational data sources appear relational and providing access to them using standard SQL. Typical applications that require SQL connectivity include:
Reporting tools: for designing and providing reports to business users
J2EE or .NET applications
ETL tools: for bulk loading of source data
Some typical scenarios that use Attunity Connect for SQL Connectivity include connecting to Adabas, VSAM, IMS/DB, RMS, Enscribe, and ISAM data sources.
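As a sketch of what this looks like in practice, once a non-relational source is defined in AIS, applications can query it like any relational table. The data source name (legacy) and table (orders) below are hypothetical, and the datasource:table qualifier is shown for illustration only; consult the SQL reference for the exact naming syntax:

```sql
-- Hedged sketch: standard SQL issued against a hypothetical VSAM-backed
-- table exposed through Attunity Connect. The driver translates the
-- relational query into native file-system access on the server.
SELECT   order_id, customer_name, SUM(amount) AS total
FROM     legacy:orders                 -- hypothetical datasource:table name
WHERE    order_date >= '2008-01-01'
GROUP BY order_id, customer_name;
```

To the reporting tool or application issuing this statement, the legacy file behaves like an ordinary relational table.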
Application Connectivity
XML is now a standard interface for applications, accessed through standard APIs or Web services. Many newer applications offer open interfaces; legacy applications, however, do not, and interfacing with their embedded business logic is difficult. Attunity Connect defines virtual services on top of these legacy applications that provide seamless interoperability. Applications that require service/XML-based application connectivity include:
EAI tools: for invoking business logic as part of an automated process
J2EE or .NET applications: that need to reuse existing business logic
Legacy applications: that need to be extended and call off-platform services
Some typical scenarios that use Attunity Connect for application connectivity include CICS, IMS/TM, Tuxedo, COBOL, and RPG.
Adapters
Many enterprise application integration (EAI), enterprise service bus (ESB), and business process management (BPM) tools need to integrate with existing applications and data sources. Attunity Connect removes the barrier to integrating with legacy applications and data sources by providing standard adapter interfaces and plug-ins to leading adapter frameworks. Adapters include inbound and outbound capabilities that allow you to send messages to the adapter or receive messages from it. Typical applications that require adapters include:
Integration brokers: such as BizTalk Server, Oracle BPEL, BEA WLI, etc.
ESB and BPM platforms
Typical usage scenarios employing Attunity Connect include application connectivity to CICS, IMS/TM, Tuxedo, COBOL, and RPG, as well as to enterprise data sources.
ETL: for data warehousing and complex data movement
Data replication: for rehosting data (for example, to enable reporting)
Data synchronization: for maintaining integrity between systems
Typical tools that are used include IBM WebSphere DataStage, Microsoft SQL Server Integration Services (SSIS), Oracle Warehouse Builder, Business Objects Data Integrator, and Sunopsis.
Operating Systems
OS: Supported versions
Windows x86 (32-bit): Windows 2000, XP, Vista, and Windows Server 2003
Windows x64 (64-bit): Windows XP and Windows Server 2003
Windows IA64 (Itanium): Windows XP, Windows Server 2003
Linux RedHat: AS 3.0 through 5.0
AIX: Versions 5.2, 5.3
Solaris: Versions 2.8-2.10
HP UX: 11.11 (11i v1) and above only. Itanium systems are now supported by most Attunity data sources. For HP UX on Itanium, supported versions are 11.23 and 11.31.
OpenVMS (Alpha): Versions 6.2-8.3
OpenVMS (Itanium): Versions 8.2-1 to 8.3
NonStop (Himalaya): G06.08 to G06.26
NonStop (Itanium): H-Series
OS/400: Versions 5.1 to 5.3
z/OS: Versions 1.1 through 1.8. Beginning with version 5.1.
Interfaces
Interface [Link] Supported versions .NET 2.0
Supported versions Version 2.0 Version 2.5 Version 2.5 Versions 1.0, 1.5
2  Setting up Attunity Connect, Stream, and Federate
This section contains the following topics:
Overview
Setting up Machines
Administration Authorization
License Management
Importing and Exporting XML in Attunity Studio
Overview
Attunity Studio is used to configure and manage access to applications, data, and events on all machines running AIS. You make configurations in the Design Perspective. This perspective has tabs for configuration and metadata. These tabs enable the following configuration:
Setting up access to machines running AIS. Configuring the daemon, which manages communication between AIS machines. Setting up the access to the application, data or event on these machines. Configuring metadata: To manage Attunity Metadata (ADD) for data sources that do not have metadata (such as DISAM), or have metadata that cannot be used by AIS. To view relational metadata. To extend metadata in ADD for relational data sources that require additional information not supplied by the native metadata (such as statistics for various relational data sources). To manage a snapshot of relational metadata converted to ADD. This is also called local copy metadata. To manage Application Adapter definitions (adapter metadata). To manage event definitions (event metadata).
You manage Attunity products in the Attunity Studio Runtime Manager perspective. Management includes the following:
A perspective consists of views and an editor area. Views are used to navigate and manage resources. The editor area is used to carry out the main tasks.
Figure 2-1 Studio Main Screen
Note:
You can use the properties that are displayed in the editor to find a machine where a data source or adapter is located, especially when several data sources or adapters on different machines have similar names. Identifying the location is useful when working on the Metadata tab of the Design Perspective.
To switch between the Design and Runtime Manager perspectives, click Perspective (at the top right of the workbench) and select the required perspective from the list. For a list of buttons, see Workbench Icons.
Special Note:
To work with solutions in Attunity Studio, when using Turkish, add the following switches to the Target path in the Attunity Studio shortcut properties: -nl en
For example: "C:\Program Files\Attunity\Studio1\[Link] -nl en" When you open Attunity Studio for the first time, the Welcome Screen is displayed.
Setting up Machines
You use the Design perspective Configuration view to configure AIS machines, and the applications, data, and events on those machines.

To add a machine
1. Open Attunity Studio.
2. In the Design perspective Configuration view, right-click the Machines folder and select Add Machines. The Add machine screen opens.
Host name/IP address: Enter the name of the machine on the network, or click Browse to browse all the machines running a daemon listener on the specified port currently accessible over the network.
Port: Enter the number of the port where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).
You indicate the machine's administrator when the machine is installed or by using the ADD_ADMIN operation in the NAV_UTIL utility.
Password: Enter the password of the machine's administrator. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field empty.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.
To edit a machine
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.
2. Right-click the machine you want to edit and select Open. The Machine editor with the information for the existing binding opens in the editor area. You can make the following changes:
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified.
Anonymous login: Select this if users can access the machine without password authentication. When this is selected, the User name and Password fields are not available.
User name: Enter the machine administrator's user name.
Password: Enter the machine administrator's password. This is the password for the user entered in the User name field.
After you add a machine, it is displayed in the Configuration view under the Machine folder. You can edit the machine's login information, or configure bindings, daemons, and users for each machine. For more information, see:
Setting up Data Sources and Events with Attunity Studio
Setting up Daemons
User Profiles and Managing a User Profile in Attunity Studio
To define an offline machine
1. In the Configuration view of the Design perspective, right-click the Machines folder and select Add Offline Design Machine. The Add offline design machine screen opens.
2. 3.
You can define all available resources on this machine. You can also set up metadata using a metadata import utility. Every resource is available on the design machine, no matter on which platform the resource needs to exist. For example, both HP NonStop data sources and z/OS data sources are available, even though on completion, you can only drag and drop the NonStop definitions (such as an Enscribe data source) to an HP NonStop machine.
Administration Authorization
You can provide access to Attunity Studio for:
Administrators: People granted access as administrators can add and edit resources in all areas of AIS.
Designers: People granted access as designers can add adapters, data sources, and CDC agents and create and edit definitions for them.
Users: People granted access as users have read-only access to all resources in AIS.
To grant administrative authorization in Attunity Studio
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.
2. Right-click the machine you want to grant privileges to and select Administrative Authorization. The Administrative Authorization editor opens in the editor area. The name of the machine that you are granting access privileges to is shown on the tab at the top of the editor.
3. Click the Everyone checkbox at the top of the Administrator, Designer, or User sections to allow all people who use AIS access as the selected type of user.
4. Clear the check box at the top of one or more of the sections to grant specific people access to that area. For more information, see Granting Access to Specific Users and Groups.
Granting Access to Specific Users and Groups

To grant a user or group access rights, add them to the list for the type of rights you want to grant them. For example, you can grant user1 administrator rights by adding user1 to the list in the administrator section. To grant rights to all users and groups, select the Everybody check box for any of the three sections.

To add users or groups
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, click Add user and enter the name of a valid user in the Add user screen. Make sure that the name entered matches a valid user account. To add groups to the list, click Add group and enter the name of a valid group in the Add group screen. Make sure that the name entered matches a valid group account.
2. Click OK to close the screen. The name of the user or group is added to the field.
To rename a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group screen to the name you want to use.
3. Click OK to close the screen. The changes are entered in the field.
To remove a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.
License Management
Before you can work with any product in AIS, you must register the product. You can register the product with a valid license file. The following sections describe how you can use license management.
Registering a Product
You need to register the software before you can access data sources on a machine. Your Attunity vendor should provide you with a text file called [Link]. The PAK file contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, which drivers you are authorized to use, and other information. After you make sure that the PAK file is installed on your machine, you must register it before you can use the product.
Notes:
Make sure you are connected to the Internet before carrying out the following procedure.
When you register a product, the new license will overwrite the old license. If you want to register a new product and continue using any previously registered products, then request a single license for all of the products you are using.
To register a product
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.
2. Right-click the machine with the license you want to register, point to License management, and select Register product. The Register Product screen opens.
3. Click Browse and browse to find the license (PAK) file. It is usually located in the directory where AIS is installed. The XML content of the license file is displayed in the screen.
4. Click Register. Attunity Studio contacts the Attunity registration server. A message is displayed stating whether the registration was successful. Contact your Attunity vendor if there is a problem with the registration or if you do not have a license file.
To view license information
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.
2. Right-click the machine with the license you want to view, point to License management, and select View license information. The View License Registration screen opens.
3. Click Save as to create a new license file. You can also read the license information in this screen.
In some cases exporting XML data using Attunity Studio does not export all of the data. If this happens, you can use the IMPORT and EXPORT operations in the NAV_UTIL utility.
To import XML
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:
Machines (folder)
Any specific machine
Data Sources (folder)
Bindings (folders)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)
2. Select Import XML definitions. The Import XML Definitions window opens.
3. Click Browse to open the Import XML Definitions dialog box and browse to the file with the XML data you want to import.
4. Click OK. The data opens at the bottom of the Import XML Definitions screen.
5. Click Finish to close the screen and save the data.
To export XML definitions
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:
Machines (folder)
Any specific machine
Data Sources (folder)
Any specific data source
Bindings (folders)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)
Any specific daemon
2. Select Export XML definitions. The Export XML Definitions window opens.
3. Click Browse to browse to the location where you want to save the XML data.
4. Click OK to save the data.
When you import or export definitions, the XML data for the level you are using is transferred, including the data for all the sublevels. For example, if you export data for a daemon, the data for the binding and all of its data sources, adapters, and events is transferred. However, if you export the data for an adapter, only the data for that adapter is transferred.
3 Binding Configuration
This section contains the following topics:
Binding Configuration Overview
Setting up Bindings in Attunity Studio
Binding Syntax
Sample Binding
Environment Properties
Server Binding
A Binding configuration on a server includes the following:
Definitions for data sources that are accessed using Attunity Integration Suite (AIS).
Shortcuts to data sources on other server machines that can be accessed from the current machine.
Application adapter definitions for applications that can be accessed using AIS, including application-specific properties.
Event queue definitions for event queues that are managed using AIS, including event-specific properties.
Environment properties that apply to all the data sources, adapters, and machines listed in the binding configuration. For more information, see Environment Properties.
Client Binding
A Binding configuration on a client includes the following:
Shortcuts to data sources on other server machines that can be accessed from the current machine.
Environment properties that apply to all the data sources, adapters, and machines listed in the binding configuration. For details, see Environment Properties.
You use Attunity Studio for Adding Bindings and binding configurations or for Editing Bindings that are already in use. NAV is the default binding configuration. You can use this configuration to define all the data sources and adapters you want to access via AIS.
The configuration supplied with the product installation includes the NAV binding. This configuration is used if a specific binding is not defined to access an application, data source, or event queue.
Use Attunity Studio to configure one or more bindings, each with a set of application adapters, data sources, and events. Each binding configuration has its own environment that defines the binding (such as cache sizes for storing information in memory during a session). The following sections describe the required tasks to define a binding:
Adding Bindings
You can set up a number of different bindings. Each binding may be for a different set of applications, data sources, or events. You can also create different binding configurations for the same application adapters, data sources, or events, with each having a different set of requirements. For example, you can set up separate configurations that allow different users access to specific resources.
To add a new binding
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the binding.
Note:
You can add a new binding configuration in a design machine in the offline design mode and later drag and drop the binding to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.
4. Right-click the Bindings folder and select New Binding.
5. Enter a name for the binding in the New Binding window.
Note: Attunity Studio does not support renaming a binding. Changing the binding name can cause problems with the data sources, adapters, and events for that binding.
If you want to change the binding name, you must create a new binding and copy the data sources, adapters, and events to the new binding.
6. Click Finish. The new binding editor opens. See Editing Bindings for information on entering information in the binding editor.
Editing Bindings
The binding editor is used to set the binding environment and to define remote machines for a binding. The following sections describe how to edit the binding:
Setting the Binding Environment in Attunity Studio
Defining Remote Machines in a Binding
To set the binding environment
1. In the Design perspective Configuration view, expand the Bindings folder. Right-click the binding you want to edit and select Open. The binding editor with the information for the existing binding opens in the editor area. Make any changes you want to the binding properties.
2. On the Environment tab, you can do any of the following to work with the binding environment.
Select the Use NAV environment check box. When this is selected, the NAV environment is used for all of the Environment Properties. The property values are set to the NAV values at runtime. The editing controls for each property are disabled. If you want to change any of the Environment Properties, clear the check box.
Click Copy NAV environment to use the default settings, which are the original settings for the NAV binding. This automatically sets all of the environment properties to the NAV values and it allows you to edit individual properties as needed. The property values are created as NAV values during design time.
Click Restore default values to automatically restore the environment settings that were defined in the development environment. This means that the values will revert to the values as they were when the AIS solution was first deployed. All values that were changed in the current session or any previous session (including values changed with NAV_UTIL or Attunity Studio) will return to their original value. In addition, each section in the Environment Properties editor has its own Restore default values button. Click this button to restore the default values for the properties in that section only.
3. Edit any of the environment properties displayed on the Environment tab. For an explanation of the properties, see Environment Properties.
To set up access to a remote machine
1. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area.
2.
3.
4.
Host name/IP address: Enter the name of the machine on the network, or click Browse to browse all the machines running a daemon listener on the specified port that are currently accessible over the network.
Port: Enter the port number where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).
User name: Enter the machine administrator's user name.
Note:
Define the machine administrator when the machine is installed using Attunity Studio or by using the ADD_ADMIN operation in NAV_UTIL.
Password: Enter the machine administrator's password. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field blank.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.
5. Click OK. If the users with access to the remote machines are known, you can set the user profile for the machine by clicking Security on the Machines tab. If the users are not known, you can define them later from the Users folder in the Configuration view. For more information, see Managing a User Profile in Attunity Studio.
Binding Syntax
The Binding settings in XML format include the following statements:
A <remoteMachines> statement, specifying the remote machines that can be accessed from the current machine by using <remoteMachine> statements.
A <datasources> statement, specifying the data sources that can be accessed by using <datasource> statements. This statement specifies the following:
A name to identify the data source
The data source type
General information
<config> statements, specifying specific properties for the data source driver
An <environment> statement, specifying the properties for the specific binding. For details, see Environment Properties.
An <adapters> statement, specifying the application adapters that can be accessed by using <adapter> statements. Each <adapter> statement specifies a name to identify the application, and the application type.
<remoteMachines> Statement
The <remoteMachines> statement lists the names of the accessible servers, using <remoteMachine> statements. These statements are only necessary when you connect to data sources through a shortcut on the client machine. In other cases (such as accessing an application, or a data source using JDBC), the location is specified as part of the Connect String.
<remoteMachine> Statement
The <remoteMachine> statement lists the names and IP addresses of the remote machines whose data sources are accessed using data source shortcuts on the current machine. The names are used as aliases for the IP addresses in the <datasource> statements. This enables you to redefine the location of a group of data sources (on a given machine) by changing the IP address associated with the alias. The format is:
<remoteMachine name="alias" address="address" port="port_number" workspace="workspace" encryptionProtocol="RC4|DES3" firewallProtocol="none|nat|fixednat"/>
Where:
name: The name of the remote machine that is recognized by AIS. The name's maximum length is 32 characters, and it must start with a letter. This name cannot be the name of a data source specified in a <datasources> statement.
Note:
The name does not need to relate to the name of the machine on the network.
address: The IP address of the remote machine.
port: The port on the remote machine where the AIS daemon is running. If you do not specify a port number, the system allocates the default server port, 2551.
workspace: The specific working configuration specified for this binding by the daemon. A workspace must be defined in the daemon configuration on the remote machine.
encryptionProtocol: The protocol used to encrypt network communications. AIS currently supports the RC4 and DES3 protocols.
firewallProtocol: The firewall protocol used. Valid values are none, nat, or fixednat. The default is none. NAT (Network Address Translation) is a firewall protocol where internal IP addresses are hidden. It enables a network to use one set of IP addresses for internal traffic and a second set of addresses for external traffic. NAT translates all necessary IP addresses. However, using NAT requires every access by every client to go through the daemon port, even after a specific server process has been assigned to handle the client. Specifying fixednat for this parameter sets AIS to access this remote machine through a firewall using NAT with a fixed IP address. When the server address is returned to the client and the client sees that the IP is not the IP of the daemon, it ignores the returned IP and uses the daemon's IP instead. It is recommended to use fixednat to access data via a firewall. For more information, see Firewall Support.
Example 3-1 <remoteMachines> statement

<remoteMachines>
  <remoteMachine name="ALPHA_ACME_COM" address="[Link]" />
  <remoteMachine name="SUN_ACME_COM" address="[Link]" port="8888" workspace="PROD" />
</remoteMachines>
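As a rough illustration of how the attributes and defaults described above fit together, the following Python sketch parses a <remoteMachines> fragment with the standard library. It is not part of AIS (which reads the binding internally); the machine names and addresses are invented for the example.

```python
# Illustrative only: shows the documented <remoteMachine> attributes and
# their defaults (port 2551, firewallProtocol "none"). Addresses are made up.
import xml.etree.ElementTree as ET

BINDING_SNIPPET = """
<remoteMachines>
  <remoteMachine name="ALPHA_ACME_COM" address="10.1.1.5" />
  <remoteMachine name="SUN_ACME_COM" address="10.1.1.6" port="8888"
                 workspace="PROD" firewallProtocol="fixednat" />
</remoteMachines>
"""

def parse_remote_machines(xml_text):
    """Return one dict per <remoteMachine>, applying the documented defaults."""
    root = ET.fromstring(xml_text)
    machines = []
    for rm in root.findall("remoteMachine"):
        machines.append({
            "name": rm.get("name"),
            "address": rm.get("address"),
            "port": int(rm.get("port", "2551")),
            "firewallProtocol": rm.get("firewallProtocol", "none"),
        })
    return machines

machines = parse_remote_machines(BINDING_SNIPPET)
```

Note how the first machine picks up the default port and firewall protocol, while the second overrides both.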
<adapters> Statement
This statement lists the accessible application adapters using the <adapter> statement.
<adapter> Statement
An <adapter> statement specifies the name and properties of an AIS application adapter. The basic format is as follows:
<adapter name="name" type="type" definition="definition_name">
  <config .../>
</adapter>
Where:
name: The name of the adapter. The maximum length is 32 characters.
type: The type of the application adapter to be accessed. This value is different for each application adapter. Refer to the specific application adapter for the value of this parameter.
definition: The name of the adapter metadata used to describe the adapter. If the value here is the same as the adapter name, it can be omitted.
Note:
Some adapters have an internal definition, and the value here must be omitted.
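The defaulting rule for the definition attribute can be sketched in a few lines of Python. This is a hypothetical helper, not an AIS function; it only restates the rule above.

```python
# Illustrative sketch (not AIS code): when the definition attribute is
# omitted, the adapter name doubles as the metadata definition name.
import xml.etree.ElementTree as ET

def adapter_definition(adapter_xml):
    """Return the effective definition name for an <adapter> element."""
    elem = ET.fromstring(adapter_xml)
    return elem.get("definition") or elem.get("name")
```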
<config> Statement
A <config> statement specifies the configuration properties of an application adapter. The configuration information is specific to each adapter type. The basic format is as follows:
<adapter name="name" type="type"> <config attribute="value" attribute="value" ... /> </adapter>
Where:
attribute: The name of the configuration property. Attributes are adapter-dependent. For example, the preloaded attribute is set with an event router adapter or event to initiate the adapter as soon as the Daemon is started.
Note:
The preloaded attribute can also be set when the adapter definition is large, so that the time allocated to a server when the user opens a connection is shortened, since the definition has been preloaded.
<datasources> Statement
This statement lists the accessible data sources using the <datasource> statement.
<datasource> Statement
A <datasource> statement specifies the name and type of the data source and the information required to connect to the data source. The basic format is as follows:
<datasource name="name" type="type" attribute="value">
  <config .../>
</datasource>
Where:
name: The name of the data source that is recognized by AIS. The maximum length is 32 characters. The name cannot include hyphens (-). It can include underscores (_). This name cannot be the name of a machine specified in a <remoteMachines> statement.
type: The type of the data source to be accessed. This value is different for each data source driver. Refer to the specific data source driver for the value of this parameter. When you define a data source shortcut (where the data source resides on another machine), the value of this field is REMOTE.
attribute: General data source attributes, such as read only. These attributes are set in Attunity Studio on the Advanced tab for the data source.
Note:
The localCopy and noExtendedMetadata attributes are set automatically on the Metadata tab of the Design perspective when changes are made to native metadata. For more details, see Native Metadata Caching and Extended Native Data Source Metadata.
Table 3-1 (Cont.) Data Source Supported Attributes

Syntax name (attribute: syntaxName="value"): A section name in the [Link] file that describes SQL syntax variations. For further details about this field, see Using the Attunity Connect Syntax File ([Link]). The default syntax file contains the following predefined sections:
OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
ODBC driver and EXCEL data: syntaxName="excel_data"
ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
Oracle driver and Oracle case sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.
Table owner: The name of the table owner that is used if an owner is not indicated in the SQL.
Read only (true): The data source is in read-only mode. All update and data definition language (DDL) operations are blocked.
Repository directory: Indicates the location of a repository for a data source.
Repository name: The name of a repository for a specific data source. The name is defined as a data source in the binding configuration with a type of Virtual and is used to store AIS views and stored procedures specific to the data source, when this is wanted in preference to the default SYS data.
Example 3-2 <datasources> statement

<datasources name="NAV">
  <datasource name="ADABAS" type="ADABAS">
    <config dbNumber="3" predictFileNumber="7"/>
  </datasource>
  <datasource name="DB2" type="DB2">
    <config dbname="person2"/>
  </datasource>
  <datasource name="DEMO" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
  <datasource name="DISAM" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
  <datasource name="SYBASE" type="SYBASE">
    <config server="SYB11_HP" dbName="personnel"/>
  </datasource>
</datasources>
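The naming rules stated earlier (a maximum of 32 characters, no hyphens, underscores allowed) are easy to check programmatically. The following helper is hypothetical, not part of AIS; the restriction to letters, digits, and underscores goes slightly beyond the text, which only calls out hyphens and underscores explicitly.

```python
# Hypothetical validator for the documented <datasource> name constraints:
# at most 32 characters, no hyphens, underscores permitted.
import re

def valid_datasource_name(name):
    """True if `name` satisfies the documented naming rules."""
    if not name or len(name) > 32:
        return False
    if "-" in name:  # hyphens are explicitly disallowed
        return False
    # Letters, digits, and underscores only (an assumption beyond the text).
    return re.fullmatch(r"\w+", name) is not None
```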
Note:
On the HP NonStop platform, the Repository Information objectStoreDir and objectStoreName attributes do not affect Alternate Key Files for the following data sources:
Enscribe
SQL/MP, if a local copy or extended metadata is used
These are always created in the NAVROOT subvolume with uniquely generated filenames.
<config> Statement
This statement specifies the configuration properties of a data source. The configuration information is specific to each data source type. The basic format is as follows:
<datasource name="name" type="type">
  <config attribute="value" attribute="value" ... />
</datasource>
Where:
attribute: The name of the configuration property. value: The value of the configuration property.
Example 3-3 <config> statement

<datasources>
  <datasource name="DEMO" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
</datasources>
Sample Binding
This section shows a sample binding configuration in XML format. You can also view the [Link] XML file in the XML editor. To open this file in the editor:
Right-click on the binding you want to view and select Open as XML.
This displays a graphical interface where you can define the various aspects of a solution. This interface lets you make changes easily without having to manually edit the XML file. For more information, see Editing XML Files in Attunity Studio. If you want to view the XML in its original format, click the Source tab after you open the binding in XML.
The following binding configuration provides information for the NAVDEMO sample data source, a local data source (ORA_EXT, an Oracle database), one remote data source, and one adapter (MathLegacy, which uses the AIS Legacy Plug adapter):
<?xml version="1.0" encoding="ISO-8859-1"?>
<navobj version="...">
  <bindings>
    <binding name="NAV">
      <remoteMachines>
        <remoteMachine name="SUN_ACME_COM" address="[Link]" workspace="PROD"/>
      </remoteMachines>
      <environment name="NAV">
        <debug generalTrace="true"/>
        <misc/>
        <queryProcessor/>
        <optimizer goal="none" preferredSite="server"/>
        <transactions/>
        <odbc/>
        <oledb/>
        <tuning/>
      </environment>
      <datasources name="NAV">
        <datasource name="NAVDEMO" type="ADD-DISAM">
          <config newFileLocation="$NAVDEMO"/>
        </datasource>
        <datasource name="ORA_EXT" type="ORACLE8" connect="@ora8_ntdb"/>
        <datasource name="ORA" type="remote" connect="sun_acme_com"/>
      </datasources>
      <adapters name="NAV">
        <adapter name="MathLegacy" type="LegacyPlug">
          <config dllName="c:\legacy\prc_samples.dll"/>
        </adapter>
      </adapters>
    </binding>
  </bindings>
</navobj>
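A binding document like this can be inspected with any XML tooling. The sketch below (not an AIS utility; it uses a trimmed copy of the sample with the environment omitted) classifies each data source as a local definition or a remote shortcut, following the rule that shortcuts carry type="remote":

```python
# Sketch: classify each <datasource> in a binding as local or a remote
# shortcut (type="remote"), per the binding syntax described above.
import xml.etree.ElementTree as ET

SAMPLE = """<navobj version="1">
  <bindings>
    <binding name="NAV">
      <datasources name="NAV">
        <datasource name="NAVDEMO" type="ADD-DISAM"/>
        <datasource name="ORA_EXT" type="ORACLE8" connect="@ora8_ntdb"/>
        <datasource name="ORA" type="remote" connect="sun_acme_com"/>
      </datasources>
    </binding>
  </bindings>
</navobj>"""

def summarize_datasources(xml_text):
    """Map each data source name to 'local' or 'remote shortcut'."""
    root = ET.fromstring(xml_text)
    summary = {}
    for ds in root.iter("datasource"):
        kind = "remote shortcut" if ds.get("type", "").lower() == "remote" else "local"
        summary[ds.get("name")] = kind
    return summary
```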
Environment Properties
Each Binding configuration includes its own environment, specified in the environment properties.
Note:
When using an ADO front-end application, the environment used is the environment of the first binding configuration used in the program, even if the binding configuration used is changed during the program.
To display environment properties for the binding configuration in Attunity Studio, right-click the binding configuration and select Open. The environment properties are listed in the Environment tab. The following sections describe each category in the Environment Properties editor.
Debug
General
Language
Modeling
ODBC
OLE DB
Optimizer
Query Processor
Temp Features
Transaction
Tuning
XML
Debug
The following list shows the debug properties. The debug properties control what information is reported for debugging purposes.
ACX trace: Select this to write the input XML sent to the adapter and the output XML returned by the adapter to the log file.
GDB trace: Select this to write the driver transactions created using the AIS SDK to the log. For details, refer to Attunity Developer SDK.
General trace: Select this to write general trace information to the log. The default writes only error messages to the log. Note: Changing the default setting can degrade AIS performance.
Query warnings: Select this to generate a log file of Query Processor warnings.
Add timestamp to traced events: Select this to add a timestamp on each event row in the log.
Trigger trace: Select this to log trigger information each time that a database executes a trigger.
Adapter trace:
Control trace:
Query processor trace:
Performance trace:
Binary XML log level: Select the binary XML log level from the list. The following logging levels are available:
Log file: Enter the full path and filename of the log file for messages. The default log file ([Link]) is located in the TMP directory under the directory where AIS Server is installed. To send log messages to the console, instead of a file, set logFile to a minus, "-", character. The following message types are written to the log:
Error messages.
Trace information about the query optimization strategy (when General trace is selected).
For HP NonStop, the default AIS log file is called NAVLOG and is located in the subvolume where the AIS Server is installed. If the log file location is described by a UNIX-type path (such as /G/d0117/ac3300/navlog), then the log file can be viewed from other processes while it is open. Otherwise, the log is not readable while it is open. To view the file, use the following:
FUP COPY filename,, SHARE
For z/OS, the default AIS log file is [Link], where NAVROOT is the high level qualifier specified when AIS Server is installed.
Trace directory: Enter the directory where AIS writes the log files generated by the optimizer (with a PLN extension). The optimizer files include details of the optimization strategy used by AIS. By default, these files are written to the same directory as the log file (see Log file).
Transaction log file: Enter the full path and filename of the log file that logs transaction activity. This log file is used during recovery operations. On Windows platforms, the default log file ([Link]) is written to the same directory as the [Link] file (which is defined by the debug logFile parameter). It is recommended to use the default log file and perform recovery from a PC.
Transaction trace: Select this to write 2PC and XA transaction-related events to the log.
Optimizer trace: Select this to write trace information and information about the Query Optimizer strategy to the log file. If this property is selected, the following properties are also enabled:
Full trace: Select this to enable all optimizer traces.
Trace groups: If using Full trace, you can select this to enable generated groups optimizer traces. Trace groups is unavailable if Full trace is not selected.
Transaction extended logging: Select this for the transaction manager to write additional information about transactions to the log.
General
The following list shows the general properties. The general properties control general configuration properties for the binding.
Compress object store: Select this to allow compressing objects in the repository if they use more than 2K storage. This property is automatically selected for bindings created for a CDC Agent with a Staging Area.
Read V3 definition: Select this if you are upgrading AIS from version [Link]. This property is selected by default. If you do not need to use this behavior, clear the check box.
Temporary directory: Enter the path to the directory where temporary files are written, including the temporary files created for use by hash joins and for sorting files. The default is the current directory. The following describes how to determine where your temporary directory should reside:
1. Select a directory that contains temporary files only. You can then easily remove these files if necessary (for example, if the process stopped in the middle).
2. Select a directory on a disk that has a significant amount of free disk space.
Year 2000 policy: This property defines how two-digit years are converted into four-digit years. Enter a numeric value in this field. Two policies can be used:
Fixed Base Year: If this property is set to a value greater than or equal to 1900, the Fixed Base Year policy is used. In this case, the property value is the first four-digit year after 1900 that can be represented by two digits. For example, if the value is set to 1905, the years 2000-2004 are represented by 00-04. All other two-digit years map to 19xx.
Sliding Base Year: If this property is set to a positive value less than 100, the Sliding Base Year policy is used. In this case, the property value is the number of years ahead of the current year that can be represented by a two-digit number. With each passing year, the earliest year that can be represented by a two-digit number advances by one. When the parameter is not set, or when it is set to a value outside the range of values defined for the above policies, the default value of 5 and the Sliding Base Year policy are used.
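The two policies can be illustrated with a small worked example. The function below is my own sketch of the documented behavior, not an AIS API; the function name and structure are invented.

```python
# Illustrative sketch of the two documented Year 2000 policies.
def expand_year(two_digit, policy, current_year=2008):
    """Convert a two-digit year (0-99) to four digits.

    policy >= 1900   -> Fixed Base Year: two-digit years below
                        (policy - 1900) are 20xx, the rest are 19xx.
    0 < policy < 100 -> Sliding Base Year: up to `policy` years past the
                        current year are representable; anything later
                        wraps back a full century.
    """
    if policy >= 1900:
        pivot = policy - 1900          # e.g. 1905 -> 00-04 mean 2000-2004
        return 2000 + two_digit if two_digit < pivot else 1900 + two_digit
    elif 0 < policy < 100:
        latest = current_year + policy  # latest representable year
        candidate = (latest // 100) * 100 + two_digit
        return candidate if candidate <= latest else candidate - 100
    raise ValueError("policy outside the documented ranges")
```

With the fixed policy 1905, the year 04 expands to 2004 and 05 to 1905, matching the example in the text; with the default sliding policy of 5 and a current year of 2008, 13 expands to 2013 but 14 to 1914.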
NAV_UTIL editor: Enter the text editor to use when you use NAV_UTIL EDIT. The default is the native text editor for the operating system.
Journal file name: Enter the full path to the journal file (including the file name) for use with CDC on DISAM. The default journal file is located in the DEF directory of the AIS installation.
Cache buffer size: Enter the number of bytes to be used for a memory buffer on a client machine, which is used by the AIS client/server to store read-ahead data. The default is 200000.
Language
The Language section lets you set the default language for applications in the binding.
To set the default language for the binding
From the Language list, select the National Language Support (NLS) supported language to use in this binding. Valid values are:
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simple Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)
From the Codepage list, select the codepage that you want to use with this language. The code pages available are determined by the Language that is selected. If you have additional code pages available, you can manually enter them in this field. Note: If you change the language, the code page will also change. Check to be sure that you want to use the selected code page with the language you selected. If no codepage is selected, the default codepage for the selected language is used.
From the NLS string list, select the NLS string for this language and code page. The NLS strings available are determined by the code page that is selected. If you have additional NLS strings available, you can manually enter them in this field. The codepage is used by a field with a data type defined as nlsString. This parameter is used for a field with a codepage that is different from the machine's codepage. This property includes the following values:
The name of the codepage.
Whether the character set reads from right to left (as in middle-eastern character sets).
Modeling
The modeling section lets you define how to handle nonrelational data and arrays. For more information, see Handling Arrays.
Array metadata model: Select the virtual array flattening model for the binding. You can select one of the models below:
virtualarrayTables: In this model, a virtual table is generated for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. This is the default for all cases except CDC.
virtualarrayViews: In this model, the parent field is replaced with a unique key. Virtual views use the same metadata as virtual tables. This is the default model for CDC.
Reduce sequential flattening: Select this for sequentially flattened tables to not return a row that lists only the parent record, without the values of the child array columns. This option is available only if you select virtualarrayViews as your Array metadata model.
Reduce virtual views: Select this for virtual views to not return a row that lists only the parent record, without the values of the child array columns. By default the property is selected. Clear the check box to return the row. This option is available only if you select virtualarrayViews as your Array metadata model.
Generate unique index names: Select this to generate a unique name for every index on a table that is defined in a non-relational system. When this is selected, the names of all indexes on non-relational tables are exposed in the following format: table_name_KEYkey_number. For example, if a table is called X, the name of the first index would be X_KEY0.
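The generated index-name format can be expressed in one line. The helper below is hypothetical, not an AIS function; it merely restates the documented scheme.

```python
# Hypothetical illustration of the documented index-name format:
# table_name_KEYkey_number (e.g. the first index on table X is X_KEY0).
def unique_index_name(table_name, key_number):
    return f"{table_name}_KEY{key_number}"
```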
ODBC
The following list shows the properties for ODBC. The ODBC properties set the parameters used when using ODBC to work with AIS.
Maximum active connections: Enter the maximum number of connections that an ODBC or OLE DB application can make through AIS. The default is 0. The greater the number of connections possible, the faster the application can run. However, other applications will run slower and each connection is counted as a license, restricting the total number of Users who can access data through AIS concurrently. For example, this is true when using MS Access as a front-end, because MS Access allocates more than one connection whenever possible.
Maximum active statements: Enter the value returned for the InfoType of the ODBC SQLGetInfo API. The default (0) means that there is no limit on the number of active statements.
Force qualify tables: Select this to report the catalog and table name together as a single string (in this format: DS:table_name).
OLE DB
The following list shows the properties for OLE DB. The OLE DB properties set the parameters used when using OLE DB to work with AIS.
Trace: Select this to write the trace information used when working with OLE DB providers to the log. If not selected, only error messages are written to the log. Note: Changing the default setting can degrade AIS performance.
Suppress chapters: Select this to suppress chapters in ODBC. In this case, chapters are exposed as regular columns. Use this property to prevent an SQL server error for a bad datatype.
Maximum row handles: Enter the maximum number of hrows (row handles) that can reside in memory at one time under OLE DB. This property is related to the ADO CacheSize property. Set both of these properties to the same value. If the two properties are different, the smaller value is used and the other is ignored.
OLE threads: Enter the number of open threads allowed when working with OLE transactions. These threads are used for operations received from the MSDTC. The minimum value is 5, the optimum value is 15, and the maximum value is 25.
Optimizer
The following list shows the optimizer properties. The optimizer properties control how the query optimizer works in the binding.
Avoid scan: Select this to force the optimizer not to choose the scan strategy if a different strategy can be used.
Disable multi-index: Select this to disable the multi-index storage. For Adabas only.
Disable cache without index: Select this to disable non-index caches.
Disable flattener: Select this to instruct the query optimizer not to flatten queries, including nested queries.
Disable hash join: Select this to disable hash join optimization. When hash joins are enabled, a significant amount of disk space is required (see Hash maximum disk space). If the system does not have available disk space, use this option to disable hash join optimization.
Disable index cache: Select this to disable index caching.
Disable lookup cache: Select this to disable the lookup cache.
Encourage lookup cache: Select this to define the optimizer to ignore the cache buffer size restriction (hashBufferSize) on a group consisting of a single table.
Disable pass thru: Select this to force the optimizer to execute a full optimization for the query even if it can be delegated to the relational backend database as is.
Disable Tdp union: Select this to define the optimizer to handle separate data sources as if they are on different backends, even if they are part of the same database or remote machine.
Disable subquery cache: Select this to disable the cache for subqueries.
Analyzer query plan: Select this to write the Query Optimizer plan to a plan file for analysis using the AIS Query Analyzer.
Optimization goal: Select the optimization policy to use from the list. The following policies are available:
none: All row optimization is used. This is the default value.
first: First row optimization is performed based on the assumption that the results produced by the query are used as the rows are retrieved. The query optimizer uses a strategy that retrieves the first rows as fast as possible, which might result in a slower overall time to retrieve all the rows.
all: Optimization is performed based on the assumption that the results produced by the query are used after all the rows have been retrieved. The query optimizer uses a strategy that retrieves all the rows as fast as possible, which might result in a slower time to retrieve the first few rows.
Note: Aggregate queries automatically use all row optimization, regardless of the value of this parameter.
Hash maximum disk space: Enter the maximum amount of disk space (in MB) that a query can use for hash joins. The default is -1, which indicates no limit (all the free space on the allocated disk). If a query requires more space than allocated via this parameter, query execution stops. The minimum value for this parameter is 20 MB.
Note: Temporary files are written per query. Therefore, if several users can execute queries at the same time, adjust the amount of space available so that the total that can be allocated at any one time does not exceed the available space. HP NonStop: If AIS files written to disk are larger than 500 MB, use this parameter to enlarge the default size of the file that AIS opens. The default for this parameter on HP NonStop machines is 478.3 MB, which consists of a primary extent size of 20K, a secondary extent size of 1000, and a maximum of 500 extents.
Preferred site: Select the machine where you want to process the query. Normally the query is processed as close to the data source as possible (either using the query processing of the data source, or if this is not available, the Query Processor on the same machine as the data source). If a situation arises in which it is more efficient to process the query on the Client Machine (for example, when the remote machine is heavily overloaded), you can tune AIS to process all or part of the query locally. The extent that performance is improved by processing all or some of the query locally can be determined only on a trial and error basis. Consider the following points when processing the query locally:
Before adjusting this parameter, check the log to see if other tuning is more appropriate. The options are:
server (the default): The query is processed on the server.
nearServer: The query is processed mostly on the server, with parts of the query processed on the client (determined by the specific query).
nearClient: The query is processed mostly on the client, with parts of the query processed on the server (determined by the specific query).
client: The query is processed on the client.
Maximum tables in group:
Default row cardinality: Enter the default row cardinality. If set to a value other than 0, this value is used by the optimizer as the default number of rows for pass-through queries and for tables that have no statistical information.
Maximum groups for reorder:
No LOJ delegation: Select this if you do not want to delegate LEFT OUTER JOIN queries to the relational backend database. AIS will attempt to suppress LOJ queries. If not selected, every query can be delegated as is.
Use recursive LOJ allocation: Select this to allow recursive optimization of queries that include LEFT OUTER JOINs. By default, this is selected. If you want to disable recursive LOJ delegation, clear the check box.
LOJ recursive optimization limit: Enter the limit for recursive LOJ optimization. Relevant only if Use recursive LOJ allocation is selected.
Disable semi-join: Select this to disable semi-join optimization.
Semi-join in values factor: Enter the number of parameters that a semi-join strategy sends to an RDBMS.
Disable order-by-index strategy: Select this to disable the order-by-index strategy. In this strategy, the order of the results is achieved by accessing the table through the index whose segments appear in the ORDER BY clause, so that the QP does not sort the results. If selected, this strategy is not used and sorting is done by the QP. Note that this strategy is rarely selected, and this property affects only queries with an ORDER BY and with no WHERE clause on indexes. All other cases use the same strategies as they do when this property is not selected.
Binding Configuration 3-19
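The Semi-join in values factor described above caps how many values the semi-join strategy places in a generated IN list. As a rough sketch of the idea (the helper names below are hypothetical; the statements AIS actually generates are internal to the product), join keys can be chunked into IN lists no larger than that factor:

```python
def semi_join_batches(keys, in_values_factor):
    """Split join-key values into IN-list batches no larger than
    in_values_factor; one statement is issued per batch (illustrative)."""
    batch = []
    for key in keys:
        batch.append(key)
        if len(batch) == in_values_factor:
            yield batch
            batch = []
    if batch:
        yield batch

def build_query(table, column, batch):
    # Placeholders keep the statement parameterized for the RDBMS.
    placeholders = ", ".join("?" for _ in batch)
    return f"SELECT * FROM {table} WHERE {column} IN ({placeholders})"

batches = list(semi_join_batches(range(7), 3))
# With a factor of 3, seven keys yield batches of 3, 3, and 1.
print(build_query("orders", "cust_id", batches[0]))
# SELECT * FROM orders WHERE cust_id IN (?, ?, ?)
```

A larger factor means fewer round trips to the RDBMS but longer IN lists per statement.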
Parallel Processing
The following list shows the parallel processing properties. The parallel processing properties control how parallel processes are handled in the binding.
Disable threads: Select this to disable multi-threading. If this is selected, the following properties are disabled.
Disable threaded read ahead (QP): Select this to disable read-ahead functionality.
Disable query read ahead (QP): Select this to disable read-ahead functionality for components using Query Processor services.
ODBC async execution: Select this to enable ODBC asynchronous execution.
Disable QP parallel execution: Select this to disable parallel processing for query execution. This option is available only if both Disable threaded read ahead (QP) and Disable query read ahead (QP) are not selected.
Hash parallelism: Select this to read both sides of hash joins at the same time. By default, this property is selected. If you do not want this behavior, clear the check box.
Query Processor
The following list shows the query processor properties. The query processor properties control how the query processor processes requests in the binding.
Disable command reuse: Select this to prevent the Query Processor from caching the executed state of a query for reuse.
Disable DS property cache: Select this to disable caching of data source properties.
Disable insert parameterization: Select this to disable parameterization of constants in INSERT statements.
Disable metadata caching: Select this to disable caching of object metadata. If this is selected, the object metadata is taken from the original data source instead of the cache.
Disable query parameterization: Select this to prevent constants from being converted into parameters when accessing data sources.
Disable row mark field fetch: Select this for OLE DB getRows errors to be marked and reshown on every getRows, if the rowset is active.
Compile after load: Select this to always compile an AIS procedure or view after it is read.
Ignore segments bind failure: This property determines how AIS responds when the execution of one of the segments of a segmented data source fails. Select this to log a message and continue execution (the default), or clear the check box to log a message and stop execution.
Prompt database-user password: Select this to configure AIS to prompt the user for security information when accessing a data source.
Use alternate qualifier: Select this to use the @ symbol instead of a colon (:) when connecting to multiple data sources. Note: Use this value when building an application with PowerBuilder from Sybase Inc. or Genio from Hummingbird Ltd.
Use table filter expression: Select this to enable the use of tables that have filter expressions specified in their metadata. For details of filters in ADD, see The <table> Statement for information on the filter property, or see the Metadata General Tab for information on using the filter expression in Attunity Studio.
Write empty string as null: Select this to replace empty strings in a SET clause of an UPDATE statement, or in a VALUES list of an INSERT statement, with null values.
Optimistic for update: Select this to use optimistic locking as the default locking behavior on queries with a FOR UPDATE clause.
Disable compilation cache: Select this to disable saving successfully compiled statements in the cache.
Maximum SQL cache: Enter the maximum number of SQL queries that can be stored in cache memory. This property's value is ignored if Disable compilation cache is selected. The default is 3.
First tree extensions: Enter the maximum size allowed for an SQL query after compilation. The default is 150.
Maximum columns in parsing: Enter the maximum number of columns that a query can reference. The default is 500.
Maximum segmented database threads: Enter the maximum number of open threads allowed when working with segmented databases.
Minimum number of parameters allocated: Enter the minimum number of parameters that can be used in a query.
Continuous query retry interval: Enter the number of seconds that the query processor waits before executing a query again when no records are returned. The default is 2.
Continuous query timeout: Enter the number of seconds that the query processor continues to issue queries when no records are returned. The default is 3600 (one hour), which indicates that after an hour without new messages the continuous query ends. Enter 0 to indicate that there is no timeout and the continuous query does not end automatically.
Continuous query prefix: Enter a prefix to replace the $$ prefix that is used to identify the continuous query special columns. For example, if you enter ##, then the continuous query alias is ##StreamPosition and the control command alias is ##ControlCommand.
Arithmetic fixed precision: Enter an integer that determines the precision scale factor for floating-point decimal arithmetic. The default is 0, which indicates that the exact arithmetic function is not used. When the value is set to a small positive integer, special precise floating-point arithmetic is used in the query processor. The value determines the precision scale factor (for example, a value of 2 indicates two digits of decimal precision). This parameter can be set at the workspace level, where it affects all queries running in that workspace with no change to the query or to the underlying data source. The query processor ADD(), SUBTRACT(), and SUM() functions, which otherwise use double arithmetic for both floating and decimal types, then use this logic. This property sets the Exact Arithmetic function: the qpArithmeticFixedPrecision property is an integer value that determines the fixed precision the AIS query processor uses for precise floating-point arithmetic, producing an accurate result when using the SUM function. Because floating-point datatypes are not exact, their accumulated results over time do not correspond to the expected arithmetic sum. In other words, in the floating-point representation, values such as 0.7 cannot be represented precisely. If there are eight precision digits, there is usually imprecision in the least significant digit, so the number is actually approximately 0.699999995. The qpArithmeticFixedPrecision property corrects this imprecision by using exact floating-point arithmetic.
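The imprecision described above is easy to reproduce. The short demonstration below (Python here, standing in only as an illustration of the general floating-point behavior, not of AIS internals) shows the drift and how rounding to a fixed number of decimal digits hides it:

```python
from decimal import Decimal

# The double nearest to 0.7 is not exactly 0.7:
print(Decimal(0.7))   # prints a long expansion beginning 0.6999999999999999

# so plain float arithmetic drifts away from the exact decimal result:
print(0.1 + 0.2)              # 0.30000000000000004
print(0.1 + 0.2 == 0.3)       # False

# Exact decimal arithmetic behaves as expected:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

# Keeping a fixed precision of two decimal digits (analogous to
# setting the precision scale factor to 2) hides the drift:
print(round(0.1 + 0.2, 2))    # 0.3
```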
Parser depth: Enter the maximum depth of the expression tree. The default is 500.
Token size: Enter the maximum length of a string in an SQL query. The minimum value is 64; the default is 350.
Insert from select commit rate: Enter the commit rate to use when executing an INSERT-FROM-SELECT operation. If a value greater than 0 is entered, a commit is performed automatically after inserting the indicated number of rows. For example, if the value is 5, a commit is performed every time 5 rows are inserted.
Disable SQS cache: Select this to always read compiled AIS procedures and views from disk. In this case, they are not saved in the cache.
Procedures cache size: Enter the number of AIS stored queries created with a CREATE PROCEDURE statement that can be kept in cache memory. This property's value is ignored if Disable SQS cache is selected.
Expose XML fields: Select this to display data returned for a query as XML, representing the true structure of the result. This is useful when querying a data source table that contains arrays or variants. For additional information, see the SELECT XML Statement.
XML field name: Enter the name used in a query to indicate that the data is returned as XML, instead of the keyword XML. This is available only if Expose XML fields is selected.
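Among the properties above, the continuous query retry interval and timeout interact as a simple poll-with-timeout loop. A minimal sketch of that behavior follows (fetch_rows is a hypothetical stand-in for the actual query execution, which this sketch does not model):

```python
import time

def run_continuous_query(fetch_rows, retry_interval=2, timeout=3600):
    """Re-issue a query while it returns no rows, waiting retry_interval
    seconds between attempts; give up after timeout seconds with no new
    records. A timeout of 0 means the query never ends automatically."""
    deadline = None if timeout == 0 else time.monotonic() + timeout
    while True:
        rows = fetch_rows()
        if rows:
            return rows
        if deadline is not None and time.monotonic() >= deadline:
            return []          # timed out with no new messages
        time.sleep(retry_interval)

# Example: a source that produces data on the third poll.
attempts = iter([[], [], ["row1"]])
print(run_continuous_query(lambda: next(attempts), retry_interval=0))
# ['row1']
```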
Temp Features
The temp features section lets you add temporary properties to the binding. These properties may be defined in the AIS documentation or release notes, or you can define any additional property. The temporary property is added to the [Link] file, which defines the binding environment. To set a temporary feature, enter an ID for the feature. This is the name given to the environment property that configures the feature.
Transaction
The following list shows the transaction properties. The transaction properties control how transactions are handled in the binding.
Commit on destroy: Select this to commit all single-phase commit transactions opened for a data source if a connection closes while the transaction is still open.
Disable 2PC: Select this to disable two-phase commit capabilities, even in drivers that support two-phase commit.
Use commit confirm table: Select this to use the commit-confirm table for data sources that support single-phase commit.
Recovery delay: Enter the number of minutes from the start of a transaction before any recovery operation on that transaction can be attempted. The default is 15.
Time limit: Enter the time to wait for a transaction to complete before an error is returned. This parameter is also used when executing a RECOVERY, where it indicates the number of minutes to wait after the last transaction activity before a forced activity can be executed.
Conversions: You can select one of the following:
No conversion: Select this if you want all transactions to remain as sent. This is selected by default.
Convert all to distributed: Select this to convert all simple transactions into distributed transactions.
Convert all to simple: Select this to convert all distributed transactions into simple transactions.
Tuning
The following list shows the tuning properties. The tuning properties are set to increase system efficiency in the binding.
Dsm maximum buffer size: Enter the maximum size of a cache memory. This cache is used when memory is required on a temporary basis (as when AIS sorts data for a query output, for a subquery, or for aggregate queries). This cache size is not used for hash joins and lookup joins (see also Hash buffer size). The default is 1000000.
Dsm maximum hash file size:
Dsm maximum sort buffer size: Enter the maximum size of the sort buffers. Use this parameter instead of Dsm maximum buffer size for sorts only. The default is 1000000.
Dsm middle buffer size: Enter the maximum number of bytes for the index cache. This cache is not used for hash joins and lookup joins. The default is 1000000.
File pool size: Enter the maximum number of files that can be opened in the file pool. The default is 10.
File pool size per file: Enter the size of each file in the pool. The default is 3.
File close on transaction: Select this if you want the file pool to close when a transaction is committed.
Use global file pool: Select this to use a global file pool. When the workspace server mode parameter is set to multiClient or reusable, this parameter also indicates whether the file pool closes upon client disconnection. See Server Mode.
Hash buffer size: Enter the number of bytes of cache memory that is available for each hash join or lookup join. The default is 1000000.
Hash max open files: Enter the maximum number of files that a query can open at one time for use when performing hash joins. The number assigned to this parameter must not exceed the system maximum. The default is 90. Note: The hash join optimization strategy results in a number of files being opened to perform the join. The larger the table size, the more files are opened. By adjusting this parameter you can disable hash joins on very large tables, while allowing hash joins for small tables. (See Disable hash join for information on disabling hash optimization for all table joins.)
Hash primary extent size: Enter the primary extent size (MVS and Tandem only).
Hash secondary extent size: Enter the secondary extent size (MVS and Tandem only).
Hash max extents: Enter the maximum number of extents (Tandem only).
Hash enable RO: Select this for the QP to store the first hash bucket in memory instead of a sequential file.
Hash refresh EoF: Select this to refresh EoF requests (Tandem only).
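The hash-join settings above govern a classic build/probe hash join: the smaller input is loaded into an in-memory hash table (bounded by Hash buffer size), and the larger input is streamed against it; overflow spills to the temporary files governed by Hash max open files and Hash maximum disk space. A minimal in-memory sketch of the technique, without modeling the spill-to-disk path:

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """In-memory build/probe hash join (spill-to-disk not modeled)."""
    table = {}
    for row in build_rows:                 # build phase: smaller input
        table.setdefault(row[build_key], []).append(row)
    joined = []
    for row in probe_rows:                 # probe phase: larger input
        for match in table.get(row[probe_key], []):
            joined.append({**match, **row})
    return joined

orders = [{"cust": 1, "item": "disk"}, {"cust": 2, "item": "tape"}]
custs = [{"cust": 1, "name": "ACME"}]
print(hash_join(custs, orders, "cust", "cust"))
# [{'cust': 1, 'name': 'ACME', 'item': 'disk'}]
```

When the build side does not fit the hash buffer, real implementations partition both inputs into files and join partition pairs, which is why larger tables open more files.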
XML
The following list shows the XML properties. The XML properties control how XML files are handled in the binding.
COM maximum XML in memory: The maximum size, in bytes, for an XML document held in memory. The default is 524288.
COM maximum XML size: The maximum size of an XML document passed to another machine. The default is 524288. Note: When you increase the value of this property, you may also need to increase the value of the maxXmlSize property in the daemon. For more information on daemons, see Setting up Daemons.
COM XML transport buffer size: Enter the maximum size of the internal communications buffer. The default value (-1) indicates there is no size limit. XML date format: Enter the date format to use for XML. The options are:
ISO (the default): The date format is YY-MM-DDThh:mm:ss[.ss..]
ODBC: The date format is YYYY-MM-DD HH:MM:SS[.NNN...]
Replace invalid XML characters: Select this to replace invalid XML characters with a ?. This is used for diagnostic and troubleshooting purposes.
XML trim character columns: Select this to enable padded spaces to be trimmed from XML string columns when the record format is fixed. By default this is selected, and padded spaces are trimmed for fixed-size character columns. If you do not want this behavior, clear this check box.
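The two XML date formats listed above differ mainly in the separator between the date and time parts. A short illustration of formatting the same timestamp both ways (rendered here with a four-digit year for both, although the manual's ISO pattern shows a two-digit year):

```python
from datetime import datetime

ts = datetime(2008, 3, 31, 14, 5, 9)

# ISO style: 'T' separates date and time
iso = ts.strftime("%Y-%m-%dT%H:%M:%S")
# ODBC style: a space separates date and time
odbc = ts.strftime("%Y-%m-%d %H:%M:%S")

print(iso)    # 2008-03-31T14:05:09
print(odbc)   # 2008-03-31 14:05:09
```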
Parameter Name
adminTrace
basedDate: This property is used to customize the based_date data type, which has the form basedDate="yyyymmdd[/dttype[/dtlen[/multiplier]]]" where:
yyyymmdd: Start date
dttype: The name of the data type (int4 is the default)
dtlen: The number of digits in the data type (if not atomic)
multiplier: The number of increments per day
Example: 19700101/int4//24 indicates the number of hours since January 1, 1970.
basedDateNullability: The default is true. If true, when the based_date value is 0 it is considered null. If false, the type is not nullable.
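The example above (19700101/int4//24, hours since January 1, 1970) can be checked with a short computation. decode_based_date below is a hypothetical helper written for illustration; it is not part of AIS:

```python
from datetime import datetime, timedelta

def decode_based_date(value, start="19700101", multiplier=1):
    """Interpret an integer as `multiplier` increments per day,
    counted from the start date (hypothetical helper)."""
    base = datetime.strptime(start, "%Y%m%d")
    return base + timedelta(days=value / multiplier)

# With basedDate="19700101/int4//24", a stored value of 24 means
# 24 hours (one day) after the start date:
print(decode_based_date(24, "19700101", 24))   # 1970-01-02 00:00:00
```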
Languages
This section provides additional, detailed information regarding each language supported. The languages supported are:
ARA (Arabic)
ENG (English)
FR (French)
GER (German)
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simplified Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)
ARA (Arabic)
If the codepage parameter is blank, the default codepage on all supported platforms is AR8ISO8859P6, with the exception of HP NonStop platforms, where the default codepage is ARCII. The Windows codepage is 1256.
ENG (English)
The default. If the codepage parameter is blank, the default codepage on all supported platforms is ASCII, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is EBCDIC. The Windows codepage is 1252.
FR (French)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.
GER (German)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.
GREEK (Greek)
If the codepage parameter is blank, the default codepage on all supported platforms is WEISO8859P7, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC875. The Windows codepage is 1253.
HEB (Hebrew)
If the codepage parameter is blank, the default codepage on all supported platforms is IW8ISO8859P8, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is IW8EBCDIC424. The Windows codepage is 1255.
JPN (Japanese)
If the codepage parameter is blank, the following are the default codepages on the supported platforms:
Platform: Default Japanese Codepage
HP NonStop: JA16SJIS
IBM z/OS: JA16DBCS
IBM OS/400: JA16DBCS
OpenVMS: JA16VMS
UNIX (excluding Sun Solaris): JA16SJIS
UNIX Sun Solaris: JA16EUC
Windows: JA16SJIS
KOR (Korean)
If the codepage parameter is blank, the default codepage on all supported platforms is KO16OSC5601, with the exception of Windows platforms, where the default codepage is KO16MS949 (949), and IBM OS/400 and z/OS platforms, where the default codepage is KO16DBCS.
SPA (Spanish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDICLATIN. The Windows codepage is 1252.
TUR (Turkish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P9 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDIC1026. The Windows codepage is 1254.
Note:
The XML representation of the environment properties are displayed in Attunity Studio in the XML editor. To view the XML, right-click the binding you are working with and select Open as XML.
4
Setting up Daemons
This section contains the following topics:
Daemons
Defining Daemons at Design Time
Reloading Daemon Configurations at Runtime
Checking the Daemon Status
Starting and Stopping Daemons
Adding and Editing Workspaces
Daemons
Daemons manage communication between machines running AIS. The daemon is responsible for allocating Attunity server processes to clients. A daemon runs on every machine running AIS. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides the clients with the required servers. When a client requests a connection, the daemon allocates a server process to handle this connection, and refers the client to the allocated process.
Note:
The configuration supplied with the product installation includes the default IRPCD daemon. This configuration is used when no other daemon is configured to access the machine that is requested by a client machine.
For more information, see AIS Runtime Tasks from the Command Line.
Setting up Daemons
4-1
Adding a Daemon
The following describes how to add a new daemon to your system using Attunity Studio. When you add a new daemon in the Design perspective, you use standard configuration information or copy the configuration information from another daemon. When you edit the daemon, you can make custom changes to its configuration.
To add a new daemon:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the daemon.
3. Right-click the Daemons folder and select New Daemon. The New Daemon dialog box opens.
4. Enter a name for the new daemon.
5. Select one of the following:
Create empty daemon with default values
Copy properties from another daemon
If you choose to copy the properties of an existing daemon, click Browse and select the daemon whose properties you want to copy.
6. Click Finish. The Daemon editor opens on the right of the workbench. This editor contains three tabs, which are described in the Editing a Daemon section below.
Notes:
A machine can have more than one daemon running at the same time, each on its own port. You can add a new daemon configuration in offline design mode on a design machine and later drag-and-drop the daemon configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions. The daemon editor may contain four additional tabs for workspace information. To display both the daemon and workspace configuration, right-click a workspace under the daemon and select Edit Workspace. For a description of these tabs, see Adding and Editing Workspaces.
Editing a Daemon
You can edit the information in the following Daemon editor tabs:
Control: In this tab you enter general details about the server timeout parameters and monitoring rules.
Logging: In this tab you enter the logging details, such as the log file format and location, and the parameters to log and trace.
Security: In this tab you enter the daemon's administrative privileges and access privileges.
To open the daemon editor:
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon you want to edit.
2. Expand the daemon folder.
3. Right-click the daemon you want to edit and select Open. The Daemon editor opens on the right of the workbench. Click each tab to edit the information. The tab fields are described below.
Note:
Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.
Control
The Control tab for the daemon lets you define general daemon control properties. The following figure shows the Daemon control tab:
From: Enter the lowest-numbered port in the range.
To: Enter the highest-numbered port in the range.
The daemon restarts automatically if it fails for any reason (any error that causes the daemon process to terminate, such as the network process being lost, or the CPU running the daemon crashing when a backup daemon is defined on another CPU). All available and unconnected servers are terminated, and any connected servers are marked and terminated on release. The backup daemon also starts a backup for itself. The backup appends a new log file to the log of the original daemon, adding a line indicating that a backup daemon was started.
Default language: The language that the daemon supports. This setting is used when working with a client with a codepage different from the server codepage. See Basic NLS Settings.
Table 4-1 (Cont.) Daemon Control tab
Maximum XML request size: The maximum number of bytes that the daemon handles for an XML document.
Maximum XML in memory: The maximum amount of space reserved for the XML in memory.
Timeout parameters:
Call timeout: The timeout period for short calls for all daemons. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds. Note: Specifying the timeout in a workspace overrides the value set in this field for that workspace.
Connect timeout: The time the client waits for a daemon server to start. If the daemon server does not start within this period, the client is notified that the server did not respond. The value specified for this parameter serves as the default timeout for all the workspaces listed in the daemon configuration. The default value for this parameter is 60 seconds. Notes: Entering the timeout in a workspace overrides the value set in this field for that workspace. Even if the XML source does not list this parameter in the workspace section, the workspace gets it using the default value. If you want to prevent a workspace from using the default value, enter a value of zero for this parameter in the workspace section.
The maximum amount of time any daemon client may be idle before the connection with the server is closed. Note: Entering the timeout in a workspace overrides this setting for that workspace.
Logging
You can set up daemon logging for the following:
The daemon log records daemon operations, such as RPC calls and error messages. You can select the type of information you want included in the log file.
Note:
This section does not apply to z/OS platforms, where logging information is written directly to the process.
In the Daemon Logging tab, define the daemon log file settings: the log file structure and the location where the log is saved. You can also define the data that is logged and traced in the file.
Note:
Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.
The following table describes the fields in the Daemon Logging tab.
Table 4-2 Daemon Logging tab
Logging options:
Daemon log file location: Enter how the daemon produces its log data. The full path must be specified. You can use wildcards as part of this file to indicate specific information.
Table 4-2 (Cont.) Daemon Logging tab
Server log filename format: Defines the name and location of the server log file. The field must specify the full path name. If no directory information is provided for the log file, it is located in the login directory of the account running an AIS workstation. You can enter the following wildcards in this field to generate the following information:
%A: workspace name
%D: date (yymmdd)
%I: instance number of the given workspace server
%L: server account's login directory
%P: server's process ID
%T: time (hhmmss)
%U: server's account name (username)
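As a rough model of how such a pattern expands, the sketch below substitutes each wildcard (expand_log_name is a hypothetical stand-in for the daemon's own substitution, and the example values are made up):

```python
import datetime

def expand_log_name(pattern, workspace, instance, pid, user, login_dir):
    """Expand the server-log wildcards described above
    (hypothetical helper; %D and %T use the current date/time)."""
    now = datetime.datetime.now()
    subs = {
        "%A": workspace,
        "%D": now.strftime("%y%m%d"),
        "%I": str(instance),
        "%L": login_dir,
        "%P": str(pid),
        "%T": now.strftime("%H%M%S"),
        "%U": user,
    }
    for key, value in subs.items():
        pattern = pattern.replace(key, value)
    return pattern

print(expand_log_name("%L/%A_%I.log", "Navigator", 3,
                      4711, "report", "/users/nav"))
# /users/nav/Navigator_3.log
```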
Daemon operations:
Daemon RPC function calls: Select this if you want to log all daemon RPC function calls.
Log ACX: Select this if you want to log requests and processes.
Trace and debug options:
Extended RPC trace: Generates a verbose message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.
System trace: Generates system-specific tracing of various operations.
Timing: Generates a timestamp for every entry to the server log file.
Sockets: Generates a message in the server log file for each socket operation.
Trace information: Select this if you want to log low-level RPC operations.
No timeout: Disables the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.
Call trace: Generates a message in the server log file for each RPC function called. This is useful for troubleshooting the server.
RPC trace: Enables debugging messages on the server.
Binary XML log level: Sets the binary XML log level. Your options are:
Notes:
AIS supports a subset of UNIX commands that enables file manipulation using standard UNIX commands on the AS/400. Log files are stored under the UNIX file system section of the AS/400 machine. The following types of files are stored under the native OS/400 file system:
When manipulating configuration files or log files, wrap the path/filename in single quotation marks to ensure that the slash (/) used in the UNIX file system syntax is handled correctly. (The slash is an OS/400 special character.)
Security
The Security tab for daemons is used to:
Grant administration rights for the daemon. Determine access to the computer.
Note:
Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.
The following table describes the fields in the Daemon Security tab:
Table 43 Field Administrators privileges All users Daemon Security Tab Description Identifies the users (accounts) allowed to perform administrative tasks (tasks that require administrative login). Enables all users to access the daemon and change the settings. When this is selected, add the names of users (accounts) and groups that can be workspace administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is not in the list, any user who has logged on to the daemon can administer the workspace Selected users only Identifies the names of users (accounts) and groups that can be administrators.1 If a user is not specified, the account from which the daemon was started is the administrator account. The daemon does not require the user to log in to the account on the system, but to log in to the daemon using the account name and password. Machine access Allow anonymous login Manages access to the computer. Indicates whether workspaces allow anonymous logins (without user name/password entries). For the optimal level of security, do not select this option and define a username for the Daemon Administrators parameter. If unchecked, then no workspace can have an anonymous client. If checked, then a particular workspace allows anonymous clients. Enables login passwords to be cached. This enhances performance by reducing login times for future connections from the same client in a session. Indicates the encryption method used to send information across the network. The default is an asterisk (*), meaning that all methods are acceptable. If an encryption method is specified, it must be used. Currently, AIS supports the RC4 and DES3 protocols. Enter the authentication domain name.
The name is prefixed with @ to utilize the operating system GROUP feature.
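These tab settings are stored in the daemon configuration XML. As a sketch, using only the anonymousClientAllowed and administrator attributes that appear in the sample configuration later in this chapter (the account and group names are illustrative, and whether multiple administrators are comma-separated here is an assumption):

```xml
<!-- Daemon-level security settings; "sysadmin" and "@operators" are
     illustrative account/group names (@ marks an operating system group) -->
<security anonymousClientAllowed="false"
          administrator="sysadmin,@operators" />
```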
1. Click Add user and enter the name of a valid user in the Add user dialog box. Make sure that the name entered matches a valid user account.
2. To add groups to the list, click Add group and enter the name of a valid group in the Add group dialog box. Make sure that the name entered matches a valid group account.
3. Click OK. The name of the user or group is added to the field.
To rename a user or group
1. Select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group dialog box to the name you want to use.
3. Click OK. The changes are entered in the field.
To remove a user or group
1. Select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.
2. Right-click the daemon and select Status. A dialog box with the daemon status is displayed. The daemon status contains the following information:
Daemon platform
IRPCD process ID
IRPCD log file
IRPCD configuration
Logging detail
Number of logins
Number of active daemon clients
Number of active client sessions
Max. number of concurrent client sessions
1. In the Runtime Explorer view, expand the Daemon folder.
2. Right-click the daemon you want to shut down and select Shutdown Daemon.
        serverMode="reusable"
        serverLogFile="%[Link]%i"
        reuseLimit="20"
        nAvailableServers="10"
        minNAvailableServers="4"
        anonymousClientAllowed="false"
        administrator="*" />
    <workspace name="ACMEReportingServer"
        workspaceAccount="report"
        startupScript="machine_dependent"
        serverMode="singleClient"
        serverLogFile="%[Link]%i"
        nAvailableServers="3"
        minNAvailableServers="1"
        anonymousClientAllowed="false"
        administrator="*" />
  </workspaces>
  <control serverLogFile="/users/nav/%A%U%[Link]" />
  <security anonymousClientAllowed="false" administrator="sysadmin" />
  <logging logFile="[Link]" logClientDomain="0" detail="errors" />
</daemon>
</daemons>
Note: You can view the daemon configuration in Attunity Studio by editing the daemon and selecting the Source tab.
Adding a Workspace
When you define a new workspace, you can copy the values of an existing workspace on the same daemon or have AIS set its default values.

To add a new workspace
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon where you want to add the workspace.
3. Expand the daemon folder.
4. Right-click the daemon where you want to add the workspace and select New Workspace.
The startupScript values in these examples are machine-dependent. For example, for z/OS the startup script might be startupScript=[Link], for OpenVMS startupScript="dka0:[[Link]]NAV_SERVER.COM", and for Windows startupScript="nav_util svc".
Note:
You can add a new daemon configuration in offline design mode, in a design machine and later drag-and-drop the daemon configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.
5. Name: The name used to identify the workspace. The workspace name is made up of letters, digits, underscores (_), or hyphens (-).
Note: On machines running HP NonStop or z/OS, limit the name of a workspace to five characters so that the name of the system environment file, workspaceDEF, does not exceed eight characters. Workspace names longer than five characters are truncated to five characters; the default workspace, Navigator, looks for a system environment called NavigDEF.
6. Select one of the following:
Create empty workspace with default values
Copy properties from another workspace
If you copy the properties from another workspace, the fields below the selection become active. You must indicate the workspace from which you want to copy the properties. Enter the following information:
<name of the workspace> in <name of the daemon where the workspace is located> on <name of the machine where the daemon is located>. Alternatively, you can click the browse button and browse to select the workspace you want to use; the above information is then added automatically.
7. Click Next to open the Select Scenario window. Select the type of applications the daemon works with from the following options:
Application server using connection pooling.
Stand-alone applications that connect and disconnect frequently.
Applications that require long connections, such as reporting programs and bulk extractors.
Custom (configure manually). If you select this option, the Workspace editor opens.
8. Click Next to open the next window. Select one of the following. The options available depend on the scenario selected:

The minimum number of server instances available at any time: This is the minimum number of connections that are available at any time. If the number of available connections drops below this number, the system will create new connections. (Available if you select Stand-alone applications that connect and disconnect frequently.)

The maximum number of server instances available at any time: This is the maximum number of connections that are available at any time. If the number of connections used reaches this number, no additional server connections can be made. (Available if you select Stand-alone applications that connect and disconnect frequently.)

The average number of expected concurrent connections: This lets the system know how much the average load will be and helps to distribute the resources correctly. (Available if you select Application server using connection pooling or Stand-alone applications that connect and disconnect frequently.)

The maximum number of connections: This is the most connections that will be available. If the number of requests exceeds this number, an error message is displayed that informs the user to try again when a connection becomes available. (Available if you select Application server using connection pooling or Stand-alone applications that connect and disconnect frequently.)

How many connections you want to run concurrently: This sets the number of connections that will run at the same time. (Available if you select Applications that require long connections, such as reporting programs and bulk extractors.)
9. Click Next and enter the wait times for the following parameters. If your system is not overloaded, you can keep the default times.
How long to wait for a new connection: Enter the amount of time (in seconds) to wait for a connection to be established before the system times out. For example, if you want a wait time of one minute, enter 60 (the default). If you enter 0, the wait time is unlimited.

How long to wait for a response that is usually fast: Enter the time (in seconds) to wait for a response from the system before the system times out. For example, if you want to wait for one minute, enter 60. The default is 0, which indicates unlimited wait time.
10. Click Next and enter the workspace security information in this window. You can determine which users or groups can access the workspace you are defining. For more information, see Managing Security.
11. Click Next to open the summary window. Review the summary to be sure that all the information entered is correct. If you need to make any changes, click Back to go back to previous steps and change the information.
12. Click Finish to close the wizard and add the new workspace to the Configuration view.
Editing a Workspace
After you add a workspace, you can make changes to the workspace's configuration. You can edit the information in the following workspace editor tabs:

General: Specifies general information including the server type, the command procedure used to start the workspace, the binding configuration associated with this workspace (which dictates the data sources and applications that can be accessed), the timeout parameters, and logging information.

Server Mode: Contains the workspace server information, including features that control the operation of the servers started up by the workspace and allocated to clients.

Security: Contains administration privileges, user access, ports available for access to the workspace, and workspace account specifications.
To edit a workspace
1. Open Attunity Studio.
2. In the Design Perspective Configuration view, expand the Machines folder and then expand the machine where you want to edit the workspace.
3. Expand the daemon folder.
4. Expand the daemon with the workspace you want to edit.
5. Right-click the workspace you want to edit and select one of the following:

Workspace Setup Wizard: Opens the wizard that was used to add a new workspace (see Adding a Workspace). Make any required changes to the wizard settings to change the workspace definition.

Open: Opens the editor. The editor includes the information that was entered in the New Workspace wizard. Click the following tabs to edit the information:
General
Server Mode
Security
Note:
The default daemon configuration supplied with AIS includes the default Navigator Workspace. This workspace is automatically used if no workspace is selected.
General
You enter general information about the workspace operation in the General tab. This information includes the server type, the command procedure used to start the workspace, and the binding configuration associated with this workspace. The following figure shows the General tab:
Figure 46 The General Tab
Notes:
You can also change daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon.
Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.
For z/OS logging, the default is to write the log entries to the job only.
Workspace name: The name used to identify the workspace. Note: The default configuration includes the default Navigator workspace. This workspace is automatically used if a workspace is not specified as part of the connection settings.

Description: A description of the workspace.

Startup script: The full path name of the script that starts the workspace server processes. The script specified here must always activate the nav_login procedure and then run the server program (svc). If you do not specify the directory, the startup procedure is taken from the directory where the daemon resides. AIS includes a default startup script, which is recommended. On z/OS, enter the script name only, because the server is activated as a started task.

Server type: The workspace server type.
The name of a specific binding configuration on the server machine that you want to use with this workspace. Notes:
For HP NonStop, the name of the binding must be five characters or less.
For z/OS, the name of the binding must be five characters or less, and the name must be surrounded by single quotes. If the high-level qualifier is not specified here, [Link] is assumed, where NAVROOT is the high-level qualifier specified when Attunity Server is installed.
Enter the name of a virtual database that this workspace accesses, if applicable. A virtual database presents a limited view of the available data because only selected tables from one or more data sources are available, as if from a single data source. For more information, see Using a Virtual Database. If a value is entered in this field, only the virtual database can be accessed using this workspace. Note: Entering a value in this field restricts access from a JDBC Client Interface, ODBC Client Interface, or OLE DB (ADO) Client Interface during runtime.
Timeout parameters
The following properties define the time the client waits for the workspace server to start. If the workspace server does not start within this period, then the client is notified that the server did not respond. Entering a timeout value for these properties overrides the default setting entered in the Control tab. The maximum amount of time a workspace client can be idle before the connection with the server is closed. The time the client waits for a workspace server to start. If the workspace server does not start within this period, then the client is notified that the server did not respond.
Table 44 (Cont.) General Tab

Call timeout: The timeout period for short calls for all workspaces. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds. Note: Specifying the timeout in a workspace overrides the value set in the Call timeout field for the daemon configuration.

Logging and Trace Options

Specific log file format: Defines the name and location of the server log file if you want the data written to a file instead of SYSOUT for the server process. The parameter must specify the name and the high-level qualifier. You can enter the following wildcards in this field to generate the following information:
%A: workspace name
%D: date (yymmdd)
%I: instance number of the given workspace server
%L: server account's login directory
%P: server's process ID
%T: time (hhmmss)
%U: server's account name (username)
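For example, a value for this field might combine several of these wildcards; serverLogFile is the attribute used for this setting in the sample daemon configuration earlier in this chapter, and the path pattern below is illustrative:

```xml
<!-- Write each server's log to the server account's login directory (%L),
     named by workspace (%A), instance number (%I), and date (%D) -->
<workspace name="Navigator"
           serverLogFile="%L/nav_%A_%I_%D.log" />
```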
Logging
Specifies the type of tracing. The following tracing options are available:

No timeout: Select this to disable the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.
Call trace: Select this to generate a message in the server log file for each RPC function called. This is useful for troubleshooting the server.
RPC trace: Select this to enable debugging messages on the server.
Sockets: Select this to generate a message in the server log file for each socket operation. This is useful for troubleshooting client/server communication by providing a detailed trace of every client/server communication.
Extended RPC trace: Select this to generate a more detailed message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.
System trace: Select this to generate operating system-specific tracing.
Timing: Select this to generate a timestamp for every entry to the server log file.
Query governing restrictions

Max Number of Rows in a Table That Can Be Read: Select the maximum number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error.
Max Number of Rows Allowed in a Table Before Scan is Rejected: Select the maximum number of table rows that can be scanned. This parameter behaves differently for query optimization and execution.

For query optimization, the value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy).

For query execution, a scan is limited to the value set. When the number of rows scanned exceeds the number entered, the query returns an error.
Server Mode
You enter the features that control the operation of the servers started up by the workspace and allocated to clients in the Server Mode tab. For example, you can configure the Workspace to use connection pooling and to start up a number of servers for future use, prior to any client request, instead of starting each server when a request is received from a client.
Figure 47 The Server Mode Tab
Notes:
You can also change daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon.
Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.
The following table describes the fields in the Server Mode tab:

Table 45 Server Mode Tab

Workspace server mode: Specifies the type of new server processes that the daemon starts up. The daemon supports the following server modes:
singleClient: Each client receives a dedicated server process. The account in which a server process runs is determined either by the client login information or by the specific server workspace. This mode enables servers to run under a particular user account and isolates clients from each other, as each receives its own process. However, this server mode incurs a high overhead due to process startup times and can use a lot of server resources as it requires as many server processes as concurrent clients.
multiClient: Clients share a server process and are processed serially. This mode has low overhead because the server processes are already initialized. However, because clients share the same process, they can impact one another, especially if they issue lengthy queries. The number of clients that share a process is determined by the Clients per server limit field. Notes: This mode is not available on HP NonStop machines. Do not use this server mode when accessing a database that supports two-phase commit through XA.
multiThreaded (Windows only): Clients are allocated a dedicated thread in a shared server process. This mode has low overhead because the servers are already initialized. However, because clients share the same process, they may impact one another, especially if the underlying database is not multi-threaded. The number of multi-threaded clients that share a process is set in the Clients per server limit field (the maximum number of concurrent clients a server process for the current workspace accepts) in the Attunity Studio Design perspective configuration tab. This value is set in the daemon configuration settings maxNClientsPerServer parameter. Notes: Multiple multi-client and multi-threaded servers can be started at the same time for optimal performance. Do not use this server mode when accessing a database that supports two-phase commit through XA.
reusable: An extension of single-client mode. Once the client processing finishes, the server process does not die and can be used by another client, reducing startup times and application startup overhead. This mode does not have the high overhead of single-client mode because the servers are already initialized. However, this server mode can use a lot of server resources, as it requires as many server processes as concurrent clients. Do not use this mode with a database that supports two-phase commit through XA; in this case, define a new workspace for that data source, so that all the other data sources you are accessing can use reusable servers. The other modes can be set so that the server processes are reusable. The number of times a process can be reused is controlled by the Reuse limit field value in Attunity Studio (the maximum number of times a server process can be reused, or how many clients it can serve before it finishes). Reuse of servers enhances performance because it eliminates the need to repeat initializations. However, reuse runs a risk of using more memory over time. The default for the Reuse limit field value is 0, which means that there is no limit.
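In the daemon configuration XML, these settings appear as workspace attributes, as in the sample configuration shown earlier in this chapter (the workspace name and values here are illustrative):

```xml
<!-- A reusable workspace whose server processes are retired after
     serving 20 clients; reuseLimit="0" would mean no limit -->
<workspace name="Navigator"
           serverMode="reusable"
           reuseLimit="20" />
```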
Port range: Select the range of specific firewall ports through which you access the workspace. This determines the range of ports available for this workspace when starting server processes. Use this option when you want to control the port number, so that Attunity Connect can be accessed through a firewall. Enter the port range in the following fields:
From: Enter the highest numbered port in the range.
To: Enter the lowest numbered port in the range.
Alternatively, select the option to use the port range that is defined in the daemon. This is defined in the Port range for servers field in the daemon Control tab.
Maximum number of server processes: Enter the maximum number of server processes that can run at the same time.

Limit server reuse: Select this if you want to limit the number of times a server can be reused. If this is selected, the Reuse limit parameter is available: in the field next to the check box, enter the maximum number of times a server can be reused, that is, the maximum number of clients it accepts over its lifetime. A one-client server can be reused after its (single) client has disconnected. Reuse of servers enhances startup performance because it avoids the need to repeat initialization. This parameter is not available if the Limit server reuse parameter is not selected, or if the server mode value is singleClient.

Limit concurrent clients per server: Select this to limit the number of clients that a server can accept for the current workspace process. If this is not selected, the number of clients is unlimited.
If Limit concurrent clients per server is selected, in the field next to the check box, enter the maximum number of clients that a server process for the current workspace accepts. The default for this field is None, indicating that the number of clients for each server is unlimited. This field is available if the server mode value is multiClient or multiThreaded.

Specify Server Priority: Set the priority for servers. For example, a workspace for applications with online transaction processing can be assigned a higher priority than a workspace that requires only query processing. The lower the number, the higher the priority; workspaces with a priority of 1 are given a higher priority than workspaces with a priority of 2. Note: This is unavailable if Use default server priority is selected.

Use default server priority: Sets the priority to 0, meaning there is no specific priority for this workspace. Clear this check box to set a priority in the Specify Server Priority parameter.

Keep when daemon ends: Select this if you want the servers for the workspace to remain active even after the daemon has been shut down; if it is cleared, all servers started by the daemon are killed when the daemon is shut down, even if they are active. If selected, it is the responsibility of the system operator or manager to ensure that the servers are eventually killed. This must be done at the system level.
Server Provisioning

Number of prestarted servers in pool (Initial number of servers): The number of server processes that are prestarted for this workspace when the daemon starts up. When the number of available server processes drops lower than the value specified in the Minimum number field, the daemon again starts server processes until this number of available server processes is reached. The default for this field is 0.

Number of spare servers: The minimum number of server processes in the prestarted pool before the daemon resumes creating new server processes (up to the value specified in the Initial number of servers field). If this field is set to a value higher than the Initial number of servers field, the daemon uses the value specified in the Initial number of servers field. The default for this field is 0.

The maximum number of available server processes. Once this number is reached, no new nonactive server processes are created for the particular workspace. For example, if a number of server processes are released at the same time, so that there are more available server processes than specified by this field, the additional server processes above this value are terminated. The default for this field is zero, meaning that there is no maximum.
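These provisioning fields map to the nAvailableServers and minNAvailableServers attributes shown in the sample daemon configuration earlier in this chapter (the workspace name and values here are illustrative):

```xml
<!-- Prestart 10 server processes for the workspace; start new ones
     whenever fewer than 4 remain available -->
<workspace name="Navigator"
           nAvailableServers="10"
           minNAvailableServers="4" />
```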
Security
Configure the security level for a workspace in the Workspace editor Security tab. This lets you set the security options for the workspace only. To set security at the daemon level, see Security. The Security tab is used:
To grant administration rights for the workspace
To determine access to the workspace by a client
Table 46 (Cont.) Security Tab

Authorized Workspace users: Indicates which users have permission to use the workspace. Select one of the following:

All users: Any user who has logged on to the daemon may use the workspace.

Selected users only: Select this to allow only users (or accounts) with specific permission to use the workspace. When this is selected, add the names of users (or accounts) and groups that can use the workspace in the field below. See Administering Selected User Only Lists for information on adding users and groups to the field. Note: If no user is specified, any user who has logged on to the daemon may use the workspace.
Authorized Administrators: Identifies the users (accounts) with administrator privileges. Select one of the following:

All users: Indicates that anyone can access the workspace and change the settings.

Selected users only: Select this to allow only users (or accounts) with specific permission to be administrators. When this is selected, add the names of users (or accounts) and groups that can be workspace administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is specified, any user who has logged on to the daemon may administer this workspace.
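At the XML level, workspace administrators are recorded in the workspace's administrator attribute; in the sample configuration earlier in this chapter, administrator="*" grants the privilege to all users:

```xml
<!-- "*" allows all users to administer this workspace; a list of
     accounts (and @-prefixed groups) would restrict it instead -->
<workspace name="ACMEReportingServer" administrator="*" />
```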
Note: You can also use the Allow Listing parameter. To set this parameter, you must use the XML view: right-click the daemon with the workspace you are working with and select Open as XML, then find this parameter in the XML editor to change it. For more information, see Editing XML Files in Attunity Studio.
You can have more than one workspace, each with different binding configurations. If you want to use a binding other than the default binding, on UNIX and OpenVMS platforms you can select the binding as part of the startup script for the workspace.
Disabling a Workspace
Workspaces can be disabled. When you disable a workspace, server processes are not started, and a client requesting the disabled workspace receives an error.

To disable a workspace in Attunity Studio
1. In the Design perspective Configuration view, right-click the workspace you want to disable.
2. Select Disable.
3. Enter the User name and Password for the user with authorization rights for this workspace. For more information about setting rights and security privileges, see Security.
5 Managing Metadata
This chapter includes the following sections:
Data Source Metadata Overview
Importing Metadata
Managing Metadata
Using Attunity Metadata with AIS Supported Data Sources
Procedure Metadata Overview
Importing Procedure Metadata Using the Import Wizard
Procedure Metadata Statements
ADD Supported Data Types
ADD Syntax
CISAM/DISAM Data Source
DBMS Data Source (OpenVMS Only)
Enscribe Data Source (HP NonStop Only)
Flat File Data Source
IMS/DB Data Sources (z/OS Only)
RMS Data Source (OpenVMS Only)
Text Delimited File Data Source
VSAM Data Source (z/OS) (VSAM Under CICS and VSAM Drivers (z/OS Only))
OLEDB-FS (Flat File System) Data Source
An import for the Adabas C data source is available if the Adabas Predict metadata is not available (or is not efficient). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective Metadata tab.
Importing Metadata
You can use the Attunity Studio Import wizards or standalone import utilities to generate metadata. If an import wizard or standalone utility is not available, you can create the metadata manually.
Enscribe Data Source (HP NonStop Only)
IMS/DB Data Sources
RMS Data Source (OpenVMS Only)
VSAM Batch CDC (z/OS Platforms) (VSAM Under CICS and VSAM Drivers (z/OS Only))
For other data source drivers, if a COBOL copybook is available, metadata can be generated from the COBOL in Attunity Studio. Otherwise, the metadata has to be manually defined in the Attunity Studio Design perspective Metadata tab. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. You can save an import procedure and use it again.
Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata.
Each data source has different import requirements. Some imports may have a different number of steps than others and require different information. Each wizard guides you through the import. The input files for each import (such as COBOL copybooks) are needed during the import to define the input and output structures used by the application adapter. These files are sent to the machine running Attunity Studio using the FTP protocol as part of the import procedure. For additional information on importing metadata, refer to the specific data source.
Adabas DDM import (DDM_ADL): This utility produces Attunity metadata from non-Predict metadata (nsd files).
BASIC mapfiles import (BAS_ADL): This utility produces Attunity metadata from BASIC mapfiles.
DBMS Import (DBMS_ADL): This utility produces Attunity metadata from a DBMS database.
HP NonStop Enscribe Import (ADDIMP): This utility produces Attunity metadata for HP NonStop Enscribe data from a DDL subvolume and/or COBOL copybooks. You can generate the metadata from COBOL copybooks in Attunity Studio.
HP NonStop Enscribe Import (TALIMP): This utility produces Attunity metadata for HP NonStop Enscribe data sources from TAL datafiles and a DDL subvolume.
RMS CDD Import (CDD_ADL): This utility extracts the information stored in an RMS CDD directory into ADD metadata. You can generate the metadata from COBOL copybooks in Attunity Studio.
Managing Metadata
You manage metadata in Attunity Studio. Click the Metadata tab in the Design perspective to view and modify Attunity metadata for data sources.

To view and modify metadata for data sources
1. In the Design perspective Configuration view, right-click the data source for which you want to manage the metadata.
2. Select Edit metadata from the shortcut menu. The Metadata tab opens with the selected data source displayed in the tree.
Note: You can also open the Metadata tab and right-click the Data sources folder in the Metadata view to add the data source for which you want to import metadata to the tree.
3. Right-click the resource (such as the data source table) in the Metadata view and select Edit.
Data source tables are edited using the following tabs, which are at the bottom of the screen:
General Tab: Defines general information about the table, such as the table name, the way the table is organized, and the location of the table.
Columns Tab: Specifies the table columns and their properties, for example, the column data type, size, and scale.
Indexes Tab: Enables you to specify the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
Statistics Tab: Enables you to specify statistics for the table, including the number of rows and blocks of the table.
Note:
Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used, even when referring to non-relational data sources. For example, the metadata for an RMS record is referred to as the metadata for an RMS table.
Change the relevant values for the table to be extended in the Statistics tab. The table symbol in the tree is marked with an asterisk to show that the metadata has been extended.
Note:
The information in the other tabs is for reference only and cannot be edited. The noExtendedMetadata property in the data source definition in the binding configuration is set to false.
You can sometimes improve performance by using Attunity metadata instead of the native metadata. In this case, you can export a snapshot of the native metadata to Attunity Connect and use this local copy of the native metadata when accessing the data source. This is beneficial, for example, when the native metadata is not up to date or when information, such as statistics, is not available. You can see a snapshot of the metadata in Attunity Studio.

To make a copy of data source metadata:

1. Display the metadata for the data source in the Design perspective Metadata tab of Attunity Studio.
2. Right-click the data source and select Manage Cached Metadata from the popup menu.
3. Select the tables for which you want to use a local copy and move them to the right pane.
4. Click Finish. The tables are displayed under the data source.
Note:

The table symbol changes from the relational data source symbol to a data source symbol that requires Attunity metadata. The localCopy property in the data source definition in the binding configuration is set to true.
Only the tables that have cached metadata are displayed in the tree. To revert to using non-cached metadata, you can either right-click individual tables and choose Delete Cached Table from the popup menu or, for all the tables, right-click the data source and choose Set Metadata followed by Native Metadata from the popup menu.
Note:
If the native metadata changes, using a snapshot of the native metadata is not recommended.
For other procedures, the metadata is created manually, as described in Manually Creating Procedure Metadata. If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first six columns are ignored), first import the metadata from the copybooks that share the same settings, and then import the metadata from the remaining copybooks. You can also save an import procedure for reuse.
Note:
Imported metadata is independent of its origin. Changes made to the source metadata after the import are not made to the Attunity metadata.
To import the metadata, you need the following:

- The input files for the import (such as COBOL copybooks) that define the input and output structures used by the application adapter. These files are copied to the machine running Attunity Studio as part of the import procedure.
- The names of the applications (such as the IMS/TM transaction or CICS program) to be executed via the procedure.
- The <procedure> Statement
- The <parameters> Statement
- The <dbCommand> Statement
- The <fields> Statement
The <procedure> statement consists of the following components:

- An attribute list.
- Input parameters for the procedure.
- A <fields> statement, which includes the fields list.
Syntax
<procedure name="proc_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" .../>
    ...
  </fields>
  <parameters>
    <field name="param" attribute="value" ... />
    ...
  </parameters>
</procedure>
The proc_name entry must conform to standard ANSI 92 SQL naming conventions. You must include a <fields> statement. Use a <parameters> statement to specify input parameters.
Example 5-1 <procedure> statement

<procedure name="math_simple" filename="prc_samples">
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <field name="sum1" datatype="int4">
      <dbCommand>order=1</dbCommand>
    </field>
    <field name="subtract" datatype="int4">
      <dbCommand>order=2</dbCommand>
    </field>
    <field name="multiply" datatype="int4">
      <dbCommand>order=3</dbCommand>
    </field>
    <field name="divide" datatype="int4">
      <dbCommand>order=4</dbCommand>
    </field>
  </fields>
  <parameters>
    <field name="oper1" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
    <field name="oper2" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
  </parameters>
</procedure>
<procedure> Attributes
The attributes are listed in the following table:

Table 5-1 <procedure> Attributes

- alias (alias="name"): Replaces the procedure name with a logical procedure name. Names greater than 39 characters are truncated from the left.
- description (description="optional_user_supplied_description"): Specifies an optional textual description.
- filename (filename="full_filename"): Specifies the full name and location of the file, where full_filename includes the full path to the file.
- name (name="name"): Specifies the name of the procedure.
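As a sketch of how these attributes combine, a procedure definition might look as follows (the alias and description values here are hypothetical; only the name and filename values are taken from the samples in this chapter):

```xml
<!-- Hypothetical alias and description values, for illustration only -->
<procedure name="math_simple" alias="MATH" description="Simple math sample procedure"
           filename="prc_samples">
  ...
</procedure>
```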
Syntax
<parameters>
  <field name="param" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <field name="param" attribute="value" .../>
  ...
</parameters>
Example 5-2 <parameters> statement

<parameters>
  <field name="oper1" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
  <field name="oper2" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
</parameters>
Syntax
<dbCommand>text</dbCommand>
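For example, the procedure samples in this chapter use a <dbCommand> statement to pass driver-specific text, such as the implementation language:

```xml
<dbCommand>LANGUAGE=C</dbCommand>
```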
Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>
For details of the specific syntax requirements for a procedure, see the specific procedure.
Syntax
<field name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</field>
The following code defines one field (n_name) and its two attributes (data type and size):
<field name="n_name" datatype="string" size="25" />
<field> Attributes
The attributes are listed in the following table:

Table 5-2 <field> Attributes

- datatype (datatype="datatype"): Specifies the data type of the field. For the supported data types, see ADD Supported Data Types.
- name (name="name"): Specifies the name of the field. This attribute must be specified. For example: <field name="EMP_ID" datatype="int4" />
- scale (scale="n"): Specifies the scale of the field. For example: <field name="SALARY" datatype="numstr_s" size="10" scale="2" />
- size (size="n"): Specifies the size of the field, where n is the number of characters or digits. The number must be greater than 0.
Syntax
<group name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</group>
A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.
Example 5-5 <group> statement

<procedure name='math_all_structs' filename='prc_samples'>
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <group name='MATH_STRUCT'>
      <dbCommand>ORDER=1</dbCommand>
      <fields>
        <field name='SUM1' datatype='int4'/>
        <field name='SUBTRACT' datatype='int4'/>
        <field name='MULTIPLY' datatype='int4'/>
        <field name='DIVIDE' datatype='int4'/>
      </fields>
    </group>
  </fields>
  <parameters>
    <group name='MATH_IN_STRUCT'>
      <dbCommand>ORDER=2</dbCommand>
      <fields>
        <field name='OPER1' datatype='int4'/>
        <field name='OPER2' datatype='int4'/>
      </fields>
    </group>
  </parameters>
</procedure>
<group> Attributes
The attribute is listed in the following table:

Table 5-3 <group> Attributes

- name (name="name"): Specifies the name of the group. This attribute must be specified. For example:

  <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
    <fields>...</fields>
  </group>
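Putting the group attributes together, a group handled as an array can be sketched as follows. The field names and sizes here are hypothetical; the dimension1 and counterName attributes are the ones shown in the example above:

```xml
<!-- Hypothetical example: an array of up to 4 elements, with the actual
     element count held in the CHILD_COUNTER field -->
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
  <fields>
    <field name="CHILD_NAME" datatype="string" size="20"/>
    <field name="CHILD_AGE" datatype="int2"/>
  </fields>
</group>
```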
This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variants:

- Variants that represent different nuances of the same data.
- Variants that represent different usage of the same physical area in the buffer.
Managing Metadata
5-11
In this example, one case includes a PARTNUM field of 10 characters, while the other case, PARTCD, maps the same part number to a 2-character DEPTCODE, a 3-character SUPPLYCODE, and a 5-character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation screen enables you to replace any variant with the fields of a single case. The metadata generated following a metadata import appears as follows:
<variant name="VAR_0">
  <case name="UNNAMED_CASE_1">
    <fields>
      <field name="PARTNUM" datatype="string" size="10"/>
    </fields>
  </case>
  <case name="PARTCD">
    <fields>
      <field name="DEPTCODE" datatype="string" size="2"/>
      <field name="SUPPLYCODE" datatype="string" size="3"/>
      <field name="PARTCODE" datatype="string" size="5"/>
    </fields>
  </case>
</variant>
In this example, each record is either an order header record or an order item record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors, unless a selector is specified in the Import Manipulation screen. The COBOL syntax doesn't distinguish between different types of variants or REDEFINEs; in COBOL, only the program logic includes this distinction. Refer to the Attunity Studio Guide and Reference for additional information.
ADD Syntax
The following is the ADD syntax to use for setting variants:
<variant name="variant_name">
  <case name="case_name" value="val" ...>
    <fields>
      <field name="field_name" ... />
    </fields>
  </case>
  <case ...>
    ...
  </case>
</variant>
The metadata generated by Attunity Studio following a metadata import appears as follows:
<field name="RECTYPE" datatype="string" size="1"/>
<variant name="VAR_1" selector="RECTYPE">
  <case name="ORDER_HEADER" value="H">
    <fields>
      <field name="ORDER_DATE" datatype="numstr_u" size="8"/>
      <field name="CUST_ID" datatype="numstr_u" size="9"/>
    </fields>
  </case>
  <case name="ORDER_DETAILS" value="D">
    <fields>
      <field name="PART_NO" datatype="numstr_u" size="9"/>
      <field name="QUANTITY" datatype="uint4" size="4"/>
    </fields>
  </case>
</variant>
Usage Notes
From an SQL consumer, none of the <variant> or <case> fields are visible; only the simple fields are accessible. For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case shows values; the rest of the cases contain NULLs. When updating or inserting either type of variant, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases results in unpredictable behavior.
Note:
2. Right-click the required variant and select Structures, and then select Mark selector. The Select Selector screen opens.
3. Select the selector for the variant from the list of available selectors in the COBOL copybook, and then click OK.
4. Repeat for all the required variants, and then click OK.
Syntax
<case name="field_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</case>
Example 5-8 <case> statement

<variant name='VAR_DIVIDE_DATATYPE' selector='DIVIDE_DATATYPE'>
  <dbCommand>ORDER=5</dbCommand>
  <case name='CASE_1_1' value='L'>
    <fields>
      <field name='DIVIDE_LONG' datatype='int4'/>
    </fields>
  </case>
  <case name='CASE_1_2' value='F'>
    <fields>
      <field name='DIVIDE_FLOAT' datatype='single'/>
    </fields>
  </case>
  <case name='CASE_1_3' value='D'>
    <fields>
      <field name='DIVIDE_DOUBLE' datatype='double'/>
    </fields>
  </case>
</variant>
<case> Attributes
The attributes are listed in the following table:

Table 5-4 <case> Attributes

- name (name="name"): Specifies the name of the case. Note: When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here.
- value (value="value"): Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute. Note: When a selector attribute is specified in the <variant> statement, a value attribute must be specified here.
Platform and data source-dependent data types run only on their respective platforms.
Table 5-5 ADD Supported Data Types

The table lists, for each ADD data type, the corresponding OLE DB, ODBC, and JDBC types, together with details. The first entries (whose ADD type names are not shown) cover: the ADABAS date format; the z/OS ADABAS packed decimal; the z/OS ADABAS numeric string; the ADABAS timestamp format; and a date packed into a 4-character string with format DMYY (for example, 23-July-1998 is represented by four bytes: 19, 98, 7, and 23).
- apt_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP)
Table 5-5 (Cont.) ADD Supported Data Types

- based_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Customize this data type by defining an environment variable (UNIX) or logical (OpenVMS) of the form NVDT_BASEDDATE=yyyymmdd[/dttype[/dtlen[/multiplier]]], where yyyymmdd is the start date, dttype is the name of the data type (int4 is the default), dtlen is the number of digits in the data type (if not atomic), and multiplier is the number of increments per day. Example: 19700101/int4//24 specifies the number of hours since Jan 1 1970.
- binary (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Unknown data type, string type; length must be specified.
- bit (OLE DB: DBTYPE_I1; ODBC: SQL_TINYINT): A single bit within a byte. Size: 1 byte. Format: datatype="bit" onBit="n", where n specifies which bit (within a byte) the field uses. If more than one bit is defined, the additional bits may be defined sequentially within the same byte (or bytes, if the number of bits requires this much space).
Table 5-5 (Cont.) ADD Supported Data Types

- bits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): A signed number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="bits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 32.
- cstring (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): A null-terminated string of alphanumeric characters; maximum length must be specified. An extra byte is required for the null flag.
- (Entries whose ADD type names are not shown cover: a CorVision date-time format; the ODBC date format; a date in a string having the form YYMMDD; a date in a string having the form YYYYMMDD; the DB2 UDB date format (OS/400 machine); the DB2 UDB date-time format (OS/400 machine); the DB2 UDB time format (OS/400 machine); and packed decimal, with a maximum of 31 digits, a maximum of 11 fractions, and length int(number of digits/2) + 1 bytes.)
- dfloat (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (D_FLOAT). Size: 8 bytes. Range: 0.29E-38 to 1.7E38. Precision: 16 digits.
Table 5-5 (Cont.) ADD Supported Data Types

- double (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (G_FLOAT). Size: 8 bytes. Range: 0.56E-308 to 0.90E308. Precision: 15 digits.
- filler (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Allocation for future use, string type; length must be specified.
- fixed_cstring (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): A fixed null-terminated string of numeric characters; length must be specified. An extra byte is required for the null flag.
- (Entries whose ADD type names are not shown cover: the IEEE double floating point number; the IEEE single floating point number; binary image (BLOB); and a date in a four-byte integer, format YYMMDD or YYYYMMDD; for example, 23-Jul-1998 has the form 980723 or 19980723.)
- int1 (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): Signed byte integer. Size: 1 byte. Range: -128 to +127.
- int2 (OLE DB: DBTYPE_I2; ODBC: SQL_SMALLINT): Signed word integer. Size: 2 bytes. Range: -32768 to +32767.
- int3 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed integer. Size: 3 bytes.
- int4 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Size: 4 bytes. Range: -2147483648 to +2147483647.
- int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_INTEGER): Signed integer. Size: 6 bytes.
- int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed quadword. Size: 8 bytes. Range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
- isam_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): CISAM and DISAM packed decimal.
Table 5-5 (Cont.) ADD Supported Data Types

- jdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Julian date. Size: 2 bytes.
- logical (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Values: 1 or true (not case sensitive) are inserted as 1. Any other value is false, and inserted as 0.
- (Entries whose ADD type names are not shown cover: the Magic PC date format; the Magic PC time format; the z/OS date format; the z/OS date-time format; the z/OS time format; a string based on language and driven by table; a null-terminated string of numeric characters whose maximum length must be specified, with an extra byte required for the null flag; and a signed numeric string whose sign is the first character of the string, with a maximum of 31 digits and a maximum of 11 fractions, where the number of fractions includes the decimal point.)
- numstr_bdn (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- numstr_lse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
- numstr_nl (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the first character of the string. Maximum number of digits: 31. Maximum fractions: 11.
Table 5-5 (Cont.) ADD Supported Data Types

- numstr_nlo (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
- numstr_nr (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the last character of the string. Maximum number of digits: 31. Maximum fractions: 11.
- numstr_s (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11. The number must be right justified (for example, " 1234N" is -12345). The number can be left padded by either spaces or zeros. If a scale is provided, it is a fixed positional scale; no decimal point is provided in the data (for example, a value of "1234E" with scale 2 is interpreted as "123.45").
- numstr_tse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
- numstr_u (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned numeric string. Maximum number of digits: 31. Maximum fractions: 11.
- numstr_zoned (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Maximum number of digits: 31. Maximum fractions: 11.
- ole_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): OLE DB date format.
- ole_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB packed decimal.
- ole_numeric (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB numeric string.
- ora_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
Table 5-5 (Cont.) ADD Supported Data Types

- oracle time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
- padded_str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
- padded_str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
- padded_str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
- phdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Size: 2 bytes. Bits 0-6: (non-century) year; bits 7-10: number of month; bits 11-15: day of month.
- scaled_int1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_int2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_int3 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_int4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC)
- scaled_uint1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_DOUBLE)
Table 5-5 (Cont.) ADD Supported Data Types

- scaled_uint2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(5)): Unsigned word integer. Size: 2 bytes. Range: 0 to 65534. Maximum: 5.
- scaled_uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(10)): Unsigned long integer. Size: 4 bytes. Range: 0 to 4,294,967,294. Maximum: 10.
- single (OLE DB: DBTYPE_R4; ODBC: SQL_REAL): Single floating point number (F_FLOAT). Size: 4 bytes. Range: 0.29E-38 to 1.7E38. Precision: 6 digits.
- str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date string. Size: 10 characters. Format: YYYY-MM-DD.
- str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date-time string. Size: 23 characters. Format: YYYY-MM-DD HH:MM:[Link]
- str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic time string. Size: 8 characters. Format: HH:MM:SS.
- string (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): String of alphanumeric characters; length must be specified.
- tandem_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string. Format: YYYY-MM-DD.
- tandem_datetime (OLE DB: DBTYPE_DBTIME; ODBC: SQL_TIME): Date and time in a string. Format: YYYY-MM-DD:HH:MM:[Link] FFFF
- tandem_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): Date and time in a string. Format: HH:MM:SS.
- (Entries whose ADD type names are not shown cover: text data (BLOB); the ODBC time format; and the ODBC date-time format.)
Table 5-5 (Cont.) ADD Supported Data Types

- ubits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): An unsigned number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="ubits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 31.
- uint1 (OLE DB: DBTYPE_UI1; ODBC: SQL_TINYINT): Unsigned byte integer. Size: 1 byte. Range: 0 to +254.
- uint2 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Unsigned word integer. Size: 2 bytes. Range: 0 to +65534.
- uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(11)): Unsigned long integer. Size: 4 bytes. Range: 0 to +4,294,967,294.
- uint6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned integer. Size: 6 bytes.
- unicode (OLE DB: DBTYPE_WSTR; ODBC: SQL_VARCHAR): A null-terminated alphanumeric Unicode string; maximum length must be specified.
- (Entries whose ADD type names are not shown cover: a 16-bit count followed by a string; a 32-bit count followed by a string; and the OpenVMS date-time format.)
ADD Syntax
This section describes the Attunity Connect Data Dictionary. The following statements are described:
- The <fields> Statement
- The <field> Statement
- The <group> Statement
- The <variant> Statement
- The <case> Statement
- The <keys> Statement
- The <key> Statement
- The <segments> Statement
- The <segment> Statement
- The <foreignKeys> Statement
- The <foreignKey> Statement
- The <primaryKey> Statement
- The <pKeySegments> Statement
Syntax
<table name="table_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... >
      <dbCommand>...</dbCommand>
    </field>
    ...
  </fields>
  <keys>
    <key name="param" attribute="value" ...>
      <segments>
        <segment name="param" attribute="value" ... />
        ...
      </segments>
    </key>
    ...
  </keys>
</table>
where table_name is the record/table name. It can contain a maximum of 40 characters. The <table> statement consists of the following components:

- Table Attributes
- The <fields> Statement, which includes the fields list
- Optionally, The <keys> Statement, which contains a list of keys
Notes:
- The table_name entry must conform to standard ANSI 92 SQL naming conventions.
- When you define the structure of a table for a non-relational data source, you must include a <fields> statement.
- When both <keys> and <fields> statements are present, the <keys> statement must come after the <fields> statement.
Example 5-9 <table> statement syntax

<table name="nation" organization="index" filename="d:\demo\nation" datasource="DEMO">
  <fields>
    <field name="n_nationkey" datatype="int4" />
    <field name="n_name" datatype="string" size="25" />
    <field name="n_regionkey" datatype="int4" />
    <field name="n_comment" datatype="cstring" size="152" />
  </fields>
</table>
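The <keys> statement from the <table> syntax can be combined with a definition like this one. The following sketch (the key name is hypothetical) defines a key over the n_nationkey field:

```xml
<table name="nation" organization="index" filename="d:\demo\nation" datasource="DEMO">
  <fields>
    <field name="n_nationkey" datatype="int4" />
    <field name="n_name" datatype="string" size="25" />
  </fields>
  <keys>
    <!-- Hypothetical key name; the segment references a field defined above -->
    <key name="nation_key">
      <segments>
        <segment name="n_nationkey" />
      </segments>
    </key>
  </keys>
</table>
```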
Table Attributes
A table can have the following attributes:
alias: Replaces the table name with a logical name. Names greater than 39 characters are truncated from the left. Syntax:
alias="name"
basedOn: Specifies the table (or virtual table) on which the current table is based. This attribute is generated automatically when an array in a record is generated as a virtual table. Syntax:

basedOn="table_name::array_name"

where:

- table_name: The name of the table that contains the array. If the array is nested in another array, this value is the name of the parent array.
- array_name: The name of the array in the table.
Example:
<table name="EMP_CHLDRN" organization="index" basedOn="EMPLOYEE::CHILDREN" datasource="DEMO" />
datasource: Specifies the data source name as specified in the binding configuration. The repository for this data source is used to store the ADD information. This attribute must be specified. Syntax:
Managing Metadata
5-25
datasource="datasource_name"
Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />
delimited: Specifies the character that delimits fields. To get the delimiter character into the data, the entire field must be quoted (see quoteChar later in this section). If you do not specify this attribute, ADD assumes that a comma (,) functions as the delimiting character. Syntax:
delimited="character"
Example:
<table name="nation" filename="d:\demo\nation" organization="index" delimited="/" datasource="DEMO" />
filename: The filename attribute specifies the full name and location of the file. Syntax:
filename="full_filename"
Data source drivers require the file suffix, except for the CISAM and DISAM drivers, where the suffix must not be specified.
Flat files and text delimited on z/OS Platforms When defining metadata for a flat file or text delimited file, use the following syntax: filename="high_level_qualifier.filename" For example: filename="[Link]"
Example (RMS):
filename="DISK$2:[DB][Link]"
Example (DISAM):
filename="d:\demo\nation"
filter: Adds a WHERE clause to every query accessed using this table or procedure. This attribute is useful when more than one logical table is stored in the same physical file. If a query relates to data with a filter attribute defined, the Attunity Connect Query Processor handles the query, including the WHERE clause, and returns an error if the query is invalid because of the added WHERE clause. To use the filter attribute, you must set the useTableFilterExpressions environment property to true. You specify this parameter in the queryProcessor node of the environment properties for the relevant binding configuration, in the Attunity Studio Design perspective Configuration view. Syntax:
filter="sql_expression"
where sql_expression is a valid SQL expression combining one or more constants, literals and column names connected by operators. Column names must be prefixed with $$$ and the column must exist in the current table. Example:
<table name="nation" filename="d:\demo\nation" organization="index" filter="$$$.RECORD_TYPE = 80" datasource="DEMO" />
nBlocks: Specifies the approximate number of blocks in the table. It is used by Attunity Connect to optimize query execution.

Note: The nRows attribute must be specified if the nBlocks attribute is specified. If neither nRows nor nBlocks is specified for a table, queries over the table might be executed in a non-optimal manner.

Syntax:

nBlocks="numeral"
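Since nRows must accompany nBlocks, a table definition that supplies both statistics can be sketched as follows (the numbers here are hypothetical values for illustration):

```xml
<!-- Hypothetical statistics: approximately 25 rows in 5 blocks -->
<table name="nation" filename="d:\demo\nation" organization="index"
       nRows="25" nBlocks="5" datasource="DEMO" />
```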
organization: Specifies the organization of a file system data provider. The organization can be one of the following: index, sequential, relative, or unstructured. The default is sequential. Syntax:
organization="index" | "sequential" | "relative" | "unstructured"
Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />
Note:
Attunity's text-delimited and flat file drivers both support a sequential organization.
Use unstructured for unstructured Enscribe files that are not indexed. Note that you must include a filler field, of size one, when an even unstructured Enscribe file has an odd record size. Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The hash symbol (#) is used to specify a pseudo column. For example:

SELECT * FROM colleges WHERE # = 6
INSERT INTO colleges (coll_id, coll_name, coll_status, #) VALUES (111, 'New college', 2, 5)
DELETE FROM colleges WHERE # = 15
quoteChar: Specifies the character that quotes a string field. To quote a field, the entire field must be quoted (leading and trailing white space before and after the start and end quote characters, respectively, are allowed). In particular, you cannot start or end quoting data in the middle of a field, since the quote characters will be interpreted as part of the data. To include a quote character in a quoted field, you must escape it by preceding it with a backslash. Syntax:
quoteChar="character"
Example:
<table name="nation" filename="d:\demo\nation" organization="index" quoteChar="" datasource="DEMO" />
recordFormat: Used only with RMS data, to identify the underlying RMS record format. This attribute is for information purposes only. Within Attunity Connect, all records are treated as fixed length. Syntax:
recordFormat="undefined" | "fixed" | "variable"
Example:
<table name="nation" datasource="RMS" filename="DKA100:[[Link]][Link]" recordFormat="fixed" organization="index" />
size: Specifies the maximum size, in bytes, of a record. This attribute is useful when you only want to use part of the record. This attribute is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, do not specify a size attribute. This attribute is not supported by the flat files driver. Syntax:
size="n"
Example:
<table name="nation" filename="d:\demo\nation" organization="index" size="500" datasource="DEMO" />
tableId: The record number within a DBMS user work area. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only)) generates ADD metadata from DBMS and automatically creates a tableId attribute.
Note:
Syntax:
tableId="record_number"
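For example, a DBMS record that is the third record in the user work area would carry a definition along these lines (the table name and record number here are hypothetical):

```xml
<!-- Hypothetical example: the COLLEGE record is record number 3 in the work area -->
<table name="COLLEGE" datasource="DBMS" tableId="3" />
```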
Syntax
<dbCommand>text</dbCommand>
Examples
<dbCommand>CMD=^PAK(K1,1)</dbCommand>
The BASIC_ADL utility for generating ADD metadata from DBMS metadata creates a dbCommand statement. For example, for a field, the dbCommand has the following form:
<dbCommand>
  field-type/set-name/record-name-of-paired-table/realm-of-paired-table/insertion-mode/retention-mode
</dbCommand>
Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>
Syntax
<field name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</field>
Example
The following code defines one field (n_name) and its two attributes (datatype and size):
<field name="n_name" datatype="string" size="25" />
Field Attributes
A field can have the following attributes:
autoIncrement: When set to true, specifies that this field is updated automatically by the data source during an INSERT statement and should not be explicitly specified in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered into the data source. Syntax:
autoIncrement="true|false"
Example:
<field name="ORDER_NUM" datatype="string" size="6" autoIncrement="true" />
chapterOf: Used for DBMS metadata; specifies that the set member field is a chapter of an owner table. This attribute must be included when accessing a set member as a chapter in an ADO application. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only)) generates ADD metadata from DBMS and automatically creates this attribute. Syntax:
chapterOf="owner_table"
Example:
<field name="_M_CLASS_PART" datatype="string" size="29" chapterOf="CLASS" nullable="true"/>
compressedArray: When set to true, marks an array as compressed; any array that has a counter can be marked with the compressedArray attribute. Compressed arrays can be groups or single fields, and they store only the members that have a value. Thus, a compressed array with a maximum of 100 elements that has only 5 elements in a particular record includes only the 5 elements in the physical file. Records are compressed and decompressed for READ and WRITE operations.
Note:
Syntax:
compressedArray="true|false"
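An illustrative sketch (the field and counter names are hypothetical, not taken from this manual): a counted group array stored in compressed form might be defined as:

<field name="PHONE_COUNT" datatype="int4" />
<group name="PHONES" dimension1="100" counterName="PHONE_COUNT" compressedArray="true">
  <fields>
    <field name="PHONE_NUM" datatype="string" size="12" />
  </fields>
</group>

Only the occurrences counted by PHONE_COUNT are stored in the physical record.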
emptyValue: Specifies the value for the field during an insert operation, when a value is not specified. Syntax:
emptyValue="value"
Example:
<field name="RECORD_TYPE" emptyValue="80" datatype="int4" />
explicitSelect: When set to true, specifies that this field is not returned when you execute a "SELECT * FROM..." statement. To return this field, you must explicitly ask for it (in a query such as "SELECT NATION_ID, SYSKEY FROM NATION" where SYSKEY is a field defined with explicitSelect). Syntax:
Managing Metadata 5-31
explicitSelect="true|false"
Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />
Note:
You can disable this attribute by specifying the disableExplicitSelect attribute for the data source in the binding.
hidden: When set to true, specifies that the field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table. Syntax:
hidden="true|false"
Example:
<field name="CURRENT_SALARY" hidden="true" datatype="decimal" size="9" />
name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"
Example:
<field name="EMP_ID" datatype="int4" />
nonSelectable: When set to true, specifies that the field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table. Syntax:
nonSelectable="true|false"
Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonSelectable="true" />
nonUpdateable: When set to true, specifies that the field cannot be updated (the default is false). Syntax:
nonUpdateable="true|false"
Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonUpdateable="true" />
nRows: Specifies the approximate count of distinct column values in the table. It is used by Attunity Connect to optimize the query execution.
Syntax:
nRows="numeral"
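Example (the field name and value are illustrative assumptions, for a column with about 25 distinct values):

<field name="NATION_ID" datatype="int4" nRows="25" />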
nullable: When set to true, specifies that the field can contain NULL values. Syntax:
nullable="true|false"
Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />
nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key or segment that includes a field defined as null-suppressed (that is, rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query that includes an ORDER BY on this field. If nullSuppressed is not specified, the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is set to true, the key is not used. To retrieve rows whose value for a null-suppressed field is NULL, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field = NULL
Syntax:
nullSuppressed="true|false"
Example:
<field name="AGE" nullSuppressed="true" datatype="int4" />
nullValue: Specifies a value that represents NULL for the field, for use where the data source does not support null values, thereby providing a means of assigning a "null" value. A SELECT statement returns this value as NULL. Syntax:
nullValue="value"
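Example (an illustrative sketch; the field name and sentinel value are assumptions): a data source that cannot store NULL might reserve -1 to represent it:

<field name="BONUS" datatype="int4" nullValue="-1" />

A SELECT statement then returns NULL whenever the stored value is -1.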
offset: Specifies an absolute offset for the field in a record. When used with a field whose data type is BIT, the offset can be stated for the first BIT data type. All following BIT data types refer to the same offset where possible; when the last mapped bit is the 8th bit, the next bit is mapped to the first bit in the next byte. Syntax:
offset="n"
Example:
<field name="EMP_ID" offset="3" datatype="int1" />
<field name="CHECK_DIGIT1" offset="3" datatype="bit" onBit="1" />
<field name="CHECK_DIGIT2" datatype="bit" />
onBit: Specifies the position of the bit in a field with data type BIT or the starting position in a field with data type BITS. Syntax:
onBit="n"
where:
* For the BIT data type: specifies which bit the field uses.
* For the BITS data type: specifies which bit (within a byte) to start from.
If n is not specified, n defaults to 1 for the first occurrence of the field and is contiguous thereafter.
scale: Specifies the scale of the field. Syntax:

scale="n"

where:
* For decimal and numeric data types: n is the number of digits that are fractions. The number of fractions must not be greater than the number of digits. The default value is 0.
* For scaled data types: n is the number of digits. The number must be negative.

Example:

<field name="SALARY" datatype="numstr_s" size="10" scale="2" />

size: Specifies the size of the field. Syntax:

size="n"

where n is the number of characters or digits. The number must be greater than 0. For the BITS data type, n specifies the number of bits used, starting from the value specified in the onBit attribute.
subfieldOf: The value for this attribute is generated automatically when you generate ADD metadata from ADABAS data that includes a superdescriptor based on a subfield. A field is created to base this index on, and set to the offset specified as the value of the subfieldStart attribute.
If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldOf="parent_field"
subfieldStart: The offset within the parent field where a subfield starts. If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldStart="offset_number"
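An illustrative sketch (the field names are hypothetical): a 3-character area code carved out of the start of a 10-character PHONE field could be described as:

<field name="PHONE" datatype="string" size="10" />
<field name="AREA_CODE" datatype="string" size="3" subfieldOf="PHONE" subfieldStart="1" />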
Syntax
<group name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</group>
A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.
Example
An array containing information about an employee's children can be defined as follows:
<group name="CHILDREN" dimension1="4">
  <fields>
    <field name="DATE_OF_BIRTH" datatype="vms_date" />
    <field name="NAME" datatype="string" size="16" />
  </fields>
</group>
The CHILDREN structure has 4 occurrences, numbered from 0 to 3. Each occurrence consists of two fields, DATE_OF_BIRTH and NAME. The size of a single structure occurrence is 20 (4 bytes for DATE_OF_BIRTH and 16 bytes for NAME), and the total size of the CHILDREN array is therefore 80.
Group Attributes
A group can have the following attributes:
alias: Used to replace the default virtual table name automatically generated for an array. Virtual table names are generated by appending the array name to the parent name (either the record name or a parent array name). Thus, when an array includes another array, the name of the nested array's virtual table is formed from the record name, the parent array name, and the nested array name.
When the default generated virtual table name is too long to be usable (over 39 characters), specify an alias to replace the long name. Names longer than 39 characters are truncated from the left. Syntax:
alias="name"
Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER"> ... </group>
counterName: Specifies the name of a field that counts the number of the elements stored in an array. Syntax:
counterName="field_name"
where field_name is the name of a field that counts the number of elements stored in the array. The counterName attribute cannot be used with an Attunity Connect procedure.
Note:
For an ADABAS database, you do not need to define a counter field, since one is created automatically with the name C-arrayname, where arrayname is the multiple value field name or periodic group field name in ADABAS.
Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4" filename="d:\disam\EMPLOYEE" datasource="DISAMDEMO">
  <fields>
    <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" />
    <field name="CHILD_COUNTER" datatype="int4" />
    <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
      <fields>
        <field name="AGE" description="AGE" datatype="int4" />
        ...
      </fields>
    </group>
    ...
  </fields>
</table>
dimension1: Specifies the number of elements in an array. Syntax:
dimension1="n"
where n indicates the number of elements in the array. The dimension1 attribute cannot be used with an Attunity Connect procedure.
Note:
This syntax is for a one-dimensional array. For a two-dimensional array, you specify dimension2="n".
Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4" filename="d:\disam\EMPLOYEE" datasource="DISAMDEMO">
  <fields>
    <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" />
    <field name="CHILD_COUNTER" datatype="int4" />
    <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
      <fields>
        <field name="AGE" description="AGE" datatype="int4" />
        ...
      </fields>
    </group>
    ...
  </fields>
</table>
To create the EMP_CHLDRN child table, import the table metadata to the Attunity Connect repository (by right-clicking the data source in Attunity Studio Design perspective Metadata view and selecting Import XML definitions).
name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"
Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
  <fields>
    ...
  </fields>
</group>
A variant represents alternate views of the same data: different nuances of the same data, or different usage of the same physical area in the buffer. This section describes the common use cases of variants and how they are represented in the variant syntax. The following variant types are available: variants without a selector and variants with a selector.
Variant without a selector. COBOL Example:
20 PARTNUM PIC X(10).
20 PARTCD REDEFINES PARTNUM.
   30 DEPTCODE PIC X(2).
   30 SUPPLYCODE PIC X(3).
   30 PARTCODE PIC X(5).
In this example, one case includes a PARTNUM field of 10 characters, while the other case, PARTCD, maps the same part number to a 2-character DEPTCODE, a 3-character SUPPLYCODE, and a 5-character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation panel enables replacing any variant with the fields of a single case. The metadata generated by Studio following a metadata import appears as follows:
<variant name="VAR_0">
  <case name="UNNAMED_CASE_1">
    <fields>
      <field name="PARTNUM" datatype="string" size="10"/>
    </fields>
  </case>
  <case name="PARTCD">
    <fields>
      <field name="DEPTCODE" datatype="string" size="2"/>
      <field name="SUPPLYCODE" datatype="string" size="3"/>
      <field name="PARTCODE" datatype="string" size="5"/>
    </fields>
  </case>
</variant>
Variant with a selector. COBOL Example:
10 ORDER.
   20 RECTYPE PIC X.
      88 ORD-HEADER VALUE "H".
      88 ORD-DETAILS VALUE "D".
   20 ORDER-HEADER.
      30 ORDER-DATE PIC 9(8).
      30 CUST-ID PIC 9(9).
   20 ORDER-DETAILS REDEFINES ORDER-HEADER.
      30 PART-NO PIC 9(9).
      30 QUANTITY PIC 9(9) COMP.
In this example, each record is either an order header record or an order item record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors, unless a selector is specified in the Import Manipulation panel. The COBOL syntax does not distinguish between the different types of variants or REDEFINEs; in COBOL, only the program logic includes this distinction. See Working with Metadata in Attunity Studio for additional information.
ADD Syntax: The following is the ADD syntax used for setting variants:
<variant name="variant_name">
  <case name="case_name" value="val" ...>
    <fields>
      <field name="field_name" ... />
    </fields>
  </case>
  <case ...>
    ...
  </case>
</variant>
The metadata generated by Attunity Studio following a metadata import appears as follows:
<field name="RECTYPE" datatype="string" size="1"/>
<variant name="VAR_1" selector="RECTYPE">
  <case name="ORDER_HEADER" value="H">
    <fields>
      <field name="ORDER_DATE" datatype="numstr_u" size="8"/>
      <field name="CUST_ID" datatype="numstr_u" size="9"/>
    </fields>
  </case>
  <case name="ORDER_DETAILS" value="D">
    <fields>
      <field name="PART_NO" datatype="numstr_u" size="9"/>
      <field name="QUANTITY" datatype="uint4" size="4"/>
    </fields>
  </case>
</variant>
Usage Notes
To an SQL consumer, none of the <variant> or <case> fields are visible; only the simple fields are accessible. For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case shows values; the rest of the cases contain NULLs. When updating or inserting either type of variant, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases results in unpredictable behavior.
To set a variant with a selector:
1. Right-click the variant and select Structures, and then Mark selector. The Select Selector Field screen opens.
2. Select the selector for the variant from the list of selectors in the COBOL copybook.
3. Click OK.
4. Repeat as needed to set additional variants with selectors.
5. Click OK.
Variant Attributes
A variant can have the following attributes:
name: Specifies the name of the variant. This attribute must be specified. Syntax:
name="name"
Example:
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>
selector: Specifies the name of a field whose value determines which of the alternate variant definitions is used in the current record (row). When a selector attribute is specified, a value attribute must be specified in the <case> statement. Syntax:
selector="field_name"
Example:
<field name="SEX" datatype="string" size="1" />
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>
Syntax
<case name="field_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</case>
Case Attributes
A case can have the following attributes:
name: Specifies a name for the case. When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here. Syntax:
name="name"
Example:
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>
value: Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute. When a selector attribute is specified in the <variant> statement, a value attribute must be specified here. Syntax:
value="value"
Example:
<case name="CASE_1_1" value="M">
  <fields>
    <field name="M_SCHL" datatype="string" size="20" />
  </fields>
</case>
<case name="CASE_1_2" value="F">
  <fields>
    <field name="F_SCHL" datatype="string" size="20" />
  </fields>
</case>
Syntax
<keys>
  <key name="key_name" attribute="value" ...>
    <segments>
      <segment name="segment_name" attribute="value" ... />
      ...
    </segments>
  </key>
  <key name="key_name" ...>
    <segments>
      <segment name="segment_name" ... />
      ...
    </segments>
  </key>
</keys>
Example
<keys>
  <key name="nindex" unique="true">
    <segments>
      <segment name="n_nationkey" />
    </segments>
  </key>
</keys>
Syntax
<key name="key_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <segments>
    <segment name="segment_name" attribute="value" ... />
    ...
  </segments>
</key>
Key Attributes
A key can have the following attributes:
bestUnique: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute on keys containing a field that represents a bookmark of the record (retrieval through a bookmark is assumed to be faster than through other keys). Fields that represent a bookmark include ROWID in Oracle, DBKEY in DBMS, and ISN in ADABAS. Syntax:
bestUnique="true|false"
Example:
<key unique="true" bestUnique="true" hashed="true">
  <segments>
    <segment name="ISN" />
  </segments>
</key>
clustered: When set to true, indicates that this key reflects the physical organization of the table and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
clustered="true|false"
Example:
<key clustered="true">
  <segments>
    <segment name="DBKEY" />
  </segments>
</key>
descending: When set to true, specifies that the order of the current key is descending. If this attribute is not specified for the key, it can be specified per segment of the key. The default is ASCENDING. Syntax:
descending="true|false"
Example:
<key unique="true" descending="true" nRows="30">
  <segments>
    <segment name="EMPLOYEE_ID" />
  </segments>
</key>
hashed: When set to true, indicates that this is a hash key and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
hashed="true|false"
Example:
<key unique="true" bestUnique="true" hashed="true">
  <segments>
    <segment name="ISN" />
  </segments>
</key>
hierarchical: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute for DBMS databases, on keys containing a DBKEY field, which represents a bookmark of the record (retrieval through a bookmark is assumed to be faster than through other keys). Syntax:
hierarchical="true|false"
Example:
<key unique="true" hierarchical="true">
  <segments>
    <segment name="DBKEY" />
  </segments>
</key>
indexId: Identifies the physical key for the record. You can use this attribute only with the key and not with a segment. The use of the field is data source dependent. Syntax:
indexId="previously_defined_key"
For an Enscribe alternate key, the indexId attribute is the ASCII value corresponding to the 2 bytes of the key specifier surrounded by quotes. For details, see Enscribe Data Source (HP NonStop Only).
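An illustrative sketch (the key name and identifier value are assumptions; the actual value form depends on the data source):

<key name="ALT_KEY" indexId="2">
  <segments>
    <segment name="LAST_NAME" />
  </segments>
</key>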
nRows: Specifies the approximate count of distinct key values in the key. It is used by Attunity Connect to optimize query execution. For a unique key, the nRows value must be equal to the nRows value for the record (that is, the number of distinct key values is the same as the number of rows). Syntax:
nRows="numeral"
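Example (illustrative names and values; for a unique key, the key's nRows matches the record's nRows, here assumed to be 1000):

<key name="EMP_INDEX" unique="true" nRows="1000">
  <segments>
    <segment name="EMP_ID" />
  </segments>
</key>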
nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key that includes a field defined as null-suppressed (that is, rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query that includes an ORDER BY on this field. If nullSuppressed is not specified, the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is specified, the key is not used. To retrieve rows whose value for a null-suppressed field is NULL, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field=NULL
unique: When set to true, indicates that each key entry uniquely identifies one row and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
unique="true|false"
Example:
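An illustrative unique-key definition (the key and segment names are assumed, following the pattern of the other key examples in this section):

<key name="EMP_KEY" unique="true">
  <segments>
    <segment name="EMP_ID" />
  </segments>
</key>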
Syntax
<segments> <segment name="segment_name" attribute="value" .../> ... </segments>
Syntax
<segment name="segment_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</segment>
Segment Attributes
Each segment can have the following attributes:
descending: When set to true, specifies that the order of the current segment is descending. The default is ASCENDING. Syntax:
descending="true|false"
Example:
<key unique="true" nRows="30"> <segments> <segment name="EMPLOYEE_ID" descending="true" /> </segments> </key>
name: Specifies the name of the segment. This attribute must be specified. Syntax:
name="name"
Example:
<key unique="true" nRows="30">
  <segments>
    <segment name="EMPLOYEE_ID" />
  </segments>
</key>
nRows: Specifies the approximate count of distinct segment values in the key. It is used by Attunity Connect to optimize query execution. Syntax:
nRows="numeral"
nullsLast: When set to true, causes value comparison operations to treat nulls as the greater of the values being compared. Syntax:
nullsLast="true|false"
nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a segment that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). Syntax:
nullSuppressed="true|false"
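An illustrative sketch combining the segment attributes above (the names and values are assumptions):

<key name="BY_AGE" nRows="30">
  <segments>
    <segment name="AGE" descending="true" nullsLast="true" nRows="25" />
  </segments>
</key>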
The <foreignKeys> statement is not available in the Attunity Studio Design perspective Metadata view.
Syntax
<foreignKeys>
  <foreignKey name="key_name" attribute="value" ...>
    <fkeySegments>
      <fkeySegment attribute="value" ... />
    </fkeySegments>
  </foreignKey>
</foreignKeys>
A foreign key definition specifies:
* The name of the foreign key.
* The external table referenced by the foreign key.
* The primary key of the referencing table.
* The segments of the foreign key.
* The referential integrity rule to be implemented when the primary key field in the referencing table is either updated or deleted.
Note:
The <foreignKey> statement is not available in the Attunity Studio Design perspective Metadata view.
Syntax
<foreignKey name="key_name"
            referencingTable="external_table"
            referencingPKey="external_table_primary_key"
            updateRule="cascade|restrict|setNull"
            deleteRule="cascade|restrict|setNull">
  <fkeySegments>
    <fkeySegment fName="local_table_field" pName="referenced_table_field"/>
    ...
  </fkeySegments>
</foreignKey>
Example
<foreignKeys>
  <foreignKey name="fkey1" referencingTable="table2" referencingPKey="table2_pkey"
              updateRule="cascade" deleteRule="setNull">
    <fkeySegments>
      <fkeySegment fName="col2" pName="table2_col3" />
      <fkeySegment fName="col5" pName="table2_col2" />
    </fkeySegments>
  </foreignKey>
  <foreignKey name="fkey2" referencingTable="table3" referencingPKey="table3_pkey"
              updateRule="restrict" deleteRule="restrict">
    <fkeySegments>
      <fkeySegment fName="col1" pName="table3_col1" />
    </fkeySegments>
  </foreignKey>
</foreignKeys>
foreignKey Attributes
Each foreignKey can have the following attributes:
name: The name of the foreign key.
referencingTable: The name of the external table referenced by the foreign key.
referencingPKey: The primary key of the external referencing table.
updateRule: The referential integrity rule for the foreign key when the primary key of the referenced table is updated. The specific option (cascade, restrict, or setNull) is determined by the application accessing the data.
deleteRule: The referential integrity rule for the foreign key when the primary row of the referenced table is deleted. The specific option (cascade, restrict, or setNull) is determined by the application accessing the data.
fName: The name of the local field referenced in this foreign key segment. pName: The name of the field in the external referenced table for this foreign key segment.
The <primaryKeys> statement is not available in the Attunity Studio Design perspective Metadata view.
Syntax
<primaryKey name="name">
  <pKeySegments>
    <pKeySegment segment="name"/>
    ...
  </pKeySegments>
</primaryKey>
The <pKeySegment> statement is not available in the Attunity Studio Design perspective Metadata view.
Syntax
<pKeySegments> <pKeySegment segment="name"/> ... </pKeySegments>
Example
<primaryKey name="pk">
  <pKeySegments>
    <pKeySegment segment="col1"/>
    <pKeySegment segment="col2"/>
  </pKeySegments>
</primaryKey>
pKeySegment Attributes
The pKeySegment statement can have the segment attribute, which specifies the name of a segment constituting the primary key of a table.
6 Working with Metadata in Attunity Studio
This chapter includes the following sections:
Overview
Managing Data Source Metadata
Importing Data Source Metadata with the Attunity Import Wizard
Working with Application Adapter Metadata
Overview
AIS uses metadata to access and read a data source. AIS can use the native metadata of many data sources, such as Relational Data Sources or Adabas. However, some data sources require Attunity's own metadata (ADD). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective, on the Metadata tab. The following data sources require Attunity metadata:
CISAM/DISAM Data Source
DBMS Data Source (OpenVMS Only)
Enscribe Data Source (HP NonStop Only)
Flat File Data Source
IMS/DB Data Sources
RMS Data Source (OpenVMS Only)
Text Delimited File Data Source
VSAM Data Source (z/OS)
OLEDB-FS (Flat File System) Data Source
To view and edit data source metadata
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the data source.
2. Expand the Bindings folder and then expand the binding with the data source metadata you are working with.
3. Expand the Data Source folder.
4. Right-click the data source for which you want to manage the metadata and select Show Metadata View. The Metadata tab opens with the selected data source displayed in the Configuration view.
5. Right-click the resource (such as the data source table) in the Metadata view and select Edit.
Data source tables are edited using the following tabs, which are at the bottom of the editor screen:
General Tab: Enter general information about the table, such as the table name, the way the table is organized, and the location of the table.
Columns Tab: Edit information about the table columns and their properties, for example, the column data type, size, and scale.
Indexes Tab: Edit information about the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
Statistics Tab: Enter the statistics for the table, including the number of rows and blocks in the table.
Source Tab: View the metadata in its XML representation.
Note:
Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used, even when referring to non-relational data sources (File-system Data Source). For example, the metadata for an RMS record is referred to as the metadata for an RMS table.
General Tab
The General tab lets you maintain information about the table, such as the table name and the way the table is organized.
Organization: Select how the table is organized:
* Index: Select this if the source data has an index column. All searches are according to the index column.
* Sequential: Select this if the source does not have a key and all requests search the table columns sequentially.
* Relative: Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The (#) symbol indicates a pseudo column.
Table 6-1 (Cont.) Metadata General Tab
Maximum record length: The maximum size of the record in bytes. The value for this field is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, leave this field empty.
Delimited: Enter a character to act as a delimiter in the metadata. If nothing is entered, a comma (,) is used as the delimiter.
Quote Character: The character that quotes a string field.
Record Format: Select how the record is formatted according to its size:
* Undefined: There is no set definition for the record size.
* Fixed: The record length is fixed.
* Variable: The record length varies.
Enter special commands for the data source you are working with. You can create a filter by entering a WHERE clause. You should use a filter when more than one logical table is stored in the same physical file. To enter a filter, click Set filter expression and enter an expression in the field below. The following is an example of the WHERE clause syntax: "$$$.expression" Note: To use a filter, you must select Use Table Filter Expressions in the Query Processor section of the Environment Properties in Attunity Studio.
Columns Tab
The Columns tab lets you specify ADD metadata that describes the table columns. This tab is divided into the following:
Size
Scale

Table 6-2 (Cont.) Metadata Columns Tab Definitions
Dimension: The maximum number of occurrences of a group of columns that make up an array. The (+) to the left of a column indicates a group field; this type of field has a Dimension value. Click (+) to display the group members.
Offset: An absolute offset for the field in a record.
Fixed offset: Determines whether to calculate the offset. There are two options:
* Calc offset: If you clear this check box, the absolute offset for each of the columns is calculated.
* Fixed offset: When you select this check box, the column has a fixed offset. The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select the check box in this column to override this calculation and specify a fixed offset at design time, for example, when part of the buffer should be skipped. By selecting the check box, or by editing the offset value, you pin the offset for that column; the indicated value is used at runtime instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.
Primary Key
The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes how you can move around in this section.
Table 6-3 Definition Section Buttons
Insert: Inserts a column into the table. You can insert a new column; if the table has arrays, you can add a new child column.
Up: Moves your selection to the column directly above the currently selected column.
Down: Moves your selection to the column directly below the currently selected column.
Rename: Lets you rename the selected column.
Delete: Deletes the selected column.
Find: Opens a list of all columns in the database. Select a column and click OK to select it in the table.
Column Properties
You can change a property value by clicking in the Value column. To display the column properties, select a column from the Column Definition (top) section. The properties for the column are displayed at the bottom of the tab.
The following table shows some of the properties available for selected columns.
Table 6-4 Metadata Properties

- Alias: A name used to replace the default virtual table name for an array. Virtual table names are created by adding the array name to the record name. When an array includes another array, the name of the nested virtual table is made up of the record name, the parent array name, and the nested array name. When the default generated virtual table name is too long, use an Alias to replace it.
- Autoincrement: The current field is updated automatically by the data source during an INSERT statement and is not explicitly defined in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered in the data source.
- Comment: A short note or description about the column.
- DB Command: Click the button to enter data source-specific commands for the column.
- Default: The value for the field in an insert operation, when a value is not specified.
- Explicit Select: When true, the current field is not returned when you execute a SELECT * FROM... statement. To return this field, you must explicitly ask for it in a query, for example, SELECT NATION_ID, SYSKEY FROM NATION, where SYSKEY is a field defined with explicitSelect. You cannot use an asterisk (*) in a query where you want to retrieve a field defined with the explicitSelect value. You can disable this value by entering the disableExplicitSelect value in the data source binding Environment Properties.
- Hidden: The current field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table.
- Non Selectable: When true, the current field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table.
- Non Updateable: If true, the current field cannot be updated.
- Nullable: This value allows the current field to contain NULL values.
- Null Value: The null value for the field during an insert operation, when a value is not specified.
- Chapter of: This property shows that the set member field is a chapter of an owner field. A value for this property must be used when accessing a set member as a chapter in an ADO application. This property is used for DBMS metadata.
- OnBit: The position of the bit in a BIT field and the starting bit in a BITS field.
- Subfield of: The value is generated automatically when you generate ADD metadata from Adabas data that includes a superdescriptor based on a subfield. A field is created to base this index on, set to the offset specified as the value of the Subfield start field. If no value is entered in the Subfield start field, the subfield is set by default to an offset of 1.
- Subfield start: The offset within the parent field where a subfield starts.
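The Explicit Select behavior described above can be pictured with a small sketch. This is a hypothetical helper, not AIS code; the `expand_select` function and the `explicit_select` key are illustrative names only:

```python
# Hypothetical sketch of explicitSelect semantics (not AIS code):
# columns flagged explicit_select are omitted when "SELECT *" is
# expanded, and are returned only when named explicitly in the query.

def expand_select(requested, columns):
    """requested: list of column names, or ["*"] for SELECT *.
    columns: dict of name -> {"explicit_select": bool}."""
    if requested == ["*"]:
        return [name for name, props in columns.items()
                if not props.get("explicit_select")]
    return [name for name in requested if name in columns]

nation = {
    "NATION_ID": {"explicit_select": False},
    "NAME":      {"explicit_select": False},
    "SYSKEY":    {"explicit_select": True},   # hidden from SELECT *
}

print(expand_select(["*"], nation))                    # SYSKEY omitted
print(expand_select(["NATION_ID", "SYSKEY"], nation))  # SYSKEY returned
```

As in the NATION example in the table, SYSKEY is only returned when it is named explicitly in the column list.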
Indexes Tab
The Indexes tab lets you define the ADD metadata that describes the table indexes.
Note: This tab contains information only if the Organization field in the General tab is set to Index.
Figure 6-3 Data Source Metadata Indexes Tab
This tab has two sections. The top section lets you define the index keys for the columns in the table. The bottom of the tab lists the properties for each of the columns at the top.
Table Information
The following table describes the fields in the top part of the tab, which defines the indexes used for the table.

Table 6-5 Index Definitions Fields

- Name: The name of the column used as an index for the current table.
- Order: The order in which rows are returned by the index.
- DB Command: Data source-specific commands for the index.
The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes these buttons.
Table 6-6 Index Definition Buttons

- Insert: Inserts an index into the table.
- Rename Index: Lets you rename the selected index.
- Delete Index: Deletes the selected index.
Properties
You can set properties for each index column. To display the index properties, select a column from the Index Definitions (top) section. The properties for the column are displayed at the bottom of the tab. The properties displayed at the bottom of the tab describe the index or segment. The properties available depend on the data source.
Statistics Tab
The Statistics tab lets you specify metadata statistics for a table. Statistics can be updated with the Update Statistics utility.
Table
Enter the statistical information for the table in this section.
- Rows: Enter or use the arrows to select the approximate number of rows in the table. If the value is -1, the number of rows in the table is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that this table is empty.
- Blocks: Enter or use the arrows to select the approximate number of blocks in the table.
Note:
If no value is entered for the number of rows or the number of blocks, queries over the table may not be executed efficiently.
Columns
Enter the cardinality for each of the columns in the table in this section.
- Column Name: The columns in the table.
- Cardinality: The number of distinct values for the column. If the value is -1, the number of distinct values for the column is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct values for the column.
Indexes
Enter the cardinality for the columns in each of the table's indexes in this section.
- Indexes and Segments: The indexes and segments in the table.
- Cardinality: The number of distinct key values in the index. If the value is -1, the number of distinct key values in the index is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct key values in the index.
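The cardinality values described above can be illustrated with a sketch. This is not AIS code; `column_cardinality` is a hypothetical helper showing the -1/0 conventions used in the Statistics tab:

```python
# Sketch of column cardinality as described in the Statistics tab:
# the number of distinct values for a column, with -1 meaning the
# statistics are unknown (never collected). Illustrative only.

def column_cardinality(rows, column):
    """Return the number of distinct values for a column,
    or -1 if statistics were never collected."""
    if rows is None:          # no value supplied, utility not run
        return -1
    return len({row[column] for row in rows})

data = [
    {"NATION_ID": 1, "REGION": "EMEA"},
    {"NATION_ID": 2, "REGION": "EMEA"},
    {"NATION_ID": 3, "REGION": "APAC"},
]

print(column_cardinality(data, "REGION"))  # 2 distinct values
print(column_cardinality(None, "REGION"))  # -1: unknown
```

A cardinality of 0 would mean an empty column, matching the convention in the text.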
Update Button
Update: Opens the Update Statistics window. AIS collects information about tables, indexes, and column cardinalities. The Query Optimizer uses these metadata statistics to find the most efficient way to perform a query across multiple machines.
Figure 6-5 Metadata Statistics Update Window
where:
- Type: The type of statistical information being added.
- Estimated: An estimation of the amount of statistical information returned.
- Estimated with Rows: An estimation of the amount of statistical information returned, including an estimation of the number of rows in the table. Specify the number in the text box. This number is used to shorten the time needed to produce the statistics, assuming that the value specified here is the correct value, or close to it.
Note:
When the number of rows in the table is not provided, the number of rows used is the maximum of the value specified in the tuning DsmMaxBufferSize environment property and the value set in the nRows attribute (specified as part of the metadata for the data source).
- Exact: The exact statistical information is returned. Note that this can be a lengthy task and can lead to disk space problems with large tables.
- Resolution: The level of the statistical information returned. Select from:
  - Default: Only information about the table and indexes is collected. Information for partial indexes and columns is not collected.
  - All Columns and Indexes: Information about the table, indexes, partial indexes, and columns is collected.
  - Select Columns and Indexes: Lets you select the columns and indexes for which you want to collect statistics. In the enabled list of columns and/or indexes, click the columns you want included (you can use Shift+click and Ctrl+click to select a number of columns and/or indexes).
The statistics are updated on the server, enabling work to continue in Attunity Studio. A message is displayed in Attunity Studio when the statistics on the server have been updated.
Modelling Tab
The Modelling tab lets you enter information about the virtual view policy for arrays. These parameters are valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.
The configurations made in this editor are for the selected table only. The same parameters can be configured at the data source level in the data source editor.
Figure 6-6 Data Source Metadata Advanced Tab
- Generate sequential view: Select this to map non-relational files to a single table.
- Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
- Include row number column: Select one of the following:
  - true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
  - false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
  - default: Select default to use the default data source behavior for this parameter.
For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
- Inherit all parent columns: Select one of the following:
  - true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
  - false: Select false so virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
  - default: Select default to use the default data source behavior for this parameter.
For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
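The true/false/default logic for these per-table settings can be sketched as follows. This is an illustrative model, not AIS configuration code; the function and parameter names are hypothetical:

```python
# Sketch of the per-table virtual view policy described above:
# "default" falls back to the data source setting, while "true" and
# "false" override it for this table only. Names are illustrative.

def resolve_flag(table_setting, ds_default):
    """table_setting: "true", "false", or "default"."""
    if table_setting == "default":
        return ds_default
    return table_setting == "true"

def view_columns(array_cols, parent_cols, include_row_num,
                 inherit_parent, ds_row_num=False, ds_inherit=True):
    cols = []
    if resolve_flag(inherit_parent, ds_inherit):
        cols += parent_cols          # inherit all parent columns
    cols += array_cols
    if resolve_flag(include_row_num, ds_row_num):
        cols.append("_ROWNUM")       # include row number column
    return cols

# table overrides: include row number, do not inherit parent columns
print(view_columns(["COURSE"], ["STUDENT_ID"], "true", "false"))
# table defers to the (assumed) data source defaults
print(view_columns(["COURSE"], ["STUDENT_ID"], "default", "default"))
```

The first call shows a table-level override winning over the data source setting; the second shows the default fallback.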
- Adabas C Data Source
- Enscribe Data Source (HP NonStop Only)
- Flat File Data Source
- IMS/DB Data Sources
- RMS Data Source (OpenVMS Only)
- VSAM Data Source (z/OS)
For Importing Procedure Metadata, an import wizard is available for these procedure data sources:
Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata.
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the data source where you are importing the metadata.
2. Expand the Bindings folder and expand the binding with the data source metadata you are working with.
3. Expand the Data Source folder.
4. Right-click the data source that you are working with and select Show Metadata View. The Metadata view opens with the selected data source displayed.
5. Right-click Imports under the data source and select New Import. The New Import dialog box is displayed, as shown in the following figure.
6. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
7. Select the Import type from the list. The import types in the list depend on the data source you are working with.
8. Click Finish. The Metadata import wizard opens with the Get Input Files screen, as described in the Selecting the Input Files step.
2. If the files are on another machine, right-click My FTP Sites and select Add.
3. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, a valid username and password to access the machine. To browse and transfer the files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
4. Select the files to import and click Finish to start the transfer. The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In this case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks.
5. The selected files are displayed in the Get Input Files screen of the wizard, as shown in the following figure.
6. To manipulate table information or the fields in the table, right-click the table and select the option you want. The following options are available:

- Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
- Rename: Rename a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Set the physical location of the data file for the table.
- Set table attributes: Set the table attributes. The table attributes are described in Table Attributes.
- XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.
Applying Filters
This section describes the steps required to apply filters to the COBOL copybook files used to generate the metadata. It continues the Selecting the Input Files step.

To apply filters
1. Click Next. The Apply Filters step is displayed in the editor.
2. Apply filters to the copybooks, as needed. The following COBOL filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested column: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Specifies whether to consider case sensitivity.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified in the Find field with the value specified here.

3. Click Next to go to the Selecting Tables step.
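The column-based filters above can be sketched with a few lines of Python. This is a deliberately simplified illustration, not the AIS import logic: real copybook parsing is more involved, and the actual import replaces hyphens only in record and field names, not across the whole line as this sketch does:

```python
# Rough sketch of the copybook column filters described above
# (simplified; not the actual AIS import implementation).

def apply_filters(line, ignore_first_6=True, ignore_after_72=True,
                  replace_hyphens=True):
    if ignore_after_72:
        line = line[:72]        # drop columns 73 to 80
    if ignore_first_6:
        line = line[6:]         # drop sequence-number columns 1 to 6
    if replace_hyphens:
        # hyphens are invalid in Attunity metadata names; the real
        # import renames only record/field names, not the whole line
        line = line.replace("-", "_")
    return line

raw = "000100 05  ORDER-NUMBER   PIC 9(6).".ljust(72) + "SEQ00010"
print(apply_filters(raw).strip())
```

The sequence number (columns 1 to 6) and the identification area (columns 73 to 80) are stripped, and ORDER-NUMBER becomes ORDER_NUMBER.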
Selecting Tables
This section describes the steps required to select the tables from the COBOL copybooks. The following procedure continues the Applying Filters step.
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The following figure shows the Select Tables screen.

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.

2. Select the tables that you want to access (that require Attunity metadata) and then click Next to go to the Import Manipulation step.
Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables procedure. The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables. You can manipulate the general table data in the Import Manipulation screen.

To manipulate the table metadata
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click the table record marked with a validation error and select the relevant operation. See the Table Manipulation Options table for the available operations.
2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
The upper area of the screen lists the COBOL copybook files and their validation status. The metadata source and location are also listed. The Validation tab in the lower area of the screen displays information about what must be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location). The following operations are available in the Import Manipulation screen:
- Resolving table names, where tables with the same name are generated from different files during the import.
- Selecting the physical location for the data.
- Selecting table attributes.
- Manipulating the fields generated from the COBOL, as follows:
  - Merging sequential fields into one (for simple fields).
  - Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  - Adding, deleting, hiding, or renaming fields.
  - Changing a data type.
  - Setting the field size and scale.
  - Changing the order of the fields.
  - Setting a field as nullable.
  - Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  - Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  - Creating arrays and setting the array dimension.
The following table lists and describes the available operations when you right-click a table entry:
Table 6-7 Table Manipulation Options

- Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.
- Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Sets the physical location of the data file for the table.
- Set table attributes: Sets the table attributes.
- XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.
- Remove: Removes the table record.
You can manipulate the data in the table fields in the Field Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.
You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen. If a toolbar button is available for a task, it is pictured in the table.
Table 6-8 Field Manipulation Screen Commands

General menu:

- Undo: Click to undo the last change made in the Field Manipulation screen.
- Select fixed offset: The offset of a field is usually calculated dynamically by the server at run time, according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can be useful if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at run time for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.
- Test import tables: Select this to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or columns entered at the bottom of the screen.
  - Data file name: Enter the name of the file that contains the data you want to query.
  - Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
  - Define Where Clause: Click Add to select a column to use in a WHERE clause. In the table below, you can add the operator, value, and other information. Click the columns to make the selections. To remove a WHERE clause, select the row with the WHERE clause you want to remove and then click Remove.

  The resulting SQL statement, with any WHERE clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu:

- Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.
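How such a test statement might be assembled can be sketched as follows. This is an illustrative helper only; the statement AIS actually generates, and its quoting rules, may differ:

```python
# Sketch of assembling a test query from the screen's inputs
# (illustrative; not the statement builder AIS itself uses).

def build_test_query(table, columns=None, where=None, limit=100):
    """where: list of (column, operator, value) tuples."""
    cols = ", ".join(columns) if columns else "*"
    sql = f"SELECT {cols} FROM {table}"
    if where:
        preds = " AND ".join(f"{c} {op} {v!r}" for c, op, v in where)
        sql += f" WHERE {preds}"
    # limit mirrors the "Limit query results" option (default 100 rows)
    return sql, limit

sql, limit = build_test_query("NATION", ["NATION_ID", "NAME"],
                              where=[("REGION", "=", "EMEA")])
print(sql)    # SELECT NATION_ID, NAME FROM NATION WHERE REGION = 'EMEA'
print(limit)  # 100
```

Each row added under Define Where Clause contributes one predicate, joined with AND, exactly as the screen shows at the bottom.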
- Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.
- Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field. If the field is hidden, you can select Reveal field.
- Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.
- Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.
- Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field nullable, or clear the check box to make the field not nullable.
- Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point for a data type.
- Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.
Field menu:

- Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open the corresponding screen. Enter the name of the field or child, and click OK to add the field or child to the table.
- Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.
- Move up or down: Select a row and use the arrows to move it up or down in the list.
- Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.
- Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
- Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. The following dialog box opens:
  - First field name: Select the first field in the table to include in the combined field.
  - End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
  - Enter field name: Enter a name for the new combined field.
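The combine operation can be sketched as a list transformation. This is a hypothetical model of the manipulation, not AIS code; the field names and the size-summing rule shown here are illustrative assumptions:

```python
# Sketch of "combining sequential fields": two or more adjacent simple
# fields become one field. The assumption that the combined size is
# the sum of the originals is illustrative, not documented AIS behavior.

def combine_fields(fields, first, last, new_name):
    names = [f["name"] for f in fields]
    i, j = names.index(first), names.index(last)  # must be sequential
    combined = {"name": new_name,
                "size": sum(f["size"] for f in fields[i:j + 1])}
    return fields[:i] + [combined] + fields[j + 1:]

fields = [{"name": "ADDRESS_1", "size": 30},
          {"name": "ADDRESS_2", "size": 30},
          {"name": "ADDRESS_3", "size": 30}]
print(combine_fields(fields, "ADDRESS_1", "ADDRESS_3", "ADDRESS"))
```

The three sequential address fields are replaced by a single ADDRESS field spanning the same bytes.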
- Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. The following screen provides options for flattening.
  - Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
  - Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, when a new field is created in the flattening operation it is called Car Details_Color.
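The two flattening options can be sketched together. This is an illustrative model of the naming behavior only, using the Car Details example from the text; it is not the AIS flattening implementation:

```python
# Sketch of the Flatten Group options: child fields of a group become
# standalone fields, optionally prefixed with the parent name, and
# recursion handles nested groups. Illustrative only.

def flatten_group(parent, children, use_parent_prefix=True,
                  recursive=True):
    out = []
    for child in children:
        name = (f"{parent}_{child['name']}" if use_parent_prefix
                else child["name"])
        if recursive and "children" in child:
            # repeat the flattening process on nested levels
            out += flatten_group(name, child["children"],
                                 use_parent_prefix)
        else:
            out.append(name)
    return out

group = [{"name": "Color"},
         {"name": "Engine", "children": [{"name": "Size"}]}]
print(flatten_group("Car Details", group))
# ['Car Details_Color', 'Car Details_Engine_Size']
```

With Use parent name as prefix selected, the Color child becomes Car Details_Color, matching the example in the text.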
- Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the selector field from the screen that opens.
- Replace variant: Select Replace variant to replace a variant's selector field.
- Counter Field: Select Counter Field to open a screen where you select a field that is the counter for an array dimension.
- Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
    - false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Select default to use the default data source behavior for this parameter.

  For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

  - Inherit all parent columns: Select one of the following:
    - true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Select false so virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Select default to use the default data source behavior for this parameter.

  For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
Specific virtual array view settings per table: Select this to set different values for each table in the import. This will override the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.
Click Finish.
1. In the Design Perspective Configuration View, expand the Machines folder and then expand the machine where you want to add the data source.
2. Expand the Bindings folder and then expand the binding with the adapter metadata you are working with.
3. Expand the Adapters folder and then right-click the adapter for which you want to manage the metadata.
4. Select Show Metadata View from the shortcut menu.

You can create and edit the following adapter metadata properties:

- Adapter Metadata General Properties: Enter and edit information about the adapter, such as the adapter name and the way in which you connect to the adapter. You make these changes in the Design perspective, Metadata view.
- Adapter Metadata Schema Records: The input and output record structure for a record in the adapter definition.
- Adapter Metadata Interactions: Enter details of an interaction. The interaction Advanced tab is displayed for some adapters only, such as the Database adapter, and includes more details about the interaction.
Right-click the interaction that you want to edit, and select Open. The Interaction General Properties editor is displayed.
Table 6-9 Adapter Metadata General Tab

- Version: The schema version.
- Header: A C header file with the data structures for the adapter. This header file is used with the C API for applications.
- Authentication mechanism: The authentication method. Select from:
- Max Request Size: The maximum size, in bytes, for an XML ACX request or response. Larger messages are rejected with an error.
- Max Active connections: The maximum number of machines that can connect to the adapter at a time. By default, this number is unlimited.
- Max Idle timeout: The maximum amount of time without activity before the connection is timed out. By default, this is 600 seconds.
- Adapter Specifications: The maximum number of simultaneous connections for an adapter (per process).
1. In the Design Perspective Metadata View, expand the Adapters folder.
2. Right-click Schema and select New record.
3. Enter a name for the new record definition.
4. Click OK. The Schema Record editor is displayed.
5. Define the field grouping in the Schema Record editor.
6. Click New Field to add new fields for the record, describing the data structure for the adapter.
7. Continue adding fields until the record structure is complete.

- Name: The name of the field.
- Type: The data type of the field. See the Valid Data Types table for a list of the valid data types.
- Length: The size of the field, including a null terminator, when the data type supports null termination (such as the string data type).
Specifications
Defines specific field properties. To display the properties, select the specific field in the Fields list.
The following table describes the valid data types that can be used when defining these specifications in the Schema Record editor.
Table 6-11 Valid Data Types

Binary, Boolean, Byte, Date, Double, Enum, Float, Int, Long, Numeric[(p[,s])], Short, String, Time, Timestamp
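A field definition can be checked against this list with a small sketch. The `check_field` helper is hypothetical, not an AIS API; it also illustrates the rule above that a string length must leave room for the null terminator:

```python
# Sketch validating a schema-record field definition against the
# valid data types listed above (helper names are illustrative).

VALID_TYPES = {"binary", "boolean", "byte", "date", "double", "enum",
               "float", "int", "long", "numeric", "short", "string",
               "time", "timestamp"}

def check_field(name, dtype, length=None):
    if dtype.lower() not in VALID_TYPES:
        raise ValueError(f"{name}: invalid data type {dtype!r}")
    # a string length includes the null terminator, so it must be >= 1
    if dtype.lower() == "string" and (length is None or length < 1):
        raise ValueError(f"{name}: string fields need a length >= 1")
    return {"name": name, "type": dtype, "length": length}

print(check_field("CUSTOMER_NAME", "string", length=31))  # 30 chars + null
print(check_field("ORDER_DATE", "date"))
```

A data type outside the table, such as varchar, would be rejected.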
1. In the Design Perspective Metadata View, expand the Adapters folder.
2. Expand the Schema.
3. Right-click the record you want to edit and select Open. The Schema Record editor for that record is displayed.
4. Add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Schema Records.
1. In the Design Perspective Metadata View, expand the Adapters folder.
2. Right-click the Interactions folder and select New.
3. Enter a name for the new interaction.
4. Click OK. The Interaction editor is displayed.
5. Define the interaction properties and its input and output properties in the Schema Record editor.
6. Make any changes in the editor that you need. The table below describes the Interactions editor.
- sync-send-receive: The interaction sends a request and expects to receive a response.
- sync-receive: The interaction sends a request and does not expect to receive a response.
- async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.
Identifies the input and output records. Specific properties for the interaction. When an Interaction Advanced tab is used, this section is not displayed. The parameters shown are for a Legacy Plug Adapter.
1. In the Design Perspective Metadata View, expand the Adapters folder.
2. Expand the Interactions folder.
3. Right-click the interaction you want to edit and select Open. The Interaction editor for that record is displayed.
4. Add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Interactions.
The fields in this tab depend on the specific adapter. In the screen above, the tab is displayed for the Database adapter and enables building or modifying an SQL statement. For information on how this tab relates to the Database adapter, see Adapter Metadata Schema Records.
1. Right-click Procedures and select New Procedure.
2. Enter a name for the new procedure definition, and click OK. The editor opens with the General properties tabs displayed.
3. Click Procedures to view the defined procedures.
4. Right-click the required procedure (or double-click it) to view or edit its properties. The editor that opens differs depending on which procedure driver you are working with. See the section that describes the driver you are working with for more information:
- CICS Procedure Data Source
- Procedure Data Source (Application Connector)
- Natural/CICS Procedure Data Source (z/OS)
For other procedures, the metadata is created manually, as described in Adapter Metadata Schema Records. You start importing procedure metadata in the same way as Importing Data Source Metadata with the Attunity Import Wizard.
Handling Arrays
This section describes the methods that Attunity Connect uses to handle arrays. It includes the following topics:
Representing Metadata
Before looking at the different methods of handling arrays, you should understand how metadata is represented in Attunity Studio. The following figure shows an example in COBOL that illustrates arrays and nested arrays. When you import this metadata into Attunity Studio, the import process creates an ADD definition that is equivalent to the original structure, usually mapping the fields one-to-one.
The following figure shows how Attunity Studio represents the same data in XML.
On the Columns tab, Attunity Studio represents the same metadata as shown in the following figure.
Finally, if you run the NavSQL > native STUDENT command from the NavSQL utility, the same metadata is represented as shown in the following figure.
- Columnwise Normalization
- Virtual Tables
- Virtual Views
- Sequential Flattening (Bulk Load of Array Data)
- ADO/OLE DB Chapters
- XML
Columnwise Normalization
Some small arrays are merely a shorthand notation for a sequence of simple columns. For example, an address field with three lines on the screen may have been represented in a COBOL copybook by an OCCURS clause that is used three times, as a shorthand notation for writing three columns, Address_1, Address_2, and Address_3.
Figure 7-5 Simple Array in COBOL
This is not a real case of hierarchical dependency of data that needs to be modeled in the relational world. The simplest approach for dealing with this simple class of arrays is to replace them with simple columns during the import process. This process of replacing a small array by a sequence of columns is referred to as columnwise normalization. Columnwise normalization is useful for simple arrays that share any of the following characteristics:
They show a small number of occurrences.
They do not possess a counter.
They contain only one field.
You can choose to normalize an array columnwise in the Import Manipulation panel in Attunity Studio. The following figures show an example.
Figure 7-6 Selecting Columnwise Normalization in the Import Manipulation Panel
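The effect of columnwise normalization can be sketched in Python. This is a hypothetical illustration of the transformation, not AIS code; the function name and naming pattern are assumptions for the example:

```python
def normalize_columnwise(field_name, occurrences):
    """Replace a small OCCURS-style array by a sequence of simple columns."""
    return [f"{field_name}_{i}" for i in range(1, occurrences + 1)]

# An address field declared with OCCURS 3 TIMES becomes three ordinary columns.
columns = normalize_columnwise("Address", 3)
print(columns)  # ['Address_1', 'Address_2', 'Address_3']
```

The array disappears from the relational metadata; the application simply sees three independent columns.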
Virtual Tables
Exposing arrays as virtual tables is a commonly used technique to handle arrays. It generates a virtual table for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. For example, an array called course in a table called student is represented by the virtual table STUDENT_COURSE. Attunity Studio generates, updates, and removes virtual tables automatically, according to the status of their parent table, and names them by appending the array name to the parent name. When an array includes another array, the name of the resulting virtual table consists of the parent name, the array name, and the name of the nested array, as follows:
parentName_arrayName_nestedArrayName
The number of nested-array levels is not limited. Virtual tables include the following columns:
The array member fields from the original structure.
A column called _PARENT, which identifies the parent record. This column is a string representation of the parent record's bookmark.
A column called _ROWNUM, which is the ordinal that identifies the array record within the parent table.
The _PARENT and _ROWNUM columns are generated automatically and cannot be updated. Together, they uniquely identify each row in the virtual table. To identify the array field in the parent record when joining parent and child tables, you can use the _PARENT field of the virtual table. You cannot edit the definition of virtual tables; you can only manipulate the parent table. Attunity Studio indicates virtual tables by using a differently colored icon in the Metadata view, as shown in the following figure.
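A rough Python model of how virtual-table rows could be derived from a parent record may clarify the _PARENT/_ROWNUM scheme. This is an illustrative sketch only; the function, the bookmark format, and the sample data are assumptions, not AIS internals:

```python
def virtual_table_rows(parent_name, array_name, parent_records):
    """Expand an array column into virtual-table rows keyed by _PARENT and _ROWNUM."""
    table_name = f"{parent_name}_{array_name}".upper()
    rows = []
    for bookmark, record in parent_records.items():
        for rownum, member in enumerate(record[array_name], start=1):
            row = dict(member)
            row["_PARENT"] = bookmark   # string form of the parent record's bookmark
            row["_ROWNUM"] = rownum     # 1-based ordinal of the member within the array
            rows.append(row)
    return table_name, rows

students = {"BK0001": {"course": [{"title": "MATH 1"}, {"title": "PHYSICS"}]}}
name, rows = virtual_table_rows("student", "course", students)
print(name)                 # STUDENT_COURSE
print(rows[1]["_ROWNUM"])   # 2
```

Joining the parent bookmark against _PARENT is what reconnects each virtual-table row to its owning record.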
Right-clicking a table in the Metadata view and selecting SQL View opens the SQL view. The following figure shows a sample table that includes the _PARENT and _ROWNUM fields and replaces the entire array ASSIGNMENTS by a varchar(64) field that includes a string representation of the bookmark.
Figure 7-9 SQL View of the Virtual Table STUDENT_COURSE
When you look at an actual record, the same fields (_PARENT, _ROWNUM, and NUMOF_ASSIGNMENTS) are displayed, as shown in the following figure. You can use the ASSIGNMENTS field to join with the STUDENT_COURSE_ASSIGNMENTS virtual table.
The metadata that the ADD stores for the virtual table is only a pointer to the parent table, as indicated by the attribute name, basedOn. It is filled in from the parent table upon first access. You can join the parent and the child by using the courses and _PARENT fields as the join criteria, as shown in the following figure.
Figure 7-11 Joining Parent and Child
Equally, you can join parent, child, and grandchild, as shown in the following figure.
Figure 7-12 Joining Parent, Child, and Grandchild
When joining a table and an array table, AIS reads every physical record only once, thus maximizing the join's efficiency in terms of I/O. The virtual table method supports all DML operations, with the following limitations:
Arrays without counters (DEPENDING ON clauses) support only UPDATE commands.
INSERT commands ignore the _ROWNUM column and add data at the end.
DELETE commands that remove data from the middle of an array move the following elements up and update the counter automatically. Therefore, if you want to delete all the elements from an array, you cannot perform the following loop:

for (i=1; i<=counter; i++)
    delete from array where _PARENT=x and _ROWNUM=i

This loop does the following:
1. It deletes the first member of the array.
2. It moves up the second member of the array so that it becomes the first member.
3. It deletes the second member of the array, which was originally the third member. It does not delete the original second member because that is now the first member of the array.
Alternatively, you can use the following loops:

while (counter>0)
    delete from array where _PARENT=x and _ROWNUM=1

This loop keeps deleting the first member of the array until no member is left to move up. The counter is updated by AIS.

for (i=counter; i>0; i--)
    delete from array where _PARENT=x and _ROWNUM=i

This loop deletes the members of the array from last to first. It is slightly more efficient because it avoids the moving of members.
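The difference between the deletion strategies can be simulated in Python. This is a sketch of the counter-maintenance behavior described above (members shift up after a middle delete), not AIS code; the helper name is an assumption:

```python
def delete_member(array, rownum):
    """Mimic AIS: deleting from the middle moves later members up and updates the counter."""
    del array[rownum - 1]   # _ROWNUM is 1-based

# Broken forward loop: every delete shifts the remaining members up,
# so advancing the index skips every other member.
arr_forward = ["a", "b", "c", "d"]
i = 1
while i <= len(arr_forward):
    delete_member(arr_forward, i)
    i += 1
print(arr_forward)  # ['b', 'd'] - half of the members survive

# Correct: keep deleting the first member until the counter reaches zero.
arr_correct = ["a", "b", "c", "d"]
while len(arr_correct) > 0:
    delete_member(arr_correct, 1)
print(arr_correct)  # []
```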
Note:
Every write operation causes physical I/O on the parent record. If you need to use a lot of DML operations on arrays, consider using the XML approach instead.
Virtual Views
Virtual views are a more recent method of handling arrays in AIS. They eliminate the need for a _PARENT field, replacing it by primary key fields from the parent. This makes the processing of array records easier for most applications. Per binding, you can handle arrays either as virtual tables or as virtual views; you cannot mix the two models in the same binding. However, when you switch between virtual tables and virtual views, you do not need to make any changes to the metadata because they both use the same table definitions in the ADD. During the import procedure, Attunity Studio generates virtual views and names them by appending the array name to the parent name. When an array includes
another array, the name of the resulting virtual view consists of the parent name, the array name, and the name of the nested array, as follows:
parentName_arrayName_nestedArrayName
The number of nested-array levels is not limited. Virtual views include the following columns:
The array member columns from the original structure.
The fields from the parent's first unique key or all parent fields, depending on the selection during the import process. If all parent fields are included in the virtual view, the parent's indexes are available in the view definition and can be used for efficient optimization strategies.
Note: If the view does not include all parent fields, it also includes the primary key fields (if the primary key is not the parent's first unique key).

A column called <array>_ROWNUM, which is the ordinal that identifies the array record within the parent table.
The unique key and <array>_ROWNUM columns are generated automatically. Together, they uniquely identify each row in the virtual view and form a unique key. When working with virtual views, consider the following limitations:
Virtual views are read-only.
Virtual views currently do not support arrays within variants that have a selector field.
Including all parent fields in the virtual view greatly reduces the need for performing join operations because this in itself is an implicit join. However, if you do perform a join, it is not as efficient as a join of virtual tables because AIS reads each record twice, not once. This means that you are better advised to use Virtual Tables if you need to perform complex, explicit joins. In general, though, the query processor can devise efficient access strategies because AIS copies all relevant indexes from the parent to the virtual view. Attunity Studio indicates virtual views by using a differently colored icon in the Metadata view, as shown in Display of Virtual Tables in Attunity Studio.
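The read-only virtual-view shape can be sketched in Python. This is a hypothetical model for illustration; the function, the key-field handling, and the sample data are assumptions, not AIS internals:

```python
def virtual_view_rows(array_name, parent_rows, key_fields):
    """Build virtual-view rows: parent unique-key fields plus <array>_ROWNUM."""
    rows = []
    for parent in parent_rows:
        for rownum, member in enumerate(parent[array_name], start=1):
            row = {k: parent[k] for k in key_fields}   # parent's unique key replaces _PARENT
            row.update(member)                         # array member columns
            row[f"{array_name}_ROWNUM"] = rownum       # ordinal within the parent
            rows.append(row)
    return rows

students = [{"ID": 1, "COURSE": [{"TITLE": "MATH 1"}, {"TITLE": "PHYSICS"}]}]
view = virtual_view_rows("COURSE", students, key_fields=["ID"])
print(view[1])  # {'ID': 1, 'TITLE': 'PHYSICS', 'COURSE_ROWNUM': 2}
```

Because the parent key is embedded in every view row, most lookups need no explicit join back to the parent.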
Sequential Flattening (Bulk Load of Array Data)

The sequentially flattened view of a complex table is referred to as a single table. You can choose to create a single table in Attunity Studio by selecting Sequential View during the Metadata Model Selection step.
Figure 7-13 Selecting a Single Table View
The flattened table is called <table>_ST, where <table> is the name of the parent table and ST indicates a single table. For example, if a parent table is called STUDENT, the single table is called STUDENT_ST. The structure of the single table is identical to the original table's structure, except that AIS removes all array dimensions and adds some control fields. When reading a record, AIS performs a tree traversal of the parent and its array hierarchy. Each record in the resulting recordset deals with a specific array member; other arrays are nulled out, and the _LEVEL control field indicates the current array in the tree traversal's hierarchy. The sequentially flattened single table includes the following columns:
The parent fields, that is, the non-repeating fields.
The array fields for all arrays within the parent.
A column called __LEVEL, which indicates the current child level. The value comprises a concatenation of parent and child names.
A column called __SEQUENCE, which indicates the sequence of the single table row in the physical, non-relational table. A value of 1 indicates a parent record.
For each array, an optional column called <array>_ROWNUM, which identifies the row in the array. This column is generated automatically for the array.
The single table includes the following rows:

A row for each parent record only, without reference to the arrays. All rows with an empty __LEVEL column and a value of 1 in the __SEQUENCE column indicate a parent record only.
A record for each array record.
The following figure shows the metadata that sequential flattening produces for this data source.
Figure 7-15 SQL View of the Single Table's Metadata
The next figure shows the actual single table. It contains a column for each row in the SQL view above.
Figure 7-16 STUDENT_ST with All Parent and Child Records
The figure shows the details of the first course: the first row shows the course itself, and the next three rows show the course assignments. Once the first course has been retrieved, the next course is retrieved with its assignments. Following all the courses, the next array (BOOK) is retrieved.
During a bulk load, the rows are read one by one and written to the appropriate target file for each row, based on the __LEVEL field.
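The depth-first traversal that produces the single-table rows can be sketched in Python. This is a simplified, hypothetical model (the nesting convention `_arrays` and the exact __SEQUENCE numbering are assumptions for the example; AIS also emits nulled-out columns for the non-current arrays, which this sketch omits):

```python
def flatten(parent, arrays, level="", seq=None):
    """Emit one row per parent record and one row per array member, depth first."""
    if seq is None:
        seq = [0]
    seq[0] += 1
    rows = [{"__LEVEL": level, "__SEQUENCE": seq[0], **parent}]
    for name, members in arrays.items():
        for member in members:
            child_level = f"{level}_{name}" if level else name
            nested = member.pop("_arrays", {})  # a member may itself carry nested arrays
            rows += flatten(member, nested, child_level, seq)
    return rows

student = {"ID": 1}
arrays = {"COURSE": [{"TITLE": "MATH 1",
                      "_arrays": {"ASSIGNMENTS": [{"TYPE": "QUIZ"}]}}],
          "BOOK": [{"ISBN": "1234"}]}
rows = flatten(student, arrays)
for row in rows:
    # Levels appear in traversal order: "", COURSE, COURSE_ASSIGNMENTS, BOOK
    print(row["__SEQUENCE"], row["__LEVEL"])
```

A bulk loader can then route each row to a target file by inspecting its __LEVEL value, as described above.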
ADO/OLE DB Chapters
ADO and OLE DB introduced the concept of an embedded rowset, called a chapter, as part of the API. A chapter is a result set embedded in a single column within the primary result set that you can drill down to. This model works well with complex legacy data that includes arrays. AIS supports chapters and lets you update them. From a metadata perspective, the array is represented by a chapter column that contains a chapter identifier. As such, the chapter functions as a handle to the data that it links to. Any ADO-based program, particularly a Visual Basic program such as the Attunity-provided SQL sample utility ChapView, can use chapters to handle arrays. The following figure illustrates how ChapView handles chapters.
A look at a Microsoft Visual Basic code snippet that uses ADO helps to show how chapters are embedded.
Figure 7-18 Visual Basic Code Snippet Using ADO
oRST is the main recordset. For the embedded recordset, oRSTChapter is used. However, both are simple recordset objects.
. . .
The embedded recordset is the value of the COURSE column in the main recordset. From here on, everything is simple Visual Basic recordset syntax.
Note:
Chapters are not supported with virtual views and sequential flattening. However, you can use chapters and virtual views in the same Visual Basic program, as needed.
For more information on chapters, see the OLE DB and ADO documentation.
XML
Forcing non-relational structures that include arrays to fit the relational mould is an endless source of complexity, with many potential efficiency issues. The method of using XML to handle arrays is a way to get around the restrictions of the relational model while maintaining the use of SQL. It makes conventional use of SQL, but instead of specifying a list of columns in the select clause, it transfers the records data to the client application as a BLOB column that contains the XML representation of the data. This method works for both read and write operations. By using XML, you can have a client application write an entire complex structure in a single I/O operation. Approaching the same task with virtual tables would result in multiple I/O operations (typically one I/O operation per array member). To be able to use XML, you need to make sure that the environment property exposeXmlField in the misc section of the binding definition is set to true. By default, this property is set to false. You can use the XML method from any supported client interface, including JDBC, ODBC, and ADO. It is also commonly used in the context of the query or database adapter. If you use a query adapter, the XML documents that result from a select * clause and a select XML clause are almost identical. However, the select XML clause produces a natural XML representation while the select * clause produces a hierarchical rowset representation of the record, as shown in the following figures.
Similarly, you can combine DML with XML to insert a complex record with arrays during a single I/O operation. For example, you could run the following query to add another record to the STUDENT table:
<update sql="insert into navdemo:student(xml) values(?)" executionMode="insertUpdateXml">
  <inputParameter type="xml">
    <STUDENT ID="1" FIRST_NAME="JOHN" LAST_NAME="SMITH" DATE_OF_BIRTH="1984-12-01">
      <COURSES COURSE_ID="1" COURSE_TITLE="MATH 1" INSTRUCTOR_ID="1">
        <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES-1" DUE_DATE="2004-03-14" GRADE="4.0"/>
        <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES" DUE_DATE="2004-03-14" GRADE="4.5"/>
      </COURSES>
      <BOOKS ISBN="1234" RETURN_DATE="2004-05-02"/>
    </STUDENT>
  </inputParameter>
</update>
Making use of XML is recommended when your application environment is Web-based. This method has the lowest overhead and performs better than chapters in Web environments.
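Building the XML document for such a single-I/O insert can be done with any XML library; the following Python sketch uses the standard library. The element and attribute names follow the STUDENT example above, but the code itself is an illustration, not part of AIS:

```python
import xml.etree.ElementTree as ET

# Build the whole STUDENT record, arrays included, as one XML document.
student = ET.Element("STUDENT", ID="1", FIRST_NAME="JOHN", LAST_NAME="SMITH")
course = ET.SubElement(student, "COURSES", COURSE_ID="1", COURSE_TITLE="MATH 1")
ET.SubElement(course, "ASSIGNMENTS", ASSIGNMENT_TYPE="QUIZ", GRADE="4.0")
ET.SubElement(student, "BOOKS", ISBN="1234")

payload = ET.tostring(student, encoding="unicode")
print(payload)
# The payload would then be bound to the single parameter of
# "insert into navdemo:student(xml) values(?)" and written in one I/O operation.
```

The entire hierarchy travels as one BLOB parameter, which is what makes the single-write behavior possible.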
8
Using SQL
This section contains the following topics:
Overview of Using SQL
Batching SQL Statements
Hierarchical Queries
Copying Data From One Table to Another
Passthru SQL
Writing Queries Using SQL
Locking Considerations
Managing the Execution of Queries over Large Tables
Overview of Using SQL

You can use either of the following types of SQL:

SQL specific to the data source you want to access.
Attunity SQL, which is based on standard ANSI 92 SQL. The full syntax for Attunity SQL is described in Attunity SQL Syntax. Attunity SQL incorporates enhancements, including SQL access to data sources that do not natively support SQL.
Whatever version of SQL you use, you can always use the Attunity SQL extensions to incorporate additional features. For information about customizing the way the SQL is processed, based on the data source being accessed, see Using the Attunity Connect Syntax File ([Link]). You can test SQL interactively to ensure the correct results, using NAV_UTIL EXECUTE.
Parameters are passed as a group for all the queries in the same order as they appear in the individual queries.
You cannot use this syntax to batch SQL statements from a Java application or via the NAV_UTIL EXECUTE utility or ADO Demo application supplied with Attunity Connect.
ADO Notes
Set the Multiple Results connection property to enable multiple results (before executing the compound query):
[Link]("Multiple Results") = 1
The results of the first query are displayed. To see the results of the next query, request NextRecordSet.
Hierarchical Queries
This section contains the following topics:
Generating Hierarchical Results Using SQL
Accessing Hierarchical Data Using SQL
Flattening Hierarchical Data Using SQL
Using Virtual Tables to Represent Hierarchical Data
Hierarchical Queries From an Application
A hierarchical query is a query whose result is a hierarchy of rowsets linked by chapters (see Chapter), reflecting parent-child relationships. For example, Customers and Orders may constitute a hierarchical rowset, with each chapter of the child Orders rowset corresponding to all of the orders of one customer in the parent Customers rowset. Rowsets with arrays of structures as columns (which are supported by certain providers) are modeled such that the rows of an array constitute the children of a column in the containing parent row. Hierarchical queries enable you to do the following:
Arrange rowsets resulting from a query in a hierarchy, reflecting a parent-child relationship. Use nested SELECT statements to do this.
Manipulate data that is stored hierarchically in a data source (such as information stored in arrays in RMS). Currently, arrays are supported in the Adabas, CISAM, DISAM, DBMS, Enscribe, RMS, and VSAM drivers. You can handle this type of data in the following ways:
By including the hierarchical data as chapters, reflecting a parent-child relationship.
By flattening the hierarchical data (see Flattening Hierarchical Data Using SQL).
By using virtual tables to represent the data (see Using Virtual Tables to Represent Hierarchical Data). You can use virtual driver columns in a DBMS database to produce a chaptered result.
See Hierarchical Queries From an Application to see how the SQL is incorporated in an application.
8-2 AIS User Guide and Reference
Chapter
A chapter is a group of rows within a hierarchy; the chapter constitutes a collection of children of some row and column in a parent rowset. The column in the parent rowset is called a chapter column and contains a chapter identifier. The name of the column is also the name identifying the child rowset (which is meaningful only in the context of the parent rowset).
Example
The following hierarchical query produces a child rowset:
SELECT C_name,
  {SELECT O_orderkey,
    {SELECT L_partkey, L_linenumber
     FROM lineitem
     WHERE L_orderkey = O_orderkey} AS items
   FROM torder
   WHERE O_custkey = C_custkey} AS orders
FROM customer
The main (root) rowset has two columns. The second column (orders) is a chapter. The result has a three-tier hierarchical structure as displayed in this figure.
Figure 8-1 Hierarchical SQL Query Producing Child Rowset
In the SQL Utility, click a field in the second column ([CHAPTER]) to display the contents of the chapter. The second column can be opened to display another (child) rowset. This child rowset includes items, a chaptered column that can be opened to display another child rowset (showing the L_partkey and L_linenumber columns for the opened chapter) as displayed in this figure:
Figure 8-2 Hierarchical SQL Query Producing Multiple Child Rowsets
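Conceptually, a chapter column is a handle to a child rowset. The three-tier result of the query above can be modeled in Python as nested structures; this is a hypothetical illustration of the shape of the result, not an AIS API:

```python
# Each customer row carries an "orders" chapter; each order carries an "items" chapter.
customers = [
    {"C_name": "ACME",
     "orders": [                      # chapter column of the root rowset
         {"O_orderkey": 10,
          "items": [                  # nested chapter column
              {"L_partkey": 7, "L_linenumber": 1}]}]},
]

# Drilling down means following the chapter handle of one parent row at a time.
first_orders = customers[0]["orders"]
first_items = first_orders[0]["items"]
print(first_items[0]["L_partkey"])  # 7
```

This mirrors the drill-down behavior described above: you open the chapter of one parent row, work with that child rowset, then move to the next parent row.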
You can display chapters for only one parent row at a time. For example, you can display the set of orders for only one customer at a time. You can perform drill-down operations from an ADO-, ODBC-, or JDBC-based application. For details, see Hierarchical Queries From an Application.
For example, the following hierarchical query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee:
SELECT emp_id,(SELECT name,age FROM e->hchild) FROM disam:emp_struct e
For more details about nesting a SELECT statement in the FROM clause of another SELECT statement, refer to Flattening Hierarchical Data Using SQL. Without an alias the query lists for each employee all of the children of all of the employees:
SELECT emp_id,(SELECT name,age FROM disam:emp_struct->hchild) FROM disam:emp_struct
The chaptered data is specified as part of the source syntax of the FROM clause. You can use chaptered data:
In an outer SELECT statement to flatten the chaptered data.
Anywhere a FROM clause is used.
Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-1 All Employees' Children
Access to the hchild rowsets is via the parent rowset emp_struct. Apart from this access, the employee data is not required by the query.
When child rowsets can be accessed as normal tables, it is generally more efficient to access them directly. The presumption in the previous example is that the child information is accessible only through a parent rowset (such as emp_struct). If both the father and the mother of a particular child are employees, this child will appear in two different chapters (once through the father rowset and again
through the mother rowset). This query may therefore return the information for this child twice. If this behavior is undesirable, use the SELECT DISTINCT * syntax.
Example 8-2 Children's Schools
The following query counts the number of distinct schools where the children of all employees studied:
SELECT COUNT(DISTINCT school_name) FROM emp_struct->hchild->hschool
This example illustrates a two-level hierarchy, with the child rowset hchild itself serving as a parent of the child rowset hschool.
Example 8-3 Employees with Children
The following query retrieves information about employees who have at least one child:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild)
Note that E->hchild refers to the children of the given employee, not of all employees, because the nested SELECT clause refers to the same rowset as the outer SELECT clause. The emp_struct rowset appears in the same role in both the outer and the inner query, with the name E (whose use is mandatory).
Example 8-4 School Age Children
The following query retrieves information about all children who have attended at least one school:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)
The same scope rules as those of example 3 apply: the nested subquery SELECT * FROM C->hschool is executed for the given row of hchild and serves to filter out the children who have not attended any school.
Example 8-5 Multiple Hierarchies
To better understand the scope issues when multiple hierarchies are involved, consider the following queries:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild->hschool)
and:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)
The nested subquery looks similar in both of these queries, but in the first query the outer rowset that defines the context is the emp_struct row, while in the second query it is the row of the child of an employee. Thus, the first query specifies employees such that at least one of the employees children went to at least one school, and the scope of the subquery is all schools of all children of this employee. The second query specifies all children that went to school, and the scope of the subquery is all of the schools of this child.
Flattening Hierarchical Data Using SQL

To flatten hierarchical data, nest a SELECT statement that references the child rowset within the SELECT list of the parent query, and use parentheses to delimit the nesting. This is equivalent to specifying a left outer join (LOJ) between the parent rowset and a child rowset, and the resulting rowset reproduces each row of the parent, combining it with one row for each of its children. The nested SELECT statement can reference a child rowset (using the parent->child syntax) only in its FROM clause.
Using an Alias
In order to list the hierarchical data with the parent data only, you must use an alias for the child data. Without an alias the query lists for each parent row all of the children of all of the parent rows. Compare the following queries:
SELECT emp_id, (SELECT name FROM employee->hchild) FROM employee
and
SELECT emp_id, (SELECT name FROM e->hchild) FROM employees e
The first query, without an alias, lists for each employee all of the children of all of the employees. The second query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee.
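The scoping difference can be mimicked in Python, with lists standing in for the parent rowset and its hchild chapter. This is a hypothetical analogy, not AIS code:

```python
employees = [
    {"emp_id": 1, "hchild": [{"name": "Ann"}]},
    {"emp_id": 2, "hchild": [{"name": "Bob"}, {"name": "Cid"}]},
]

# Without an alias: the nested select ranges over ALL employees' children,
# so every parent row would be paired with the full list.
all_children = [c["name"] for e in employees for c in e["hchild"]]

# With an alias: the nested select is correlated to the current employee row,
# so each parent row sees only its own children.
per_employee = [(e["emp_id"], [c["name"] for c in e["hchild"]]) for e in employees]

print(all_children)   # ['Ann', 'Bob', 'Cid']
print(per_employee)   # [(1, ['Ann']), (2, ['Bob', 'Cid'])]
```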
Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-6 Number of Children in School Per City
The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city, (SELECT COUNT(*) FROM emp_struct->hchild->hschool A WHERE [Link] = [Link]) FROM cities B
This query demonstrates the use of a nested query within a SELECT statement, using tables and not aliases.
Example 8-7 Employee and Child Information
The following query retrieves the employee ID, address, the number and names of children of each employee:
SELECT emp_id, child_counter, (SELECT name FROM E->hchild), Address FROM emp_struct E
Employees who have no children (such as employee 1122 in the example) appear and the corresponding child row is Null (and the output is like that of a left outer join). When more than one child rowset is specified in the query (by including more than one nested SELECT statement), the parent row is reproduced a sufficient number of times to accommodate the largest number of the parents' children, and the data from all children appears in parallel. Thus, child rows are paired randomly, resulting in side-by-side columns for the child rowsets. When one of the child rowsets is out of rows, its columns are padded with NULLs. See example 3, below.
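The side-by-side pairing and NULL padding of multiple child rowsets resembles itertools.zip_longest. The following Python sketch is a hypothetical illustration of that padding rule, not AIS code:

```python
from itertools import zip_longest

salaries = [1000, 1100]           # first child rowset (two rows)
children = ["Ann", "Bob", "Cid"]  # second child rowset (three rows)

# The parent row repeats enough times for the largest child rowset;
# the shorter rowset's columns are padded with NULLs (None).
paired = list(zip_longest(salaries, children, fillvalue=None))
print(paired)  # [(1000, 'Ann'), (1100, 'Bob'), (None, 'Cid')]
```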
Example 8-8 Employee's Salary and Child Information
For each employee, the following query retrieves the employee ID, the number of the employee's children, the first two salaries of the year, and the names of the children:
SELECT emp_id, child_counter,
  (SELECT sal FROM E->Sal WHERE month IN ('JAN', 'FEB')),
  (SELECT name FROM E->hchild)
FROM emp_struct E
Note:
When multiple levels of hierarchies are involved, the flattening operation is repeated by nesting a SELECT statement inside another nested SELECT statement. Conceptually a left outer join is performed between the parent and child rowsets as above, and then another left outer join is applied to the result and to the grandchild rowset. See Example 4, below.
The following query retrieves the employee ID, address, the number and names of the children of each employee and the number of different schools at which each child studied:
SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E
In this example the query at the deepest level of nesting is an aggregate (COUNT DISTINCT), which returns a single value. A request for the names and addresses of the schools would result in the child information being repeated for each school where the child studied.
Example 8-10 Multiple Hierarchical SQL Query
To order the retrieved information according to the employee ID and the child name, use a query like the following:
SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E
ORDER BY emp_id, name
Example 8-11
To order the rowset according to the child name and the number of distinct schools where the child studied, use a query like the following:
SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E
ORDER BY 3, 4
You cannot specify ORDER BY inside a nested query. Ordering of the result rowset is done in the main query only. Columns can be referenced by name or by ordinal number. Ordinal numbers are determined by the order the columns appear in the result set.
Example 8-12 Multiple Hierarchical SQL Query
The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city, (SELECT COUNT(*) FROM employees->hchild->hschool A WHERE [Link] = [Link]) FROM cities B
Using Virtual Tables to Represent Hierarchical Data

A virtual table is created for each array in the parent record. Each virtual table includes the following columns:

The array fields.
A column called _parent, which contains the bookmark of the parent field. This column is generated automatically for the array and is read-only.
A column called _rownum, which identifies the row in the virtual table. This column is generated automatically for the array and is read-only.
The fields _parent and _rownum together uniquely identify each row in the virtual table. Use the _parent field of the virtual table to identify the array field in the parent record when joining parent and child tables.

Example

A record NATION stores data about countries for a mail order application and includes an array structure REGIONS. The array structure is converted into a virtual table NATION_REGIONS. The following SQL accesses data from the virtual table NATION_REGIONS:
SELECT * FROM NATION N, NATION_REGIONS R WHERE R._PARENT = [Link]
Once a virtual table has been created, it can be used in applications in the same way as any other table. Note that this SQL statement accessing a virtual table is equivalent to the SQL statement using the -> syntax to access the array data in the table:
SELECT * FROM NATION->REGIONS
Right-click the data source in Attunity Studio and select Edit Data Source. On the Advanced tab, set the Use runtime columnwise flattening property. This property flattens the array within the table by expanding the array into separate fields, with each row in the array displayed with the row number appended to the field name.
An array with the Use runtime columnwise flattening property set is read-only.
Drill-down Operations in an ADO Application
Drill-down Operations in an ODBC Application
ODBC Drill-down Operations Using RDO
ODBC Drill-down Operations Using C
Drill-down Operations in a Java Application
You can use hierarchical queries specified with Attunity SQL to execute drill-down operations on data within the application. How the hierarchical query for drill-down operations is implemented varies according to the applications API:
For ADO applications: Use the standard ADO methods and properties (see Drill-down Operations in an ADO Application, below).
For ODBC applications: Use functionality provided with Attunity Connect (see Drill-down Operations in an ODBC Application).
For Java: Use standard Java (see Drill-down Operations in a Java Application).
' Set active connection
oConn.ConnectionString = "Provider=AttunityConnect;"
oConn.Open
sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"
' Execute SQL with forward only cursor and read only
oRST.Open sSQL, oConn, adOpenForwardOnly, adLockReadOnly
While Not oRST.EOF
    ' Get the Customers chapter
    Set oChild = oRST("Customers").Value
    While Not oChild.EOF
        ' Code that manipulates the chapter resultset
        oChild.MoveNext
    Wend
    ' Release chapter
    Set oChild = Nothing
    oRST.MoveNext
Wend
Set oRST = Nothing
Set oConn = Nothing
ODBC Drill-down Operations Using RDO: In RDO, you use standard methods and properties for rdoConnection, rdoQuery, rdoResultset, and rdoColumn objects and their collections.
ODBC Drill-down Operations Using C: In C, you use standard ODBC APIs with standard arguments.
Additionally, a number of functions are provided that you can incorporate in an application in order to utilize hierarchical queries. These functions use new descriptor types that have been added to the ODBC API SQLColAttributes: SQL_COLUMN_IS_CHAPTER_ and SQL_COLUMN_ORDINAL_.
To determine which column is a chapter, use the descriptor type SQL_COLUMN_IS_CHAPTER_.
To determine the ordinal position for any column, use the descriptor type SQL_COLUMN_ORDINAL_.
GetChapterInfo: Returns chapter information. For details, see GetChapter (RDO).
OpenEmbeddedRowset: Opens an embedded rowset for a parent chapter column. For details, see OpenEmbeddedRowset (RDO).
Dim rsChild As rdoResultset    ' Resultset for chapter
Dim quChild As New rdoQuery    ' Query for chapter (required by OpenEmbeddedRowset)
Dim IsChapterCol() As Boolean  ' An array which indicates that a specific column is a chapter
Dim cChaptCols As Integer      ' Number of chapter columns
Dim sCursorName As String      ' Cursor Name
' Set active connect, Attunity-Demo is the DSN
Set oConn = rdoEnvironments(0).OpenConnection("Attunity-Demo", rdDriverNoPrompt, False, "UID=;PWD=;")
' SQL statement
sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"
' Execute SQL with forward only cursor and read only
Set oRST = oConn.OpenResultset(sSQL)
' Redims according to the number of columns
ReDim IsChapterCol(oRST.rdoColumns.Count - 1)
' Calling Attunity Connect help routine to set the necessary info
GetChapterInfo oRST, sCursorName, IsChapterCol(), cChaptCols
While Not oRST.EOF
    ' Open Customers chapter - lock type of child is derived from parent
    OpenEmbeddedRowset sCursorName, oRST("Customers"), oConn, quChild, rsChild, oRST.LockType
    If rsChild Is Nothing Then
        ' Failed to open chapter
        MsgBox "Failed to open the chapter!"
        Exit Sub
    End If
    While Not rsChild.EOF
        ' Code that manipulates the chapter resultset
        rsChild.MoveNext
    Wend
    ' Release chapter
    Set rsChild = Nothing
    oRST.MoveNext
Wend
Set oRST = Nothing
Set oConn = Nothing
Notes:
Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets. To identify a chapter column, use the Attunity Connect RDO function GetChapterInfo. If cChaptCols > 0 is returned by GetChapterInfo, the parent resultset includes chapter columns. The information about child rdoResultset objects and parent chapter columns is set in a special array. Go to the needed row in the parent rdoResultset by using the MoveFirst, MoveNext, MovePrevious, or MoveLast methods. Open the parent chapter column using the OpenEmbeddedRowset function.
If you want to use the Requery method on the child rdoResultset objects, save information about the embedded rowsets in an array. Use a user structure similar to the following:

Type TypeChildResultset
    clParentOrdinal As Integer  ' Ordinal position of parent chapter column
    rsChild As rdoResultset
    quChild As New rdoQuery
End Type

You need to save only the ordinal position of a parent chapter column. You can refer to any rdoColumn object using the following syntax: oRST.rdoColumns(clParentOrdinal - 1). Close all child rdoResultset objects before closing the parent rdoResultset object.
GetChapterInfo (RDO)

GetChapterInfo returns the following chapter information: the number of chapter columns in a parent resultset, the ordinal positions of these columns, and the cursor name of the parent statement (in order to bind the parent column with child rowsets).
Syntax
GetChapterInfo rsParent, sCursorName, isChapter(), cChaptCols
where
rsParent (input): A parent rdoResultset object. sCursorName (output): A string containing the parent cursor name. isChapter() (output): An array of long isChapter flags, one for each column of the parent resultset. The flag is set to 1 for a chapter column and unset for any other column. You can use the isChapter array with each column of the parent rdoResultset to determine whether the current column is a chapter. The isChapter flag is set by calling the ODBC API SQLColAttributes, with the additional descriptor type SQL_COLUMN_IS_CHAPTER_, for each column of the parent rdoResultset. If a chapter column exists, the cursor name is retrieved by calling the ODBC API SQLGetCursorName.
cChaptCols (output): A long value returns the number of chapter columns in the parent resultset.
In its pfDesc argument, SQLColAttributes returns 1 for chapter columns and 0 for non-chapter columns.
OpenEmbeddedRowset (RDO)

OpenEmbeddedRowset opens an embedded rowset for a parent chapter column. This method does not return data from the embedded rowset.

Syntax: OpenEmbeddedRowset sCursorName, clParent, cn, quChild, rsChild

where
sCursorName (input): A string with the parent cursor name. clParent (input): An rdoColumn object representing the parent chapter column. cn (input): An rdoConnection object. quChild (input/output): A child rdoQuery object. The first time the chapter is opened for this parent column, quChild is NULL. rsChild (input/output): A child rdoResultset object. The first time the chapter is opened for this parent column, rsChild is NULL.
For z/OS Platforms: The sample program is in [Link], where NAVROOT is the high-level qualifier of the Attunity Server installation. When handling chapter data, note the following:
Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets, using the Attunity-provided function GetChapterInfo. GetChapterInfo returns 1 when the parent resultset includes chapter columns. Open each chapter using the Attunity OpenEmbeddedRowset function. When you call this function with the prepared child statement, the function changes only the active chapter bookmark and calls SQLExecute.
GetChapterInfo (C)
GetChapterInfo returns the following chapter information:
The number of chapter columns in a parent resultset. The ordinal positions of these columns. The cursor name of the parent statement (in order to bind it with child rowsets).
Syntax
int GetChapterInfo(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt, UCHAR* szCursorName, COLDESC *pColDesc, SWORD cCols, long *cChapters)
where:
henv (input): The ODBC Environment handle. hdbc (input): The ODBC Connection handle. hstmt (input): The ODBC SQL handle. szCursorName (input/output): The parent cursor name. Allocate szCursorName before calling GetChapterInfo. pColDesc (input/output): An array of descriptor information for all columns. cCols (input): The number of columns in the resultset. cChapters (output): The number of chapter columns in the parent resultset.
To save standard and additional descriptor information for each column, use the Attunity user structure stColumnDescriptor (see stColumnDescriptor User Structure (C)).
OpenEmbeddedRowset (C)

OpenEmbeddedRowset opens a chapter (embedded rowset). This function does not return data from the embedded rowset.

Syntax: int OpenEmbeddedRowset(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmtParent, UCHAR* szParentCursorName, COLDESC *pOneColDesc)

where:
henv (input): The ODBC Environment handle. hdbc (input): The ODBC Connection handle. hstmtParent (input): The ODBC SQL handle of the parent statement. szParentCursorName (input): The parent cursor name. pOneColDesc (input/output): The pointer to the descriptor information of the parent chapter column.
This code checks both the column type and the column type name. In the code you need to check only one of them. To retrieve a chapter column, call getObject and cast it to a ResultSet as follows:
ResultSet rsChild = (ResultSet)rs.getObject(column);
Because of the need to cast the returned object, first make sure that the column is a chapter column.
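As a minimal sketch of this check-then-cast pattern: the helper below assumes, for illustration only, that the driver reports a distinguishing type name for chapter columns. The literal "CHAPTER" and the class name ChapterCheck are illustrative assumptions, not documented AIS values.

```java
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

class ChapterCheck {
    // A column is treated as a chapter when its reported type name says so;
    // the literal "CHAPTER" is an assumption for illustration.
    static boolean isChapterColumn(String typeName) {
        return typeName != null && typeName.equalsIgnoreCase("CHAPTER");
    }

    // Cast getObject's result to a ResultSet only after the check succeeds.
    static ResultSet openChapter(ResultSet rs, int column) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        if (!isChapterColumn(md.getColumnTypeName(column))) {
            throw new SQLException("Column " + column + " is not a chapter");
        }
        return (ResultSet) rs.getObject(column);
    }
}
```

Checking the metadata before the cast avoids a ClassCastException when the column holds ordinary scalar data.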
Passthru SQL
SQL UPDATE, INSERT, DELETE, DDL, and SELECT statements can be passed directly to a relational data source, instead of being processed by the Query Processor. A data retrieval query can include joins where the data from one or more of the data sources is processed directly by the data source instead of the Query Processor. This is particularly useful when dealing with a data source whose commands are not in standard SQL and whose data you want to join with data from other data sources.

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.

For statements that do not return rowsets, you can pass SQL directly to the data source in one of the following ways:
For a Specific SQL Statement: For a specific SQL statement from within the application. For all SQL During a Session: For all SQL during a session from within the application.
All SQL statements (both statements that do not return rowsets and statements that do return rowsets) can be passed as part of the SQL itself. This is particularly useful when dealing with a non-SQL data source, whose data you want to join with other SQL data. Passthru queries are not supported for complex objects, such as BLOBs.
Via ADO
To enable passthru SQL for a specific statement, use the Command object and pass it the Operating_Mode parameter set to Passthru. All SQL issued through this command bypasses the Query Processor. To channel SQL through the Query Processor again, reset the Operating_Mode parameter to NULL. Example The following example code shows an ADO connection to an Oracle database. Using the Command object, individual lines of SQL can be passed directly to the database. You specify the command with the Operating_Mode parameter set to Passthru. All SQL subsequently bypasses the Query Processor until the Operating_Mode parameter is reset to NULL.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex2()
Dim rs As New ADODB.Recordset
Dim conn As New ADODB.Connection
Dim conn1 As New ADODB.Connection
Dim cmd As New ADODB.Command
Dim cmd1 As New ADODB.Command

' --------------------------------------
' An example of using a Passthru Command
' --------------------------------------
conn1.ConnectionString = "Provider=AttunityConnect"
conn1.Open
conn1.DefaultDatabase = "Oracle"
cmd1.ActiveConnection = conn1
cmd1.Properties("Operating_Mode") = "Passthru"
cmd1.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
cmd1.Execute
' -------------------------------------------
' Resetting the Passthru mode for the command
' -------------------------------------------
cmd1.Properties("Operating_Mode") = ""
Set cmd1 = Nothing
conn1.Close
Exit Sub
End Sub
Via ADO/OLE DB
Use the Connection object and pass it the Operating_Mode parameter set to Passthru in order to enable passthru SQL. All SQL during this connection bypasses the Query Processor. Example The following example code shows an ADO connection to an Oracle database. The connection is specified with the Operating_Mode parameter set to Passthru.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex1()
Dim rs As New ADODB.Recordset
Dim conn As New ADODB.Connection
Dim conn1 As New ADODB.Connection
Dim cmd As New ADODB.Command
Dim cmd1 As New ADODB.Command

' -----------------------------------------
' An example of using a Passthru Connection
' -----------------------------------------
conn.ConnectionString = "Provider=AttunityConnect; Operating_Mode=Passthru"
conn.Open
conn.DefaultDatabase = "Oracle"
cmd.ActiveConnection = conn
cmd.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
cmd.Execute
Set cmd = Nothing
conn.Close
Via ODBC
Pass connection information, which includes the passthru parameter, to the SQLDriverConnect method. For Windows Platforms: You can also use the Microsoft ODBC Data Source Administrator with the Attunity ODBC connection wizard: create or edit a DSN, then select the Batch update passthru check box in the Remote Server Binding page.
Example
ConnectString = "DRIVER=Attunity Connect Driver; DefTdpName=ORACLE;Binding=nav;Passthru=1;"
where disam and disam1 are data sources defined in the binding configuration. The query to the disam1 database is passed directly to the database, bypassing the Query Processor. Note the use of parameters in the example. You can also specify parameters in a non-returning rowset query (see Passthru Query Statements (bypassing Query Processing) for the syntax). Standard ANSI 92 SQL has been extended so that expressions involving columns of tables that appeared previously in the FROM list can be used (such as from the nation table in the above example).

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.
You cannot use reserved keywords in SQL for table and column names (see Reserved Keywords). The table below displays SQL query size limitations:
Table 8-1 SQL Query Size Limitations

Limitation                                        Maximum Length
Length of an identifier (table or column name)    64
Length of a string (see note 1)
Number of parameters in a query
Level of nesting of function calls
Level of nesting of nested queries
Length of a LIKE mask operand
Length of a comment line                          350

Note 1: The length can be modified in one of the following ways: by specifying a value for the tokenSize parameter within the <queryProcessor> group in the Attunity Server environment, or by specifying a question mark (?) instead of the string; when prompted for the data type of the value, specify C (for cstring).
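For illustration, the tokenSize setting might appear in the environment configuration as follows; the exact placement of the element and the value 500 are assumptions, not documented defaults.

```xml
<!-- Hypothetical fragment: raising the maximum string length via the
     tokenSize parameter of the queryProcessor group. -->
<queryProcessor tokenSize="500"/>
```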
Comments can be included as part of the SQL. Comments are bounded by /* and */. If a comment is greater than the limit of 350 characters, break it up over a number of lines. Quotation marks ("") or square brackets ([]) can be used to quote identifiers such as table names and column names.
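The comment and identifier-quoting rules above can be sketched as small helper functions; SqlText is an illustrative name, not part of the product.

```java
class SqlText {
    // Quote an identifier (such as a table or column name) with quotation marks.
    static String quoteIdentifier(String name) {
        return "\"" + name + "\"";
    }

    // Wrap comment text in the /* and */ delimiters the SQL accepts.
    static String comment(String text) {
        return "/* " + text + " */";
    }
}
```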
Use views and stored query procedures (CREATE VIEW Statement and CREATE PROCEDURE Statement). Use forward-only and read-only cursors. Batch SQL statements together in a single statement. For details, see below.
Use the HINT clause, specifying the optimization strategy you want used for the query (see Attunity SQL Syntax).
You can test connections and SQL interactively using NAV_UTIL EXECUTE.
Locking Considerations
This section includes the following topics:
Locking Modes
SQL UPDATE and DELETE statements are automatically executed in pessimistic locking mode. With SQL SELECT statements, the following locking modes are supported:
With chapters, child rowsets cannot be updated if the parent rowset is locked. Refer to the specific driver for additional locking information.
Optimistic Locking
With optimistic locking, records are locked just before an update operation. Before locking the row, Attunity Connect checks that another user hasn't changed the specified data. Optimistic locking has the following advantages over pessimistic locking:
Pessimistic Locking
With pessimistic locking, records are locked as they are read. Pessimistic locking is slower than optimistic locking. When connecting to data via Microsoft Jet or SQL Server, you must open a transaction before issuing the query.
No Locking
With no locking, records are not locked and are read-only.
SQL_CONCUR_VALUES: Optimistic locking (the locking itself is done during SQLSetPos if fLock = SQL_LOCK_EXCLUSIVE is specified). SQL_CONCUR_ROWVER: This is treated as if you specified SQL_CONCUR_VALUES.
SQL_CONCUR_LOCK: Pessimistic locking. SQL_CONCUR_READ_ONLY: Read-only mode. This is the default value.
adLockReadOnly (default): Read-only mode. adLockPessimistic: Pessimistic locking. adLockOptimistic: Optimistic locking. adLockBatchOptimistic: Optimistic batch updates. Required for batch update mode as opposed to immediate update mode.
These locking values should be used with ADO version 2.1 and higher.
Max Number of Rows in a Table That Can Be Read: This parameter restricts the number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error. Max Number of Rows Allowed in a Table Before Scan is Rejected: This parameter restricts the number of table rows that can be scanned. This parameter affects the query both during query optimization and during execution, as follows.
During query optimization: The value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy). During query execution: A scan is limited to the value set. When the number of rows scanned exceeds the number stated, the query returns an error.
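The two governing checks can be sketched as follows; RowGovernor and its message wording are illustrative, not AIS internals.

```java
class RowGovernor {
    private final int maxRowsRead;  // "Max Number of Rows ... That Can Be Read"
    private final int maxScanRows;  // "Max Number of Rows ... Before Scan is Rejected"

    RowGovernor(int maxRowsRead, int maxScanRows) {
        this.maxRowsRead = maxRowsRead;
        this.maxScanRows = maxScanRows;
    }

    // Optimization time: a scan strategy is considered only when the table
    // cardinality does not exceed the scan limit.
    boolean scanAllowed(long cardinality) {
        return cardinality <= maxScanRows;
    }

    // Execution time: reading past the limit raises an error.
    void checkRowsRead(long rowsRead) {
        if (rowsRead > maxRowsRead)
            throw new IllegalStateException("read limit exceeded");
    }
}
```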
You must refresh the daemon as well as reload the configuration after changing values in the WS Governing tab.
Property
Limitations
To optimize a query and select the largest part of it that can be sent to the backend, the optimizer reorders the joined tables. When reordering the tables, the optimizer follows these outer join rules to maintain outer join semantics:
Tables on the right side of a left outer join cannot be moved up the LOJ or to the LOJ's left side. Tables from the left side of a left outer join that are not included in the outer join condition can be moved and joined to the outer join result. Tables from the left side of a left outer join that are included in the outer join condition cannot be joined to the outer join result.
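The three rules collapse into a single predicate; this toy model is for illustration and is not the optimizer's actual code.

```java
class LojReorder {
    // May a table be moved and joined to the outer-join result?
    //  - right-side tables: never (rule 1)
    //  - left-side tables in the outer-join condition: no (rule 3)
    //  - left-side tables not in the outer-join condition: yes (rule 2)
    static boolean canJoinToResult(boolean onRightSide, boolean inJoinCondition) {
        return !onRightSide && !inJoinCondition;
    }
}
```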
Query Optimization
The query optimizer attempts to send as much of a query as possible to the backend database instead of processing it in the query processor. This is easily achieved for joins executed on tables from the same database. Attunity Connect can execute outer joins on two or more databases; however, these queries must be executed by the Attunity query processor.
Property
The noLojDelegation property is set to false by default, which means that the optimizer tries to delegate outer joins to the backend database. When this property is set to true, outer joins are always performed by the Attunity query processor, even if parts of the LOJ could be delegated to the backend. This is helpful for troubleshooting problems with outer join optimization or for preserving legacy optimization behavior.
1. From the Configuration view on the left side, expand the machine folder.
2. Expand the machine with the binding that has the optimizer settings you want to change.
3. Right-click the binding that has the optimizer settings you want to change and select Edit Binding. The Binding editor opens on the right of the screen with the Properties tab open.
4. From the Environment Properties list, expand Optimizer.
5. Click the right side of the Value column for the noLojDelegation property and select true from the drop-down list.
6. Save the changes.
9
Working with Web Services
This section contains the following topics:
Web Services Overview Preparing to use Web Services Deploying an Adapter as a Web Service Undeploying Web Services Viewing Web Services Logging Web Service Activities
Web Services Prerequisites Setting up Attunity Studio to Work with Web Services
Apache Tomcat Web server, version 4.1 The [Link] file JDK 1.4. You must make sure that this full Java SDK is installed. Apache Tomcat will not work if only the Java Runtime Environment (JRE) is installed.
Install JDK version 1.4x on your computer. Apache Tomcat will not work with the Java Runtime Environment (JRE) only. You can download JDK 1.4 from [Link]. Install Apache Tomcat version 4.1. Attunity Studio will only work with this version of Tomcat. Download Apache Tomcat from [Link]; it is best to use the zip file. Extract the contents of the zip file to a folder on your main root drive, such as Program Files.
Copy the [Link] file to the webapps folder of the Tomcat root folder, for example, C:\Program Files\Apache\tomcat\webapps. This file should be supplied with your Attunity Studio installation. If not, you can download it from the Apache site. Run [Link] from the tomcat\bin folder. Depending on where you installed Apache Tomcat, the actual path will look something like C:\Program Files\Apache Software Foundation\Tomcat 4.1\apache-tomcat-4.1.34-LE-jdk14\bin. Test that Tomcat is running: open a browser and enter the URL http://localhost:8080. If you changed your Tomcat default port to another number, for example, 80, enter that number instead of 8080.
Provide the connection information for the Apache AXIS servlet. Apache AXIS is an implementation of SOAP (Simple Object Access Protocol). Define the Web service and select the interactions to include in the service.
To deploy an adapter as a Web service:
1. Open Attunity Studio: from the Windows Start menu, select Programs, then select Attunity, and then click Attunity Studio.
2. Select the Design perspective.
3. In the Configuration manager, expand the machine with the adapter that you want to set as a Web service.
4. Expand the Bindings folder and then expand the binding with the adapter.
5. Right-click the adapter you want to deploy as a Web service, point to Web Services, and then select Deploy. The Web Service Deployment wizard opens. This wizard has the following steps:
Connection Information for the Axis Servlet Define a new Web Service for an Adapter Select the Interactions Summary Window
Host: Enter the name of the server hosting the axis servlet. Enter the port information:
Port: The port used by the axis servlet. If your Apache Tomcat Web server is defined as port 8080, select the Use default port check box. Axis path: The path to the Axis servlet. If you copied the servlet into the Apache Tomcat webapps folder, you can use the default setting.
Enter the username and password to access the axis servlet. If you are using an anonymous connection, select the Anonymous connection check box.
Click Next. The Define a new Web Service for an Adapter step is displayed.
Name: Enter a name for the Web service. Each Web service must have a unique name that has only letters, digits and underscores. To reuse a name, you must undeploy the Web service with this name before assigning the name to the new Web service. See Undeploying Web Services.
Description: Enter a general description of this Web service. This is optional. Namespace: Enter a name for the namespace, if you want it to have a unique name. Select the Namespace enabled check box if you want to enable and use this namespace. If you do not enter a namespace, the namespace is created from the host name and Web service name.
Enter the username and password for users who can access the adapter at runtime. If you are using an anonymous connection, select the Anonymous connection check box. Click Advanced Options if you want to use any of the advanced options. The Advanced Options settings has the following tabs:
Timeout (sec): The Web server timeout in seconds. Firewall protocol: Select one of the following: None: To use no protocol. FixedNat: Select this if the machine you are working with has a fixed Network Address Translation (meaning that it is always connected to the same external IP address when accessing the Internet).
Trace enabled: Select this to enable tracing the communication between AIS and the Web service. If enabled, specify a path and name for the trace file. Encryption: Select this if you are using encryption. Enter the information for the encryption used.
When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The Pooling Tab or The Map Tab to enter information there.
Maximum active connections: The maximum number of connections that can be active at the same time. Maximum idle connections: The maximum number of connections that can be available in the pool. Behavior when a connection is not available: Select one of the following to indicate what behavior you want if a connection is not available: Fail: The interaction fails Grow: Create a new connection Block: Block the interaction
Time between idle connection examination runs (msec): The time (in milliseconds) to wait before checking if any connections are idle. Minimum idle time before removed from pool (msec): The minimum amount of time (in milliseconds) a connection can be idle before being closed.
When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Map Tab to enter information there.
New connection parameters are entered as a key. For example, to open a new server for each operation, create a new key called newserver and enter 1 in the Value column.

To enter a new key:
1. Right-click in the Key column and select Add.
2. Type in a name for the key.
3. Click in the Value column in the same row as the key and enter a value.

To delete a key, right-click the key you want to delete and select Delete. When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Pooling Tab to enter information there.
You can add interactions to include in the Web service, and also remove them from the Web service. To add or remove Web services, select the Web service and click on one of the buttons described in the following table.
Button Description Select an interaction from the left column and click this button to move it into the right column and include it in the Web service. Click this button to move all interactions from the left column to the right column and include all of the interactions in the Web service. Select an interaction from the right column and click this button to move it into the left column and remove it from the Web service. Click this button to move all interactions from the right column to the left column and remove all of the interactions from the Web service.
When you are finished selecting interactions to include in the Web service, click Next. A Summary Window showing the Web service settings is displayed.
Summary Window
The summary window shows you the configurations that you entered in each step of the wizard. Look over the information in this window to be sure it is correct. Click Back to return to any previous step and edit the information, if necessary.
When you are sure all the information is correct, click Finish to deploy the Web service.
Expand the Bindings folder and then expand the binding with the adapter. Right-click the adapter you want to undeploy as a Web service and find Web services, then select Undeploy. The Web Service Undeployment wizard opens.
4. Enter the Connection Information for the Axis Servlet.
5. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions associated with it.
6. Select the Web services you want to remove.
7. Click Finish to undeploy the selected Web services.
Expand the Bindings folder and then expand the binding with the adapter.
3. Right-click the adapter with the Web services and interactions you want to view, point to Web services, and then select List. The Web Service Undeployment wizard opens.
4. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions associated with it. This is the same wizard that is displayed when Undeploying Web Services; however, here you can only view the Web services.
The log file location. The level of error messages written to the log. The order of the error messages in the log.
Changes to the log file are set in [Link], which is located in WebserverRoot\webapps\axis\WEB-INF\classes, where WebserverRoot is where the Web server is installed, for example: C:\Program Files\Apache Group\Tomcat 4.1.
Enter the error type as shown in the example below: [Link]=error_type, LOGFILE
Part II
Attunity Connect
This part contains the following topics:
Introduction to Attunity Connect Attunity Integration Suite Architecture Flows Implementing a Data Access Solution Setting up Data Sources and Events with Attunity Studio Implementing an Application Access Solution Setting Up Adapters Application Adapter Definition
10
Introduction to Attunity Connect
This section contains the following topics:
Logical Architecture
The engines provide comprehensive transaction support to the extent allowed by the source's support for two-phase commit functions. AIS installations include the following engines:
Data Engine
The data engine accesses, updates, and joins enterprise information from data sources as if they were all relational databases. At the same time, it takes advantage of its query optimizer to determine the fastest way to carry out these tasks, minimizing the load on IT resources, networks, and systems.

Because the data engine uses a relational model, it normalizes the data, converting hierarchical structures into tables without redundant data. By combining the relational model with the SQL language, the data engine allows applications to issue the same complex query to multiple data sources without tuning it to each target source. The relational approach also simplifies access via commercial tools and applications that interoperate with relational sources. Clients can use industry-standard JDBC, ODBC, ADO/OLE DB and .NET interfaces to submit SQL requests to the data engine. By using either the Database or Query application adapter, you can also use JCA, XML or COM as the client interface.

When the data engine receives and parses an SQL request, it first determines which data source is involved, where the data resides, and how the source handles data. The data engine determines how to carry out the process based on metadata that it retrieves from a local cache, from the repository, or dynamically from the backend data sources. Then, the data engine generates a query execution plan in the form of a tree. Whenever possible the data engine passes the entire request to the underlying data source. In this case, the engine translates between standard ANSI SQL 92 and the underlying database's SQL dialect. The data engine can also accept pass-through queries to nonstandard SQL functions supported by the target source. If a data source offers limited SQL capabilities, the data engine implements missing functions as needed.
If the data source offers no SQL capabilities at all, the data engine breaks the request into simple retrieval operations that an indexed or sequential table can read.
Query Optimizer
The data engine includes the query optimizer, which minimizes execution time and resource consumption. The query optimizer enhances the data engine's initial query execution plan based on the query structure, network structures, the target data sources' capabilities and locations, and the statistical information available for each table. To maximize the efficiency of query execution, the query optimizer uses various caching and access techniques, including read-ahead, parallelism, and lookup, hash, and semi-joins. It flattens views, breaks out and propagates simple predicates down the tree, reorders joins, directs join strategies, selects indexes, and performs other related tasks. If the target data source is distributed across multiple machines, the data engine and query optimizer together generate a distributed execution plan that minimizes network traffic and round-trips.

Performance Tuning Tools

Database administrators can review and control the optimization strategies that the optimizer uses. Using the query analyzer, IT staff can monitor accumulated statistics and heuristic information to evaluate the success of the
optimization strategies. These tools enable users to evaluate and understand the way specific queries work by specifying hints, flags, optimization goals (first-row or all-rows optimization), and other query properties, such as requests for scrollable or updateable cursors.
Data Sources
Native data source drivers utilize the native API of each data source. Attunity Connect tailors these drivers to the individual data model and performance characteristics of the particular data source. These source-specific drivers share common logic. These drivers deal with metadata, describing the information offered by the data sources and mapping the underlying data model and functionality into relational or ISAM (Index Sequential Access Methods) models. Drivers for nonrelational data sources support sequential and indexed access, array structures and hierarchical structures, and other functions without normalization or other time-consuming operations. Drivers fall into one of the following classes:
RDBMS drivers to access data providers that support some dialect of SQL Non-relational drivers to access data providers that do not support SQL File system drivers to access files Procedure drivers to access program functionality
The functionality within each class can be factored further. For RDBMS drivers, Attunity Connect needs to be aware of the syntactic and semantic nuances of the SQL dialect for each specific data source, of the library functions supported by the data source, and other details. Similarly, a file system may or may not support functionality such as indexes, BLOBs and embedded rowsets, or be capable of efficiently executing single-table predicates (filters). For relational data sources not supported by Attunity Connect through a custom driver, Attunity Connect provides generic ODBC and OLE DB gateways. A syntax file ([Link]) encapsulates backend peculiarities and facilitates the connection of the new data source to Attunity Connect using one of these generic drivers. The following data sources are supported:
Table 10-1 Supported Data Sources

Relational: DB2, Informix, ODBC, OLEDB-SQL (Relational), Oracle, Rdb (OpenVMS only), SQL/MP (HP NonStop only), SQL Server (Windows only), Sybase
Non-relational: Adabas, CISAM, Enscribe (HP NonStop only), IMS/DB (z/OS only), RMS (OpenVMS only), VSAM under CICS and VSAM (z/OS only)
File System: Flat Files, OLEDB-FS (Flat File System), Text-Delimited
Procedures: Procedure, IMS/TM, Natural, CICS Procedure (Application Connector), CICS Procedure
In addition, procedure drivers are provided to enable access to program functionality. These procedures include a generic procedure driver and specific drivers for CICS, IMS/TM, and Natural programs on a z/OS platform.
JDBC Client Interface: A pure-Java Type-3 driver that supports J2EE JDBC (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.

ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.

ADO Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all available data sources. The OLE DB/ADO interface is available on the Microsoft Windows platforms.

.NET Client Interface: [Link] is the data-access component of the [Link] Framework. AIS supports all [Link] objects, methods, and properties, as well as additional .NET Data Provider classes.
Transaction Support
To ensure the integrity of simultaneous updates to multiple data sources, Attunity Connect supports distributed transactions in the following ways:
As a distributed transaction manager, for safe, reliable multi-server transactions.
As an OLE Transactions resource manager, enabling all AIS-enabled data sources to participate in distributed Microsoft MTS transactions using the OLE Transactions protocol.
As an XA resource manager.
Using the JDBC 2.x interface, so that all AIS-enabled data sources can participate in distributed J2EE transactions under a J2EE application server.
A transaction log file backs up two-phase-commit (2PC) operations so that transactions can be recovered in case of a failure. Generally, the ability to support distributed transactions depends on the capabilities of the data sources that participate in the transaction. Attunity Connect cannot guarantee data integrity for AIS-enabled data sources that do not support the 2PC protocol. However, Attunity Connect does employ various optimizations to extend the coverage of 2PC support. For example, Attunity Connect can execute a 2PC process as long as it involves a maximum of one 1PC data source while all the other data sources support 2PC.
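The mixed 1PC/2PC optimization described above can be pictured as a "last participant" commit: every 2PC resource is prepared first, and then the outcome of the single 1PC resource decides the global result. The following Python sketch illustrates the idea under that assumption; the class and method names are invented for illustration and are not Attunity APIs.

```python
class TwoPCResource:
    """A participant that supports the full two-phase-commit protocol."""
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self):
        # Phase 1: the resource promises it can commit.
        self.state = "prepared"
        return True

    def commit(self):
        # Phase 2: make the prepared work permanent.
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"


class OnePCResource:
    """A participant that can only do a single-phase commit."""
    def __init__(self, name, will_succeed=True):
        self.name = name
        self.will_succeed = will_succeed
        self.state = "active"

    def commit(self):
        self.state = "committed" if self.will_succeed else "rolled back"
        return self.will_succeed


def commit_mixed(two_pc_resources, one_pc_resource):
    """Commit a transaction with any number of 2PC participants and
    at most one 1PC participant (the 'last participant' optimization)."""
    # Phase 1: prepare every 2PC participant.
    if not all(r.prepare() for r in two_pc_resources):
        for r in two_pc_resources:
            r.rollback()
        return False
    # The lone 1PC participant commits next; its result decides the outcome.
    if one_pc_resource.commit():
        for r in two_pc_resources:
            r.commit()
        return True
    for r in two_pc_resources:
        r.rollback()
    return False
```

If the 1PC resource fails, the prepared 2PC resources can still be rolled back safely, which is why at most one 1PC participant can be tolerated.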
Application Engine
The application engine provides an enterprise application integration (EAI) model that enables any kind of application to interact with applications and data sources via their own native interfaces. On the client end, applications use industry-standard interfaces to communicate with the application engine. In turn, the application engine communicates with specific adapters that access enterprise and application data on the server end. An XML-based schema supports precise application mapping. As a result, the application engine opens legacy applications of all types to integration with each other and with cutting-edge technologies such as Java and XML. Because the application engine uses an EAI model, interactions with the source application or data are precise and predictable. Requesting applications can specify exactly how interactions occur. Moreover, the application engine maps data structures faithfully, facilitating access to familiar applications within new environments. The EAI model is particularly suitable for deployment over the Internet using protocols such as TCP/IP and HTTP because it allows for both stateful and stateless interactions and can batch requests to minimize network traffic. A distinctive feature of the application engine is its ability to translate application structures, such as those typical of legacy COBOL applications, to and from XML. Web and other solutions can use the application engine to interact with both applications and data sources through a growing number of XML-based tools, as well as other application-oriented frameworks such as Sun's J2EE JCA (J2EE Connectors Architecture) and Microsoft .NET's SOAP-based interfaces.
Application Adapters
With the AIS generic application adapter, developers can incorporate applications into Attunity Connect solutions. They can wrap and leverage existing application-specific business logic, protect data integrity by writing to a data source through its original application, and trigger operational tasks, all with a single adapter that runs consistently across diverse platforms. The following application adapters are provided:
CICS (on z/OS platforms)
COM applications (on Windows platforms)
Database and Query adapters, for access to any supported data source via XML, JCA, COM, or .NET
IMS/TM (on z/OS platforms)
Legacy applications
Pathways (on HP NonStop Himalaya platforms)
Tuxedo (on Windows and UNIX platforms)
XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both readable by people and programmatically easy to use. This protocol is exceptionally well tailored for web-based, internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all the platforms where AIS runs. On other
platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.
JCA Client Interface: The JCA (J2EE Connectors Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.

COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.

.NET Client Interface: A .NET component, called NetACX, that enables access to application adapters from any .NET-based application.
Events
AIS handles events via an event queue. The event queue is defined as an adapter in Attunity Studio, with its interactions defined as events. The event queue itself is managed by a dedicated server process, which is set up by defining an events workspace in the daemon definition.
Attunity Server
Attunity Server is the server component that includes all the software required to enable AIS to run. AIS has a native installation wizard on each of the supported platforms, simplifying deployment. It takes no special skills to install AIS, so IT staff can set up the AIS infrastructure by applying only their platform- and application-specific expertise.
Attunity Studio
Attunity Studio is the configuration tool for AIS. Configuration using Attunity Studio is performed on a Windows platform. The configuration information is stored in the AIS repository on the backend system.
Design Time
Attunity Studio is used during design time to perform the following configuration tasks:
Set up access to machines running AIS
Configure the daemon that is responsible for managing communication between AIS machines
Configure metadata for both data sources and adapters. For all relational data sources and some non-relational data sources, Attunity Connect uses the native metadata. Otherwise, the metadata is specified to Attunity. See Working with Metadata in Attunity Studio for more information.
Runtime
Attunity Studio is used during runtime to perform the following management task:
Modify the configuration settings
Manage daemons and workspaces during runtime (see Runtime Management with Attunity Studio)
Metadata Repository
AIS supports an XML-based schema. The schema and the AIS configuration are stored on the server file system and constitute the repository. There is a single main repository for each installation of AIS. The repository maintains server-wide definitions (such as the daemon configuration) and application adapter definitions. There is also a repository for each data source that uses Attunity metadata. These repositories are optimized for fast runtime access and are not restricted by native operating system file naming conventions.
System Repository
The system repository is used for information that is general to AIS, such as:
Binding information, including the names of configured backend adapters and drivers and environment settings.
Daemon definitions, to control client-server communication.
User profiles, enabling single sign-on to multiple backend applications and data sources.
Information used directly by the Query Processor.
An adapter definition for each adapter defined. For each adapter, this includes a list of its interactions and the input and output structures that are used by these interactions.
Attunity metadata for non-relational data sources, files and Attunity Connect procedures, extensions of the native metadata of the data source (the extended metadata feature), and a snapshot of native metadata for better performance (the local copy feature).
Synonym definitions for some of the data sources.
Daemons
A daemon, called IRPCD, runs on every machine running AIS and manages communication between machines running AIS. The daemon is responsible for allocating server processes to clients. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace and provides clients with the required servers. When a client requests a connection, the daemon allocates a server process (or where applicable, a thread) to handle this connection, and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further
communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends and the process is either killed or remains available for use by another client. The daemon supports multiple server configurations called workspaces. Each workspace defines accessible data sources, applications, environment settings, security requirements, and server allocation rules. The allocation of servers by the daemon is based on the workspace that the client uses to access the data source. Thus, a client can access a data source via one workspace, where a server process is allocated from an existing pool of servers, or the client can access a data source via a different workspace, where a new server process is allocated for each client request. A fail-safe mechanism allows the specification of alternate daemons, which function as a standby for high availability.
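As an illustration of the allocation rules described above, the following Python sketch models two workspaces: one that serves clients from a pool of pre-started server processes, and one that starts a dedicated server per client request. All names are invented; this is a behavioral sketch, not the IRPCD implementation.

```python
import itertools


class Workspace:
    """Toy model of a daemon workspace's server-allocation rule."""
    _ids = itertools.count(1)

    def __init__(self, name, use_pool=False, pool_size=0):
        self.name = name
        self.use_pool = use_pool
        # Pre-started server processes available for reuse.
        self.pool = [self._new_server() for _ in range(pool_size)]

    def _new_server(self):
        return "server-%d" % next(Workspace._ids)

    def allocate(self):
        if self.use_pool and self.pool:
            return self.pool.pop()      # reuse a pre-started process
        return self._new_server()       # start a new (dedicated) process

    def release(self, server):
        if self.use_pool:
            self.pool.append(server)    # keep the process for the next client
        # otherwise the dedicated process would simply be killed
```

A client connecting through the pooled workspace gets an already-started server back; one connecting through the dedicated workspace always triggers a new process.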
Each workspace definition also specifies:

Processing mode, specifying multi-threaded/single-threaded operations, the number of processes, and server pools.
Security settings for impersonation, authorized users, administrators, and encryption.
AIS also supports multi-version interoperability. IT teams can simultaneously install multiple versions of AIS on all supported operating systems. As a result, organizations can begin to deploy new software versions in a staged update process, without interrupting the operations of previous versions. At the provider end of the system, servers act as agents that access, read, manipulate, and write to data sources and applications. Servers accept commands from clients, call the corresponding local functions, and package and return the results to the clients.
Note:
In the event that a target data source actually resides on another machine, the source is represented by an agent on the server using a third-party communications component such as Oracle Net Servers or Sybase CT-Lib, which is transparent to Attunity Connect.
Server software, which includes the application and data engines, drivers, and adapters, resides on every AIS machine. As described above, the daemon allocates a server process (new or already started) for each client connection, and further communication between the client session and the server process is direct. This server model is very flexible and accommodates different operating systems and data source requirements. AIS supports several server models:
The multi-threaded model is effective when the data sources support multi-threading (on Windows platforms only).
Serialized multi-client server processes are useful for short requests and for data sources that allow more than one simultaneous client per process.
The single-client-per-process model supports data sources that can only handle one client per process, and maximizes client isolation.
Server processes can be reused. Various server process pooling options allow organizations to tune the solution for different application and load requirements.
11 Attunity Integration Suite Architecture Flows
This section contains the following topics:
Overview
Data Source Architecture
Application Adapter
Database and Query Adapter
Change Data Capture (CDC) Flow
Overview
This section provides architectural drawings that show the basic flows used when you deploy AIS. AIS provides solutions for integrating data between data sources and through applications. In addition, you can work with change data capture (CDC). The basic AIS flows are:
Integration using a relational data source
Integration using a non-relational data source (file system)
Integration using generic database or query adapters
Application-based integration
Change Data Capture
Data Source
The following figure shows the flow architecture for integrating data with relational data sources.
Figure 11-1 Relational Data Source Flow
This figure shows the flow for data integration from the client machine to the server machine. The client side shows the APIs used for Java and other applications. The following table describes the parts of the relational data flow. For more information on working with data access, see Implementing a Data Access Solution.
Table 11-1 Relational Data Flow Parts

Client Platform

Java-based applications:
Consumer Application in Java: An application written in Java. This is a Java application being run by the user on the client machine.
JDBC API: The Java programming interface that allows external access to SQL queries.
NAV API: The standard programming interface for AIS.

Non-Java applications:
Consumer Application: Any application not written in Java. This is an application being run by the user on the client machine.
Data API: An interface that supports data and allows SQL queries.
NAV API: The standard programming interface for AIS.

Query Engine and Optimizer: The query engine parses the SQL and creates a statement that is read by multiple data sources, if necessary. The query optimizer refines the SQL statement so that it is most efficient and takes the least amount of time to send and receive the results. The way that a statement is handled by AIS depends on the data source types that are queried. AIS queries the data for relational and non-relational data sources differently. For a more detailed explanation of this part of the data flow, see Query Engine Flow.

Server Platform

IRPCD (Daemon): This is the process that manages communication between all the machines in the flow.
Service Loop: The service loop is the following: get request, handle request, revise request, and continually repeat the process.
Driver: The connecting driver for your data source. Attunity supplies many data source drivers. For more information, see the Data Source Reference.
Driver API: The Attunity data driver API. For more information, see the Attunity Connect Developer SDK.
Custom Driver: A driver created for a specific data source that is not one of the drivers supplied with AIS.
Database and Database API: The API for the database on the backend and the actual database used in this flow. The code on this level belongs to the database.
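As a rough sketch of the service loop described in the table (not the actual server implementation), the server repeatedly takes a request off a queue, handles it, and records the reply:

```python
from collections import deque


def service_loop(requests, handle):
    """Drain a queue of client requests: get a request, handle it,
    and collect the reply; repeat until the queue is empty."""
    queue = deque(requests)
    replies = []
    while queue:                         # a real server blocks for the next request
        request = queue.popleft()        # get request
        replies.append(handle(request))  # handle request and keep the reply
    return replies
```

For example, `service_loop(["select 1"], str.upper)` handles each request with the supplied handler and returns the replies in order.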
Making a Request between Two Relational Databases
Making a Request between Two Non-Relational Databases
Making a Request between a Relational Data Source and a Non-Relational Data Source
Making a Request between a Relational Data Source and a Non-Relational Data Source
The following figure shows a tree that traces an SQL query on a DISAM (non-relational) data source and an Oracle (relational) data source. In this case, the SQL statement requests a left outer join (LOJ) of the DISAM file and the Oracle table. The query engine splits the query into two branches. The first branch looks up the information in the DISAM file. The second branch uses the information in the SQL statement to carry out a filtering action and then looks up the information in the table, using the filtering rules to find the correct columns.
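The two-branch plan described above can be approximated in a few lines: the right branch filters its rows before matching, and unmatched left rows are padded with nulls, as a left outer join requires. The sample data and column names below are invented for illustration.

```python
def left_outer_join(left_rows, right_rows, key, keep):
    """Left outer join with the statement's filter pushed to the right branch."""
    filtered = [r for r in right_rows if keep(r)]       # filtering branch
    by_key = {}
    for r in filtered:                                  # index the filtered rows
        by_key.setdefault(r[key], []).append(r)
    extra_cols = {c for r in right_rows for c in r if c != key}
    joined = []
    for l in left_rows:
        # Unmatched left rows are kept, padded with nulls (None).
        for r in by_key.get(l[key], [dict.fromkeys(extra_cols)]):
            joined.append({**l, **{c: r.get(c) for c in extra_cols}})
    return joined
```

Applying the filter before the join (rather than after) is the kind of rewrite the query optimizer performs so that fewer rows are matched.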
Application Adapter
The following figure shows the flow architecture for integrating data between applications.
Figure 11-4 Application Flow
The following table describes the parts of the application flow. The client side is always a thin client and contains the application. The server side contains the main AIS installation and the necessary adapters. For more information on using application access, see Implementing an Application Access Solution.
Table 11-2 Application Flow Parts

Client Platform

Consumer Application: This is an application being run by the user on the client machine. The application can be 3GL, JCA, COM, or XML.
ACX Gateway: A gateway based on the ACX protocol. For more information on the ACX protocol, see the Attunity Connect Developer SDK.
ACX Client: A client based on the ACX protocol. This is always a thin client.

Server Platform

IRPCD (Daemon): This is the process that manages communication between all the machines in the flow.
Service Loop: The service loop is the following: get request, handle request, revise request, and continually repeat the process.
ACX Dispatcher: The ACX-based program that sends XML queries to the application.
Adapter: The adapter that is used to provide access to your application.
GAP API: The Attunity Application API. For more information, see the Attunity Connect Developer SDK.
Custom Adapter: An adapter created for a specific application that is not one of the adapters supplied with AIS.
Application and Application API: The actual application used in this flow and its API. The code on this level belongs to the application.
The following table describes the parts of the Change Data Capture flow. A change data capture captures changes to databases and writes them to a log file. For more information, see Implementing a Change Data Capture Solution.
Change Data Capture Flow Parts

The data source where the changes are made.
The user application that makes the changes to the data source.
The tool that identifies and captures the changes from the data source.
A data source that saves raw change records (the change source).
An Attunity Stream component that produces Attunity Stream change records (change stream).
A change provider that retrieves raw change records (raw change stream) from the change source and produces Attunity Stream change records (change stream).
A change provider that provides value-added services on a change stream (the Change Warehouse). This element is optional. The Change Warehouse produces a change stream.
An Attunity component that provides change stream records to a change consumer.
A third-party tool or application that consumes change stream records. Examples include ETL and EAI tools and messaging systems.
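The stages above can be sketched as a small pipeline: a change provider (agent) turns raw change records into a change stream, an optional value-added stage (standing in for the Change Warehouse, here a simple per-table filter) refines it, and a consumer reads the result. All record fields and names below are invented for the sketch.

```python
raw_changes = [
    {"seq": 1, "table": "ORDERS", "op": "INSERT"},
    {"seq": 2, "table": "ITEMS",  "op": "UPDATE"},
    {"seq": 3, "table": "ORDERS", "op": "DELETE"},
]

def agent(raw_stream):
    """Change provider: turn raw change records into change-stream records."""
    for rec in raw_stream:
        yield {**rec, "source": "demo"}

def change_warehouse(stream, table):
    """Optional value-added stage: keep only one table's changes."""
    return (rec for rec in stream if rec["table"] == table)

# A change consumer (for example, an ETL tool) reads the resulting stream.
consumed = list(change_warehouse(agent(raw_changes), "ORDERS"))
```

Because each stage is a generator, records flow through one at a time, mirroring the stream-oriented design of the CDC flow.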
The figure above shows how the query and database adapters handle a request. The flow has two parts. The top (above the dotted line) is the client side. The client side has the application that is making a request to the adapter. The application is based on the
ACX protocol. For information on the ACX protocol, see the Attunity Connect Developer SDK. The request is sent to the adapter on the server. If you are using a database adapter, the request is phrased in a standard XML format, which is converted to an SQL query. If you are using a query adapter, the SQL is entered directly.
Note:
The database adapter is not part of the flow if you use the Query adapter.
The SQL query is sent to the query processor, which sends it to the data source. The data source returns the results of the query, which are sent back to the application. In the figure above, the left side shows the input requests and the right side shows the output results for each phase of the flow.
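As a toy illustration of the conversion the database adapter performs, the sketch below turns a simple XML request into the equivalent SQL statement. The XML element and attribute names are invented; the real request format is defined by the ACX protocol in the Attunity Connect Developer SDK.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML request, not the actual ACX format.
request = """<select table="customer">
  <column name="c_name"/>
  <column name="c_acctbal"/>
  <where column="c_custkey" value="13"/>
</select>"""

def to_sql(xml_request):
    """Translate the toy XML request into an SQL statement."""
    root = ET.fromstring(xml_request)
    cols = ", ".join(c.get("name") for c in root.findall("column"))
    sql = "SELECT %s FROM %s" % (cols, root.get("table"))
    cond = root.find("where")
    if cond is not None:                 # the filter clause is optional
        sql += " WHERE %s = '%s'" % (cond.get("column"), cond.get("value"))
    return sql
```

With the query adapter, this translation step is skipped and the SQL is supplied directly.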
12 Implementing a Data Access Solution
This section includes the following topics:
Overview
Setting Up AIS for Data Access
Installing AIS
Configuring the System for Data Access (Using Studio)
Supported Interfaces
Data Access Flow
Data Source Metadata
Overview
A data access solution uses data source drivers to let you connect to a supported data source from JDBC or ODBC applications on all platforms. Attunity data sources support ADO and .NET applications on Windows platforms. The following data source drivers are provided:
Table 12-1 Data Source Drivers

Relational:
DB2 Data Source
Informix Data Source
Ingres II (Open Ingres) Data Source
Oracle Data Source
Oracle RDB Data Source (OpenVMS Only)
SQL/MP Data Source (HP NonStop Only)
SQL Server Data Source (Windows Only)
Sybase Data Source
ODBC Data Source
OLEDB-SQL (Relational) Data Source

Non-relational:
Adabas C Data Source
Generic Flat File Data Source
CISAM/DISAM Data Source
DBMS Data Source (OpenVMS Only)
Enscribe Data Source (HP NonStop Only)
IMS/DB Data Sources
RMS Data Source (OpenVMS Only)
VSAM Data Source (z/OS) under CICS and VSAM (z/OS only)
OLEDB-FS (Flat File System) Data Source
Text Delimited File Data Source
If you are working with a data source for which Attunity does not supply a driver, Attunity supplies an SDK to allow you to develop a driver for your data source. For details, see the Attunity Developer SDK reference.
To set up AIS for data access:

1. Install AIS on the backend. The backend machines are the machines where the databases are located. You also install the data source drivers here.
2. Install the Attunity server software.
3. Install Attunity Studio.
4. In Attunity Studio, configure the machines where you installed AIS and the data sources.
5. Configure user profiles.
6. Set up and configure the binding for the data source.
7. Configure the data sources in the binding.
8. Set up the data source metadata.
Installing AIS
Before you begin the process, you must install the necessary components: the AIS client and server, the data source drivers, the Attunity server software, and Attunity Studio. This section shows where to install the AIS components necessary for a data access solution.
Supported Interfaces
AIS data sources provide universal connectivity through many standard interfaces. In addition, Attunity provides generic drivers for various interfaces, such as ODBC and JDBC. Attunity's universal connectivity uses standard ANSI 92 SQL to query any supported data source, whether relational or non-relational. Specific versions of SQL for relational data sources can be used by making the SQL query directly to the data source. The following interfaces are supported by AIS:
JDBC Client Interface: A pure-Java Type-3 driver that supports J2EE JDBC (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.

ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.

ODBC Client Interface Under CICS (z/OS Only): The ODBC interface on a mainframe is accessed through either a COBOL or C program running under CICS. On the z/OS machine you do not need to run the daemon if the daemon is running on the target machine.

OLE DB (ADO) Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all supported data sources. The OLE DB/ADO interface is available on the Microsoft Windows platforms.
Note:
The Windows Attunity Server kit is required to use the ADO client interface.
.NET Client Interface: [Link] is the data-access component of the Microsoft .NET Framework. AIS supports all [Link] objects, methods, and properties, as well as [Link] Data Provider classes for specific use with AIS.
<?xml version="1.0" encoding="UTF-8"?> <table name="customer" datasource="___NAVDEMO" description="" fileName="C:\Program Files\Attunity\Server\demo\customer" nBlocks="0" nRows="0" bookmarkSize="4" organization="index"> <fields> <field name="c_custkey" datatype="int4"/> <field name="c_name" datatype="cstring" size="25"/>
<field name="c_address" datatype="cstring" size="40"/> <field name="c_nationkey" datatype="int4"/> <field name="c_phone" datatype="string" size="15"/> <field name="c_acctbal" datatype="double"/> <field name="c_mktsegment" datatype="string" size="10"/> <field name="c_comment" datatype="cstring" size="117" nullable="true"/> </fields> <keys> <key name="cindex" size="4"> <segments> <segment name="c_custkey"/> </segments> </key> </keys> </table>
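Metadata documents like the one above can be inspected with any XML tool. For example, the following Python snippet uses the standard xml.etree module on an abbreviated copy of the definition to list the fields and the key segments:

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the table metadata shown above.
metadata = """<table name="customer" organization="index">
  <fields>
    <field name="c_custkey" datatype="int4"/>
    <field name="c_name" datatype="cstring" size="25"/>
    <field name="c_comment" datatype="cstring" size="117" nullable="true"/>
  </fields>
  <keys>
    <key name="cindex" size="4">
      <segments><segment name="c_custkey"/></segments>
    </key>
  </keys>
</table>"""

table = ET.fromstring(metadata)
# Every field element, in document order, with its data type.
fields = [(f.get("name"), f.get("datatype")) for f in table.iter("field")]
# The columns that make up the index key.
key_segments = [s.get("name") for s in table.iter("segment")]
```

Here `fields` reflects the column definitions and `key_segments` shows that the `cindex` key is built on `c_custkey`.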
Adabas C Data Source
Enscribe Data Source (HP NonStop Only)
VSAM Data Source (z/OS)
CISAM/DISAM Data Source
1. In the Binding editor, click the Properties tab (at the bottom of the editor).
2. In the Properties tab, expand Misc.
3. Find the exposeXMLField property, click in the Value column, and select true.
4. Save and refresh the binding.
See also: Environment Properties
<update> <sql>update adabas:table1 set xml=? where ISN='13'</sql> <inputParameter> <TABLE1 INT="400"> <PE1 FLOAT4="23.67" FIXED2="1500" PACKED_DEC="898900" character="XYZ"/> <PE1 FLOAT4="24.67" FIXED2="600" PACKED_DEC="898900"/> <PACKED_DEC_ARRAY>409</PACKED_DEC_ARRAY> <PACKED_DEC_ARRAY>299</PACKED_DEC_ARRAY> </TABLE1> </inputParameter> </update>
Notes:
The Attunity XML Utility lets you enter the input and receive the record.
The recommended method for executing an XML operation is to make the select operation (for example, a select Update operation), then update the output XML, and then set the operation.
If subfields are defined in the metadata with one of the XML operations, the subfields must match their original field, or the subfield will have an invalid offset.
13 Setting up Data Sources and Events with Attunity Studio
This section contains the following topics:
Data Sources
Data sources are added to bindings in Attunity Studio. A data source driver is used to connect directly to various data sources: from JDBC and ODBC applications on all platforms, and from ADO and [Link] applications on Windows. You use data sources when Implementing a Data Access Solution in Attunity Connect. The following sections describe how to set up data sources in your system.
1. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to add the data source.
2. Open the Bindings folder for the machine where you want to add the data source.
3. Expand the binding with the data source you are working with.
4. Right-click the Data source folder and select New Data Source. The New Data Source wizard opens.
6. Name: Enter a name to identify the data source.
7. Type: Select the data source type that you want to use from the list. The available data sources are described in the adapter reference section.
8. Enter the information requested in this step. The information required depends on the type of data source selected. See the Data Source Reference for more information on how to configure each data source.
Note:
The above procedure is also used to add a procedure data source. To add a procedure, select a procedure driver as your data source driver. For more information, see Configuring the Procedure Data Source.
To configure a data source:

1. Right-click the data source you are working with and select Open. The Data Source configuration opens in the editor.
2. Click the Configuration tab.
3. Make any changes to the data source configuration as required. This tab can have three sections:
Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard.

Authentication: This section lets you edit the authentication information for data sources that require it. The following data sources have the Authentication section: Adabas (ADD), DB2 (all types), Ingres, ODBC, OLEDB (all types), Oracle, Oracle RDB, SQL Server, Sybase, and VSAM (CICS).
Properties: This section lets you change the default values for the configuration properties for your data source.
For information on how to enter the configuration information, see the information for your data source in the Data Source Reference.
The advanced properties let you:

Define the transaction type
Edit the syntax name
Provide a table owner
Determine if a data source is updateable or readable
Provide repository information
Set the virtual view policy
To configure data source advanced properties:

1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields:
Table 13-1 Data Source Advanced Configuration
Transaction type: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.

Syntax name: A section name in the [Link] file that describes SQL syntax variations. The default syntax file contains the following predefined sections:

OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
ODBC driver and EXCEL data: syntaxName="excel_data"
ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case-sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.

Table owner: The name of the table owner that is used if an owner is not indicated in the SQL.
Read/Write information: Select one of the following:

Updateable data: Select this if you want to be able to update the data on the data source.
Read only data: Select this to allow users to only view the data on the data source.
Repository Directory Repository directory Repository name Enter the location for the data source repository. Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required instead of using the default SYS data.
Virtual View Policy

Field: Generate sequential view
Description: Select this to map a non-relational file to a single table.

Field: Generate virtual views
Description: Select this if you want to have an individual table created for every array in the non-relational file.

Field: Include row number column
Description: Select this to include a column that specifies the row number in the virtual or sequential view.

Field: All parent columns
Description: Select this for virtual views to include all the columns in the parent record.

Each of these parameters is valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.
Select the active workspace from the drop-down list. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.
"fat" version of ODBC Client Interface (when the server installation is used as a client).1 To create a data source shortcut Drag-and-drop a data source from the Machine where the data source is defined to the machine where you want the shortcut. Or do the following:
1. In the Design Perspective Configuration view, expand the machine where the data source shortcut should be defined.
2. Expand the Bindings folder and then expand the binding where you want to add the data source.
3. Right-click Data Sources and select New Data Source Shortcut. The New Data Source Shortcut wizard opens.
4. Select the machine where the target data is defined:
- If the machine is defined in Attunity Studio, select Machine from Configuration view and select the machine with the target data from the drop-down list.
- If the machine is defined in the binding as a remote machine (for more information, see Setting up Machines), select Machine defined in current binding remote machines list and select the machine from the list.
- If you also want to add the machine where the data source resides to the list of machines in the Configuration view, select New machine. After you click Next, the Add Machine screen opens where you can add the machine (see Setting up Machines).
Note: A thin ODBC client is available that does not require a data source shortcut defined on the client machine.

5. Click Next. The Add runtime machine access information page opens.
6. Review the following information and change it if necessary:
- Physical Address: The physical address of the machine with the data source. You cannot edit this field.
- Alias in binding: The alias given to the machine for the binding.
- Port: The machine's port number.
- Workspace: Select the workspace that is used to access the data source.
7. Click Next. The Add runtime security information page opens.
8. Add the following information about the machine security in this screen:
- User name: Enter the user name for a user that has rights to access this data source on this machine.
- Password: Enter the user's password.
- Confirm password: Re-enter the user's password to confirm it.
- Encryption protocol: Select the encryption protocol (if any) used for this connection. For information on encryption in AIS, see Encrypting Network Communications.
- Firewall protocol: Enter the firewall protocol (if any) used for the machine with the data source. For information on using a firewall in AIS, see Accessing a Server through a Firewall.

Note: This user and password information is written to the user profile associated with the binding (the user profile with the same name as the binding). For details about user profiles, see Managing a User Profile in Attunity Studio.

9. Click Next. The list of available data sources opens. Select the data source for which you are creating a shortcut and click Finish.
10. Select the Alias in binding check box and provide an alias for the name if the data source name is used as a data source in the binding on the client machine. The data source shortcut is displayed in the Configuration view.
1. The active workspace is displayed in a read-only field. The test procedure pings the server with the selected workspace when the test is executed.
2. Click Next. The next screen in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.
Events
Attunity Studio uses event queues to handle events. The event queue is defined as an adapter in Attunity Studio where interactions are expected as events.
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine folder and then expand the machine where you want to add the event.
3. Expand the Bindings folder and then the binding where you want to add the event.
4. Right-click the Events folder and select New event. The New Event editor opens.
5. Enter a name for the event queue.
Notes:
- The name must be less than 32 characters.
- You cannot use the word event in the event name.
6. Select the type of queue that you are using to handle events:
- Event Queue: Handles events using a regular event queue adapter.
- Tuxedo Queue: Handles events with a Tuxedo Queue adapter.
- CICS Queue: Handles events with a CICS Queue adapter (for z/OS systems only).
7. Click Finish. A message appears asking if you want to change the daemon configuration. Click OK to accept the changes made in this procedure, or Cancel to close without confirming the changes. A new editor opens in the Editor section of the Studio workbench. The name of the editor is the name of the binding where you added the event queue.
After you finish adding the event queue to the Studio, follow these steps to change the event queue's adapter properties.

To edit the event queue's adapter properties
1. In the Design perspective Configuration view, expand the Machine folder and then expand the machine with the event, expand the Events folder, then right-click the event and select Open. The adapter configuration properties open in the editor.
2. Edit the following properties as needed:
- addExecuteTimestamp: Do not change the setting for this property.
- remoteExecuteCompress: Do not change the setting for this property.
- remoteExecuteUrl: Do not change the setting for this property.
- routers: A list of users who can receive an event. If the owner of the target is not one of these users, the event is not routed to the user. To add routers, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this router.
- senders: A list of users who can send an event. If the owner of the event is not one of these users, the event is ignored. To add senders, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this sender.
3. Expand the event queue adapter, then right-click Imports and select New Import. The Metadata Import screen opens in the editor.
4. Enter a name for the import. The name can contain letters, numbers, and the underscore character. Then select one of the following import types from the drop-down list:
- Event Queue Import Manager Using COBOL Copybook Files
- Event Queue Import Manager Using Tuxedo VIEW/FML Files
5. Click Finish. The Metadata Import wizard opens.
6. Click Add in the import wizard to add COBOL copybooks or BEA Tuxedo VIEW/FML files. The Add Resource window opens, which lets you select files from the local machine or get them from an FTP site. If the files are on another machine, you can add the FTP site.
Right-click My FTP Sites and select Add. In the Add FTP site screen, enter the server name where the COBOL copybooks reside. If you are not using anonymous access, enter a valid username and password. You can then browse and transfer the files required to generate the metadata. The machine is accessed using the username as the root directory (high-level qualifier on z/OS systems).
Note: After you access a machine, you can right-click the machine and select Change Root Directory to change the high-level qualifier.
7. From the Add Resource screen, select the files to transfer and click Finish.
Note: You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. Each COBOL copybook must use the same format. Therefore, if you try to combine a COBOL copybook that uses the first six columns with a COBOL copybook that ignores the first six columns, you must repeat the import.
8. Click Next. The Apply Filters editor opens. Use this editor to apply filters, if necessary.
Note: You can use filters only with COBOL. If you are using Tuxedo VIEW/FML files, go to the next step.
- COMP_6 switch: The Micro Focus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The Micro Focus integer storage mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested columns: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Indicates whether to consider case sensitivity.
- Find: Searches for the entered value.
- Replace with: Replaces the value entered in the Find field with another value.
9. Click Next.
10. Click Add to add events. You can change the default name that is specified for the event. The event mode is async-send, which means that the event queue waits for an input. Specify the input record used by the program associated with the event from the drop-down list in the Input column. This list is generated from the input files specified at the beginning of the procedure. Select a relevant record for the event.
11. Add as many events as necessary and click Next.
12. Click Next twice to open the Import Metadata screen. In this screen you generate the metadata. You can import the metadata to the mainframe machine or leave the generated metadata on the Attunity Studio machine, to be imported later.
Figure 13-8 Import Metadata
13. In the Deploy Metadata section, select Yes if you want to transfer the metadata to the mainframe machine.
Note: After you import the metadata, you can view the metadata in the Metadata tab. You can also make any adjustments to the metadata and maintain it, as necessary. For more information, see Working with Metadata in Attunity Studio.
14
Procedure Data Sources
This section includes the following topics:
- Adding Procedure Data Sources: The procedure needs to be listed in a binding configuration. The definition in the binding includes properties, set in the procedure editor Properties tab. Each procedure has its own set of specific properties.
- Defining a Shortcut to a Procedure on Another Machine: A procedure driver requires a metadata definition that describes the inputs and outputs for the procedure.
- Using the NAV_UTIL EDIT command.
- Using the NAV_UTIL UPD_DS command.
1. In the Design perspective Configuration view, expand the machine where you want to add the procedure data source.
2. Expand the Bindings folder, and then expand the required binding configuration.
3. Right-click Data sources and select New Data source.
4. Enter a name for the procedure in the Name field.
5. Select the procedure type from the Type list.
6. Click Next.
7. Enter the connect string to access the procedure. For information on how to enter the connect string, see the information for the procedure data source that you are using in the Procedure Data Source Reference.
8. Click Finish.
9. The procedure data source is displayed in the Configuration view and the procedure editor is displayed.

You can set or change the properties for procedures in the following ways:
- Using the Attunity Studio Design perspective Configuration view.
- Using the sp_config_datasource stored procedure, as follows:

nav_proc:sp_config_datasource(ds_name, <config attribute="value"/>)
You change an environment property by executing a statement with the following format:
call nav_proc:sp_config_environment('proc_name','<config att="value"/>')
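As a concrete sketch of this call format: here MYPROC and the configuration attribute shown are illustrative placeholders only, not real names; take the actual attribute names for your data source from the Procedure Data Source Reference.

```sql
call nav_proc:sp_config_environment('MYPROC', '<config someAttribute="newValue"/>')
```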
Where proc_name is the name of the procedure in the binding configuration.

To edit a procedure data source definition
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine with the procedure data source you are using.
3. Expand the Binding folder and then expand the binding with your procedure data source.
4. Expand the Data sources folder.
5. Right-click your procedure data source and select Open, then click the Configuration tab. This tab should be open by default.
6. Make changes to any of the sections in this tab, if necessary. This tab can have any of the following sections:
- Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard. The Natural/CICS Procedure Data Source (z/OS) and the CICS Procedure Data Source require configuration in this section.
- Authentication: This section lets you edit the authentication information for data sources that require it. The Procedure Data Source (Application Connector) requires configuration in the Authentication section.
- Properties: This section lets you change the default values for the configuration properties for your data source. All of the procedure data sources contain properties to be configured. For information about the properties for each data source, find the data source in the Procedure Data Source Reference.
7. Click Save.
Define the Transaction type
Edit the syntax name
Provide a table owner
Determine if a data source is updateable or readable
Provide repository information
Set the virtual view policy
To configure data source advanced properties
1. Open Attunity Studio.
2. In the Design Perspective Configuration view, expand the Machine folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields:
Table 14-2 Data Source Advanced Configuration

Field: Transaction type
Description: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.
Table 14-2 (Cont.) Data Source Advanced Configuration

Field: Syntax name
Description: A section name in the [Link] file that describes SQL syntax variations. The default syntax file contains the following predefined sections:
- OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
- OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
- Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
- ODBC driver and EXCEL data: syntaxName="excel_data"
- ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
- ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
- Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX"
For case-sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.
Field: Table owner
Description: The name of the table owner that is used if an owner is not indicated in the SQL.

Field: Read/Write information
Description: Select one of the following:
- Updateable data: Select this if you want to be able to update the data on the data source.
- Read only data: Select this to allow users only to view the data on the data source.
Field: Repository directory
Description: Enter the location for the data source repository.

Field: Repository name
Description: Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required, instead of using the default SYS data.
Virtual View Policy

Field: Generate sequential view
Description: Select this to map a non-relational file to a single table.

Field: Generate virtual views
Description: Select this to generate virtual views for arrays when importing metadata.

Field: Include row number column
Description: Select this to include a column that specifies the row number in the virtual or sequential view.

Field: All parent columns
Description: Select this for virtual views to include all the columns in the parent record.
Note: A thin ODBC client is available that does not require a data source shortcut defined on the client machine.
To create a procedure shortcut
You can drag a data source from the machine where the data source is defined to the machine where you want the shortcut, or do the following:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to create the shortcut.
3. Expand the Bindings folder and the binding where you want to add the procedure shortcut.
4. Right-click the Data sources folder and select New Data Source Shortcut. The New Data Source Shortcut wizard opens.
5. Select the machine where the target procedure is defined:
- If that machine is defined in Attunity Studio, select Machine from Configuration, and then select the machine from the list. Click Next to open the Add runtime machine access information page.
- If the machine was defined in the binding as a remote machine, select Machine defined in current binding remote machines list, and then select the machine from the drop-down list. Click Next to open the Add runtime machine access information page.
- If you also want to add the machine where the procedure resides to the list of machines in the tree, select New machine. Click Next; the Add Machine dialog opens so that you can add the machine as described in Setting up Machines. After you add the new machine information, click Next to open the Add runtime machine access information page.
6. In the Add runtime machine access information page, you can change the following information, if necessary:
- Alias in binding: Change the name of the shortcut.
- Port: Change the port where the original data source is located.
- Workspace: Change the workspace for the original data source.
7. Click Next. The Add runtime security information page is displayed. Add security information, if necessary (user name, password, encryption protocol, and firewall protocol).
Note: This information is written to the user profile associated with the binding. For more information, see Managing a User Profile in Attunity Studio.
8. Click Next. The list of available data sources, including procedures, is displayed. Select the required procedure, and click Finish.
Note: If the procedure name is already used as a procedure or data source in the binding on the client machine, select the Alias in binding check box and provide an alias for the name.
15
Implementing an Application Access Solution
This section contains the following:
Overview Setting up AIS for Application Access Supported APIs Application Access Flow Defining the Application Adapter ACX Protocol Transaction Support Generic and Custom Adapters
Overview
An application access solution lets you connect directly to 3GL applications. In this context, an application is any enterprise software component where interactions occur within a transaction context. Applications include:
- Internally developed applications
- Enterprise packaged products, such as ERP or CRM products
- Database systems, such as relational databases or file systems
AIS uses the Attunity Applications Adapter Framework (AAF) to accomplish this. In this framework, an application adapter that directly connects to an application is used instead of a data source. The following application adapters are currently available.
Table 15-1 Application Adapters

Adapter Name: CICS Application Adapter (z/OS Only)
Application: A program via a CICS EXCI transaction

Adapter Name: COM Adapter (Windows Only)
Application: Simple COM-based applications

Adapter Name: IMS/TM Adapter (z/OS Only)
Application: A program via an IMS/TM transaction

Adapter Name: Legacy Plug Application Adapter
Application: Any legacy application

Adapter Name: Pathway Application Adapter (HP NonStop Only)
Application: A program via a Pathway transaction

Adapter Name: Tuxedo Application Adapter (UNIX and Windows Only)
Application: A program via a Tuxedo service
Note: If the application is not directly supported by one of these adapters, Attunity Connect includes an SDK that enables you to write an adapter for the specific application. For details, see the Attunity Connect Developer SDK.
Configure the system for application access. Make the configurations in Attunity Studio. For application access, select an adapter to use for your integration. See Configuring the System for Application Access (Using Studio) for a detailed explanation of what to configure.
- AIS on the backend: The backend of the system is where your data is stored. In an application access solution, you access an application, such as CICS, directly to get the necessary data. This is where your application adapter resides. You must install the full (thick) version of AIS on the backend. For information on how to install AIS, see the Installation Guide for the platform you are working with.
- AIS for the Application (Thin Client): The application (XML, JCA, COM or .NET) that you are working with is installed with an AIS thin client. For information on how to install the AIS thin client, see the installation guide for the platform you are working on.
- Attunity Studio: Attunity Studio can be installed on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide.
- Machines: You must configure the machines used in the system. Make sure to configure the machine where your backend application and application adapter reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.
- User Profiles: You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.
- Application adapters: This is what makes this solution the application access solution. To set up your adapter, you must:
  - Prepare to use the adapter, as described in Setting Up Adapters.
  - Create the application adapter interactions, as described in Configuring Application Adapters.
  - Make sure that your adapter is responding correctly by carrying out the procedures in Testing Application Adapters.
  - Define the adapter metadata. Adapter metadata defines the interactions for the application adapter and the schema of any input and output records used by the interactions. The metadata is stored as an application adapter definition in the SYS repository, on the machine where the adapter is defined. See Working with Application Adapter Metadata.
  All metadata is imported from relevant input files (such as COBOL copybooks) and maintained using the Attunity Studio Design perspective Metadata tab. For more information on setting up application adapters, see Setting Up Adapters and the Adapters Reference section for the adapter you are using.
Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.
Supported APIs
AIS application adapters connect to supported applications using the JCA, XML, COM, and .NET APIs (COM and .NET on Windows platforms only).
- The XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both human-readable and programmatically easy to use. It works well for web-based, internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all supported platforms. On other platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.
- The JCA Client Interface: The JCA (J2EE Connector Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.
- The COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.
- The .NET Client Interface: A .NET component, called NetACX, that enables access from any .NET-based application.
application, such as the CICS adapter. You can also create an adapter to work with an application developed internally. All adapters must have an adapter or metadata definition. The definition is made up of a list of interactions and a schema. The user must enter the location for an XML schema that includes the metadata for the application. The schema contains all of the record structures without the interactions. The following figure shows an example of an adapter definition, that is, the adapter's metadata in XML format.
Figure 15-2 Example of an Adapter Definition
You define the adapter in Attunity Studio. Studio provides a graphical interface that allows you to define and import metadata definitions for adapters. For more information on how to define adapters in Attunity Studio, see Setting Up Adapters.
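Because the figure showing the definition does not reproduce here, the following sketch illustrates the general shape of an adapter definition: an adapter element, interaction elements that name their input and output records, and a schema element holding those records. The adapter name, interaction names, and fields below are hypothetical, invented for illustration, and are not taken from a real definition.

```xml
<adapter name="MyAdapter" description="Sample adapter definition (hypothetical)">
  <!-- One interaction, naming its input and output records -->
  <interaction name="getCustomer"
               description="Returns a customer record"
               input="getCustomerRequest"
               output="customerRecord"/>
  <!-- The schema lists every record used by the interactions -->
  <schema name="MyAdapter">
    <record name="getCustomerRequest">
      <field name="customerId" type="string"/>
    </record>
    <record name="customerRecord">
      <field name="customerId" type="string"/>
      <field name="balance" type="int"/>
    </record>
  </schema>
</adapter>
```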
ACX Protocol
ACX (Attunity Connect XML Protocol) is Attunity's internal XML protocol. ACX is modeled after JCA (Sun Java's connector protocol). ACX is the wire that connects the applications and APIs with the target application metadata definitions. ACX executes its tasks over connections between applications. There are two types of connections:
- Transient connection: Transient connections are used with a single ACX request. A transient connection closes when an ACX request ends, or when the connection context changes (when a new verb is used to send a new request).
- Persistent connection: Persistent connections are used for an ongoing dialog with the back-end adapter or for pooling scenarios. Persistent connections are closed manually or when timed out. Persistent ACX connections are logical and do not rely on an active network connection (such as a connected socket) to stay active. This lets you use disconnected protocols such as HTTP as transports for ACX.
The daemon keeps track of persistent connections with a unique connection ID. If you need to send a new ACX request at a later time, you can use a previous persistent connection by using its connection ID.
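The connection-ID bookkeeping described above can be pictured with a short sketch. This is an illustrative model of the daemon's behavior only, not AIS code; the class and method names are invented for the example.

```python
import itertools
import time


class ConnectionRegistry:
    """Models how a daemon tracks persistent logical connections by ID."""

    def __init__(self, idle_timeout=600.0):
        self._ids = itertools.count(1)
        self._conns = {}                  # connection ID -> last-used timestamp
        self.idle_timeout = idle_timeout  # seconds a connection may stay idle

    def open_persistent(self):
        # Assign a unique connection ID; the client stores it for later requests.
        cid = next(self._ids)
        self._conns[cid] = time.monotonic()
        return cid

    def reuse(self, cid):
        # A later request presents the ID; refresh its last-used time.
        if cid not in self._conns:
            raise KeyError("unknown or timed-out connection")
        self._conns[cid] = time.monotonic()

    def expire_idle(self):
        # Close connections that have been idle longer than the timeout.
        now = time.monotonic()
        for cid, last in list(self._conns.items()):
            if now - last > self.idle_timeout:
                del self._conns[cid]
```

Because the connection is logical, a client can go away and come back later (for example, over HTTP) and resume the dialog simply by presenting the same ID, provided the connection has not timed out in the meantime.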
Note: The AIS XML Utility lets you make connections between applications using the ACX protocol. For more information, see Using the Attunity XML Utility. For information on using ACX, see the Adapters Reference and the XML Client Interface chapter.
Transaction Support
Application adapters may support a two-phase commit capability to the extent that transactions are implemented in the application. A two-phase commit lets a database return to its pre-transaction state if an error occurs. This type of commit is used when more than one database or resource is updated in a single transaction. In a two-phase commit, either all or none of the databases are committed. In a transaction using a two-phase commit, all changes are stored temporarily by each database. A transaction monitor issues a pre-commit command that requires an answer from each database. When all databases answer with a positive response, the final commit is issued. Applications with two-phase commit support: Attunity Connect supports the PrepareCommit and Recover API calls. Applications that support two-phase commit can participate fully in a distributed transaction. The following adapters support two-phase commit: the CICS Application Adapter (z/OS Only) and the IMS/TM Adapter (z/OS Only).
To work with two-phase commit with either of these application adapters, RRS (Transaction Management and Recoverable Resource Manager Services) must be installed. See Transaction Support.
See the reference for the specific adapter for any two-phase commit considerations.
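The sequence described above (temporary storage of changes, a pre-commit vote from every database, then the final commit) can be modeled in a short sketch. This is an illustration of the two-phase commit protocol in general, not AIS code; the names are invented for the example.

```python
class Resource:
    """One participating database or resource manager."""

    def __init__(self, name):
        self.name = name
        self.pending = None    # change stored temporarily during phase 1
        self.committed = []    # durable state

    def prepare(self, change):
        # Phase 1 (pre-commit): store the change temporarily and vote.
        self.pending = change
        return True            # positive response; a failure would vote False

    def commit(self):
        # Phase 2: make the pending change durable.
        self.committed.append(self.pending)
        self.pending = None

    def rollback(self):
        # Return to the pre-transaction state.
        self.pending = None


def two_phase_commit(resources, change):
    """Transaction-monitor logic: commit everywhere or nowhere."""
    votes = [r.prepare(change) for r in resources]  # pre-commit round
    if all(votes):
        for r in resources:
            r.commit()                              # final commit
        return True
    for r in resources:
        r.rollback()                                # any negative vote aborts all
    return False
```

A single negative vote in phase 1 rolls back every participant, which is what guarantees the all-or-nothing property described above.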
interactions. Examples of generic adapters include CICS Application Adapter (z/OS Only), IMS/TM Adapter (z/OS Only), Pathway Application Adapter (HP NonStop Only), and Legacy Plug Application Adapter.
- It receives a string and an integer parameter.
- It returns a string value of up to 512 characters and an integer value.

If the above is true, the following figure shows the schema that is generated for this application.
Figure 15-3 Sample Schema for MyApp
The following is an example of the generated code for the MyApp application:
Figure 15-4 Code Example for MyApp
For more information on writing application adapters with the GAP SDK, see the Attunity Connect Developer SDK.
16
Setting Up Adapters
This section contains the following topics:
1. Open Attunity Studio.
2. In the Design Perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the adapter. If you need to add a machine to your Attunity Studio setup, see Setting up Machines.
3. Expand the Bindings folder.
4. Expand the binding where you want to add the adapter.
5. Right-click the Adapters folder and select New Adapter. The New Adapter wizard opens.
6. Enter the following information:
- Name: Enter a name to identify the adapter.
- Type: Select the adapter type that you want to use from the list. The available adapters are described in the adapter reference section.
- Create event queue for the adapter: Select this check box if you want to associate an event queue with this adapter. For information on event queues, see Events.
Note: You cannot use the word event as part of the adapter name.
7. Click Finish.
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the Bindings folder, and then expand the binding with the adapter you are working with.
4. Expand the Adapter folder.
5. Right-click the adapter that you want to work with and select Open. The adapter configuration editor opens, displaying the properties for the adapter.
6. Configure the adapter parameters as required. The configuration properties displayed depend on the type of adapter you are working with. For an explanation of these properties, see the Adapters Reference or the Non-Application Adapters Reference to find the documentation for your adapter.
1. Select the active workspace from the drop-down list. The test procedure pings the server with the selected workspace when the test is executed.
2. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.
17
Application Adapter Definition
This section includes the following topics:
Overview The adapter Element The interaction Element The schema Element The enumeration Element The record Element The variant record Element The field Element
Overview
An application adapter definition includes the following information:
Interactions List
This part lists the interactions offered by the adapter. Information items include the interaction name, its description, and its input and output record names.
The attributes of the adapter element define simple adapter properties (see below). The interaction elements under the adapter describe particular interactions. The schema element under the adapter provides the schema of all the records used within the adapter.
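The nesting described above can be sketched as a minimal adapter definition. This skeleton is illustrative only: the element and attribute names for adapter, interaction, record, and field follow the examples later in this chapter, but all values (and the assumption that the schema element carries a name attribute) are hypothetical placeholders.

```xml
<!-- Hypothetical skeleton showing element nesting only; all values are placeholders -->
<adapter name="sampleAdapter" description="Illustrative adapter definition">
  <interaction name="doWork" mode="sync-send-receive"
               input="workInput" output="workOutput" />
  <schema name="sampleAdapter">
    <record name="workInput">
      <field name="id" type="int" />
    </record>
    <record name="workOutput">
      <field name="status" type="string" nativeType="string" length="32" />
    </record>
  </schema>
</adapter>
```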
Table 17-1  adapter Element Attributes

authenticationMechanism (enum; default: basic-password)
  The type of authentication implemented by the adapter, as follows:
  none: The adapter does not handle authentication.
  basic-password: The adapter implements basic username-password authentication.
  Note: Kerberos authentication will be supported in future releases.

connect (string)
  Adapter-specific connect string.

connectionPoolingSize (int)
  The number of connections that can be held in the connections pool simultaneously.

description (string)
  A description of the adapter.

maxActiveConnections (int)
  The maximum number of simultaneous connections an adapter may take (per process).

maxIdleTimeout (int; default: 600)
  The maximum time, in seconds, that an active connection can stay idle. After that time, the connection is soft-closed and placed in the connections pool or simply destroyed (depending on the pooling settings).

maxRequestSize (int)
  The maximum size, in bytes, that an XML ACX request or reply may span. Larger messages are rejected with an error.

name (string)
  The name of the adapter definition. This name is normally the name of the adapter specified in the binding configuration. If this name differs from the name in the binding configuration, the binding entry must include a definition element set to the name specified here.

operatingSystem (string)
  The operating system the application adapter runs under.

poolingTimeout (int; default: 120)
  The maximum amount of time, in seconds, that a connection is kept in the connections pool before it is destroyed.

schemaName (string)
  The name of the schema used to define the adapter.

transactionLevelSupport (enum)
  The level of transaction support. The options are:
  0PC: No transaction support
  1PC: Simple (single-phase) transactions
  2PC: Distributed (two-phase) transactions
<adapter name="calc"
         description="Attunity Connect Calc Schema"
         transactionLevelSupport="0PC"
         authenticationMechanism="basic-password"
         maxActiveConnections="0"
         maxIdleTimeout="600"
         maxRequestSize="32000">
Table 17-2  interaction Element Attributes

mode (enum)
  The interaction mode. Available modes are:
  sync-send-receive: The interaction sends a request and expects to receive a response.
  sync-send: The interaction sends a request and does not expect to receive a response.
  sync-receive: The interaction expects to receive a response.
  async-send-receive: The interaction sends a request and expects to receive a response that is divorced from the current interaction.
  async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.

name (string)
  The name of the interaction.

output (string)
  The name of the output record structure.
<interaction name="add" description="Add 2 numbers"
             mode="sync-send-receive" input="binput" output="output" />
<interaction name="display" description="Display msg in output stream"
             mode="sync-send-receive" input="inpmsg" output="outmsg" />
noAlignment, name, ObjectRef, ParamCount, program (z/OS only), transaction (z/OS only), transid (z/OS only)
Example 17-4

<record name="binput">
  <field name="p1" type="int" />
  <field name="p2" type="int" />
</record>
<record name="output">
  <field name="result" type="int" />
</record>
<record name="inpmsg">
  <field name="m" type="string" nativeType="string" length="512" />
</record>
<record name="outmsg">
  <field name="m" type="string" nativeType="string" length="512" />
</record>
Defining Hierarchies
The variant record element can be used to define a hierarchical structure. The hierarchical definition includes a record definition for the child. The parent record includes a field whose type is the name of the record that defines the child.
Example 17-5  Variant Record
<record name="parent">
  <field name="f1" type="child" />
  <field name="f2" type="int" />
</record>
<record name="child">
  <field name="c1" type="int" />
  <field name="c2" type="string" nativeType="string" length="20" />
  <field name="c3" type="string" nativeType="string" length="20" />
</record>
The XML used to access the adapter must use the same structure as specified in the interaction definition.
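For illustration, consider the calc adapter's add interaction and its binput and output records shown earlier. The payload fragments below are a hedged sketch: the element-style encoding of field values is an assumption, and the surrounding ACX request and response envelope is not shown.

```xml
<!-- Hypothetical input payload for the "add" interaction -->
<binput>
  <p1>3</p1>
  <p2>4</p2>
</binput>

<!-- A successful reply would carry the output record -->
<output>
  <result>7</result>
</output>
```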
This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variant:

Different nuances of the same data.
Different usage of the same physical area in the buffer.
<record name="VAR1">
  <field name="VAR_0" type="VAR1__VAR_0" />
</record>
<variant name="VAR1__VAR_0">
  <field name="UNNAMED_CASE_1" type="VAR1__VAR_0__UNNAMED_CASE_1" />
  <field name="PARTCD" type="VAR1__VAR_0__PARTCD" />
</variant>
<record name="VAR1__VAR_0__UNNAMED_CASE_1">
  <field name="PARTNUM" type="string" nativeType="string" size="10" />
</record>
<record name="VAR1__VAR_0__PARTCD">
  <field name="DEPTCODE" type="string" nativeType="string" size="2" />
  <field name="SUPPLYCODE" type="string" nativeType="string" size="3" />
  <field name="PARTCODE" type="string" nativeType="string" size="5" />
</record>
<record name="ORDER">
  <field name="RECTYPE" type="string" nativeType="string" size="1" />
  <field name="VAR_1" type="ORDER__VAR_1" />
</record>
<variant name="ORDER__VAR_1" selector="RECTYPE">
  <field name="ORDER_HEADER" type="ORDER__VAR_1__ORDER_HEADER" case="H" />
  <field name="ORDER_DETAILS" type="ORDER__VAR_1__ORDER_DETAILS" case="D" />
</variant>
<record name="ORDER__VAR_1__ORDER_HEADER">
  <field name="ORDER_DATE" type="string" nativeType="numstr_u" size="8" />
  <field name="CUST_ID" type="string" nativeType="numstr_u" size="9" />
</record>
<record name="ORDER__VAR_1__ORDER_DETAILS">
  <field name="PART_NO" type="string" nativeType="numstr_u" size="9" />
  <field name="QUANTITY" type="int" nativeType="uint4" size="4" />
</record>
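To illustrate how the selector works, here is a hypothetical instance of the ORDER record above. The element-style encoding of field values is an assumption (the exact runtime representation depends on the adapter); the point is that because RECTYPE is "H", only the ORDER_HEADER case of the variant is populated.

```xml
<!-- Hypothetical ORDER instance; RECTYPE="H" selects the ORDER_HEADER case -->
<ORDER>
  <RECTYPE>H</RECTYPE>
  <VAR_1>
    <ORDER_HEADER>
      <ORDER_DATE>20080301</ORDER_DATE>
      <CUST_ID>000012345</CUST_ID>
    </ORDER_HEADER>
  </VAR_1>
</ORDER>
```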
Table 17-5  field Element Attributes

COMtype (enum)
  (Used with the COM adapter) Specifies the field's data type as recognized by COM, using explicit COM enumeration values. For details, see COM Adapter (Windows Only).

counter (int)
  Runtime value holding the actual number of occurrences in the array. A field can be specified as a counter field.

default (string)
  The default value for the field. The default value for an integer is zero (0) and for a string NULL. Specifying a default value means that the field can be omitted from the input XML. Note: When using the database adapter, if a field is not nullable and a default value is not supplied, an error occurs.

filter
  Filtering of extraneous, unwanted metadata. This attribute is for internal use only.

length (int)
  The size of the field, including a null terminator when the data type supports null termination (such as the cstring data type).

mechanism (string)
  (Used with the LegacyPlug adapter) The method by which the field is passed or received by the procedure (either byValue or byReference). When a parameter is used for both input and output, the mechanism must be the same for both the input and the output. For outer-level (non-nested) parameters, structure parameters (for the structure itself, not structure members), and variant parameters, the default value is byReference.

name (string)
  The name of the field.

nativeType (string)
  The Attunity Connect data type for the field. Refer to Managing Metadata for a list of all supported data types. Note: When the type value is string, the nativeType value must also be specified as string.

offset (int)
  An absolute offset for the field in a record.

paramNum (int)
  (Used with the LegacyPlug adapter) The procedure argument number. 0 indicates that the value is a return value; 1 indicates the first argument, and so on. If paramNum is specified at the record level, it cannot be specified for any of the record members (at the field level).

precision (int)
  The float data type precision. Used in conjunction with scale (see below).

private (boolean)
  The value is hidden in the response.

reference (boolean)
  Used with array (see above) to identify a pointer.

required (boolean)
  A value is mandatory for this field.

scale (int)
  The float data type scale. Used in conjunction with precision (see above).

size (int)
  The size of the field.

type (string)
  The data type of the field. The following values are valid: binary, boolean, byte, date, double, enum, float, int, long, numeric[(p[,s])], short, string (when the type value is string, the nativeType value must also be specified as string), time, timestamp.

usage (string)
  (Used with the COM adapter) Explains what the COM adapter does with this field:
  InstanceTag: Names an object instance.
  Property: The field is handled as a property.
  Parameter: The field value is passed as a parameter to or from a method.
  RetVal: The field holds a method's return value.

value (string)
Example 17-8  field Element
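A representative field definition, combining several of the attributes from the table above, might look like the following sketch. All names and values here are hypothetical, chosen only to illustrate the attribute combinations (required, default, precision/scale).

```xml
<!-- Hypothetical field definitions illustrating the attributes above -->
<record name="customer">
  <field name="custId" type="int" required="true" />
  <field name="custName" type="string" nativeType="string" size="30"
         default="UNKNOWN" />
  <field name="balance" type="float" precision="9" scale="2" />
</record>
```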
Part III
Attunity Stream
This part contains the following topics:
What is the Attunity Stream CDC Solution
Implementing a Change Data Capture Solution
SQL-Based CDC Methodologies
Creating a CDC with the Solution Perspective
18
What is the Attunity Stream CDC Solution
This section includes the following topics:
CDC Solution Overview
The Attunity Stream CDC Architecture
What Can Be Captured?
Capture changes to data in real-time or near-real-time for applications that demand zero latency and require the most up-to-date data. Real-time data capture guarantees that a change event is immediately available at the consumer. Near-real-time data capture involves a configurable delay before a change is available at the consumer. Using a near-real-time configuration, when there is significant capture activity, events are reported immediately. However, after the system has been idle, it may take a few seconds for the events to start flowing again.
Enable consumers of changed data to receive changes quickly, either by requesting the changes at high frequency (such as every few seconds), or by sending them the changes as soon as they are identified. The consumer application periodically requests changes, each time receiving a batch of records that represents all the changes captured since the last request cycle. Change delivery requests can be made at low or high frequency, for example, every 15 seconds or a set number of times a day. The extracted changes are exposed so that the consumer application can seamlessly access the change records using standard interfaces such as ODBC and JDBC, or XML and JCA.
Using the Attunity Stream CDC solution enables ETL (extract, transform, and load) processes to run without bringing the site computer systems down. CDC enables the
movement of only changes to data while the operational systems are running, without the need for a downtime window. The CDC architecture consists of the following components:
CDC agents, which are located on the same computer as the changes to be captured. Each agent is customized for the specific data source on the specific platform. A CDC agent provides access to the journal (or logstream), to read the journal for specific table changes. The agent maintains the last position read in the journal (the stream position or context) and starts at this point the next time it polls the journal. The context is stored in the repository where the agent adapter definition is stored. The adapter definition includes a single primary interaction which is used to access the appropriate journal and includes the list of tables to monitor for changes.
Note:
In addition to this interaction, secondary interactions are defined during runtime, based on the table metadata of each table specified as part of the change data capture solution.
Transaction support depends on the agent. If transactions are not used, auto-commit is used.
A CDC staging area: Changes are stored in a staging area, enabling a single scan of a journal to extract the details of changes to more than one table and to also enable filtering of the changes. A committed change filter is currently available and a redundant change filter (hot-spot optimization) is planned. Both of these filters are described in Tracking Changes - Auditing. The staging area is described in more detail in